Google shares developer preview of TensorFlow Lite
One of the major announcements from I/O 2017 was TensorFlow Lite, created for machine learning on mobile devices. Developers were informed of the launch back in May. TensorFlow is already in use across many devices, from servers to IoT devices and varied platforms, but demand for adapting machine learning models has grown, increasing the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference for on-device machine learning models. With the aim of creating a lightweight machine learning solution for smartphones and embedded devices, the company has released TensorFlow Lite for both Android and iOS developers.
The emphasis is on delivering low-latency inference from machine learning models on less powerful devices, not on training models. In simple terms, TensorFlow Lite applies the existing capabilities of trained models to new data.
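To make the inference-versus-training distinction concrete, here is a toy sketch (plain Python, not the TensorFlow Lite API): the weights were learned elsewhere, and the device only applies them to new inputs.

```python
# Hypothetical weights of a tiny pre-trained linear classifier.
# On-device code like this never updates them; it only runs the forward pass.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = 0.1

def infer(features):
    """Apply the fixed, pre-trained model to new data; no training happens here."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return "positive" if score > 0 else "negative"

print(infer([1.0, 0.2, 0.5]))  # prints "positive"
```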
TensorFlow Lite ships with support for several pre-trained models:
- Inception v3, an image recognition model offering higher accuracy at a larger size.
- MobileNet, a family of vision models capable of identifying 1,000 different object classes, designed for mobile and embedded devices.
- Smart Reply, an on-device conversational model that enables one-touch replies to incoming chat messages.
Google has mentioned that in designing TF Lite it put much emphasis on keeping the product lightweight, so that it initializes quickly and models perform well across a range of mobile devices.
In redesigning TensorFlow Lite from scratch, the focus was on three areas:
- Cross-platform: the runtime is designed to run on many different platforms, including both Android and iOS.
- Lightweight: enables inference of on-device machine learning models with a small binary size and fast startup.
- Fast: models are optimized for mobile devices, with improved model loading times and support for hardware acceleration.
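As a rough sketch of the workflow these goals enable (using the `tf.lite` Python API, which has evolved considerably since the developer preview; the tiny fixed-weight model here is a stand-in for a real trained model such as MobileNet): convert a model to the TFLite flat-buffer format, then load it into the lightweight interpreter for low-latency inference.

```python
import numpy as np
import tensorflow as tf

# A stand-in for a trained model: fixed weights, inference only.
W = tf.constant([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]], tf.float32)

@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def model(x):
    return tf.matmul(x, W)

# Convert to the TFLite flat-buffer format (the small, fast-loading artifact).
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.get_concrete_function()])
tflite_model = converter.convert()

# On-device side: load the flat buffer into the lightweight interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run low-latency inference on a new input.
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])  # shape (1, 2)
```

On Android and iOS the same flat buffer is loaded through the platform-specific interpreter bindings rather than Python, but the convert-then-interpret flow is the same.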
Google also mentioned that the full release is yet to come, with more features to be added. At present, TF Lite supports models such as MobileNet, Inception v3, and Smart Reply. Google stated that, considering developers' needs, it has started with constrained platforms to ensure effective performance of the most important common models. It added that future functional expansion will be prioritized according to users' needs and demands, and that it will keep working to simplify the developer experience and model deployment for various mobile and embedded devices.