
How TensorFlow Lite Fits In The TinyML Ecosystem


TensorFlow Lite has emerged as a popular platform for running machine learning models at the edge. A microcontroller is a tiny, low-cost device designed to perform specific tasks within embedded systems.

In a workshop held as part of Google I/O, TensorFlow founding member Pete Warden delved deep into the potential use cases of TensorFlow Lite for microcontrollers.

Further, quoting the definition of TinyML from a blog, he said: 

“Tiny machine learning is capable of performing on-device sensor data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use cases and targeting battery-operated devices.”

A Venn diagram showing the composition of TinyML (Source: Google I/O)

How is TinyML different? 

Most machine learning applications are resource-intensive and expensive to deploy and maintain.

According to phData, $65K (INR 47 lakhs) is the bare minimum required to deploy and maintain a model over five years. If you build a scalable framework to support future modelling activities, the cost can escalate to $95K (INR 70 lakhs) over the same period.

TinyML, on the other hand, is flexible, simple and requires less power.

Each hardware component acts independently and operates in the milliwatt (mW) range and below, and most TinyML models barely exceed 30 KB in size. Also, data can be processed locally on the device, which reduces latency and addresses data privacy concerns. Arm, Arduino, SparkFun, Adafruit and Raspberry Pi are among the major players in TinyML.

TensorFlow Lite, an open-source library from Google, helps developers design and run tiny machine learning (TinyML) models across a wide range of low-power hardware devices and does not require much coding or machine learning expertise, said Warden.
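Although microcontroller deployments ultimately run the C++ TensorFlow Lite Micro runtime, a converted model can first be sanity-checked on a desktop with TensorFlow Lite's Python interpreter. The sketch below is illustrative, not from the talk; the file name micro_model.tflite is an assumption.

```python
# Minimal sketch: exercise a converted TinyML model with the TensorFlow Lite
# Python interpreter before deploying it to a microcontroller.
# Assumption: a converted model file named "micro_model.tflite" exists.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="micro_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```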

Benefits of TinyML:  

  • Really small form factors enable multiple use cases 
  • Cheaper devices make ML more accessible 
  • Low battery consumption means devices can run for much longer 
  • Data processing can be done on the device (no cloud connection required) 

How does TinyML work? 

The TinyML process works in four simple steps: collect data, design and train the model, quantise the model, and deploy it to the microcontroller.

Source: Google I/O
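The quantise step can be made concrete with post-training quantisation in TensorFlow 2.x. The tiny Keras model below is only a stand-in for whatever was designed and trained in the previous step; the layer sizes and file name are assumptions.

```python
# Illustrative sketch of the "quantise" step: post-training quantisation with
# the TFLiteConverter. The Keras model here is a stand-in for a real one.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),          # e.g. a 3-axis accelerometer sample
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantisation
tflite_model = converter.convert()

# Save the shrunken flatbuffer; this is the file that gets deployed.
with open("micro_model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantised model size: {len(tflite_model)} bytes")
```

With the default optimisation flag, weights are quantised to 8-bit integers, which is what brings typical TinyML models down to the tens-of-kilobytes range mentioned above.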

In a blog post, ‘TensorFlow Lite for Microcontrollers,’ Google has explained some of its latest projects that combine Arduino and TensorFlow to create useful tools: 

To initiate the project, you need the TF4Micro Motion Kit pre-installed on an Arduino board. Once you have installed the packages and libraries on your laptop or personal computer, look for the red, green and blue flashing LED in the middle of the board. The details of the setup can be found here.

Once the setup is complete, you need to connect the device via Bluetooth; the TF4Micro Motion Kit communicates with the experiment website via BLE, giving you a wireless experience. Now, tap the button on your Arduino, then wait for the red, green and blue LED pattern to return. After this, click the ‘connect’ button on the website and select TF4Micro Motion Kit from the dialog box. You are now good to go. Similar steps need to be followed for all three experiments: Air Snare, FUI and Tiny Motion Trainer.

Note: Do not hold the button down as this will clear the board. 
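The pre-built experiments handle flashing for you, but the final deploy step of the workflow above is normally done by embedding the quantised .tflite flatbuffer in the firmware as a C byte array (TensorFlow's documentation does this with xxd -i). The snippet below is a rough Python equivalent; the file and variable names are assumptions.

```python
# Illustrative sketch: convert a quantised .tflite file into a C header that
# can be compiled into an Arduino / TensorFlow Lite Micro sketch. This mimics
# the output of `xxd -i`; file and variable names are assumptions.

def tflite_to_c_array(tflite_path, header_path, var_name="g_model"):
    with open(tflite_path, "rb") as f:
        data = f.read()

    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk},")
    lines.append("};")
    lines.append(f"const unsigned int {var_name}_len = {len(data)};")

    with open(header_path, "w") as f:
        f.write("\n".join(lines) + "\n")


tflite_to_c_array("micro_model.tflite", "micro_model_data.h")
```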

These experiments will help you get the hang of TensorFlow Lite on microcontrollers. You can also submit your ideas to the TensorFlow Microcontroller Challenge and win exciting cash prizes.

As part of the TensorFlow Microcontroller Challenge, SparkFun is giving out a free TensorFlow Lite for Microcontrollers Kit. Click here to get yours.

PS: The story was written using a keyboard.

Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.