Machine Learning On The Edge
AI/ML | Edge Computing

The Rise of Machine Learning On The Edge: AI Everywhere


Since its inception, the Internet of Things (IoT) has been revolutionizing our world. Today we use countless microcontrollers and sensors for many different purposes, which has increased demand for machine learning on the edge. Applications range from connected cars, precision agriculture, personalized fitness and wearables, and smart homes, cities, and healthcare to predictive maintenance and many more.

Senior researchers at Microsoft identify the dominant paradigm in these applications as “the IoT device is dumb”: the only thing the device does is sense its environment and transmit the sensor readings back to the cloud, and the cloud is where all the decision-making happens.
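As a rough illustration of that pattern, here is a minimal sketch of a device loop that only samples a sensor and ships raw readings to a cloud endpoint; the endpoint URL, device ID, and read_sensor() helper are hypothetical placeholders rather than details from the article.

```python
# A minimal sketch of the traditional "dumb device" pattern: the device only
# senses and transmits raw readings; all decision-making happens in the cloud.
# The endpoint URL, device_id, and read_sensor() are hypothetical placeholders.
import random
import time

import requests

CLOUD_ENDPOINT = "https://example.com/api/telemetry"  # hypothetical ingestion API


def read_sensor() -> float:
    """Stand-in for a real sensor driver; returns a fake temperature reading."""
    return 20.0 + random.random() * 5.0


while True:
    payload = {
        "device_id": "sensor-001",
        "temperature_c": read_sensor(),
        "timestamp": time.time(),
    }
    # No local decision-making: the raw reading is simply sent upstream.
    requests.post(CLOUD_ENDPOINT, json=payload, timeout=5)
    time.sleep(10)
```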

Data generation is growing at an exponential rate.

According to IDC, the amount of data created worldwide will grow to 175 zettabytes by 2025. It is not surprising that more than 50% of this data will be generated by IoT devices. This is remarkable when you consider the insights, and consequently the value, that can be derived from it.

However, this is data that edge devices generate continuously. With traditional IoT approaches, all of it has to be transferred to the cloud or to local servers in order to train and run AI models.

Moving this huge volume of data from the edge to the cloud is not always desirable, and it raises concerns around latency, scalability, connectivity, privacy, and security.

Machine Learning On The Edge

You may wonder how tiny, resource-constrained IoT devices can run machine learning locally without connecting to the cloud. In fact, edge devices have become considerably more capable in terms of computing power.

This makes it attractive to put AI models directly on the edge device for inference, an approach generally referred to as “AI on the edge”. Training of the model, however, still takes place in the cloud or in enterprise data centers. See the image below by Gartner, which describes the options for integrating ML with IoT (option 1 represents ML on the edge):

Options for integrating Machine Learning with IoT (image: Gartner)
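To make that split concrete, here is a minimal sketch of the “train in the cloud, run inference on the edge” workflow using TensorFlow Lite; the tiny model, file name, and sensor reading below are illustrative assumptions, not details from the article or from Gartner’s diagram.

```python
# Minimal sketch: train a model in the cloud, export it to TensorFlow Lite,
# and run inference locally on the edge device. Model, paths, and data are
# illustrative assumptions.
import numpy as np
import tensorflow as tf

# --- In the cloud / data center: define, train, and export a small model ---
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                        # e.g. 4 sensor features
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(train_x, train_y, epochs=10)  # training data omitted in this sketch

# Convert the trained model to the compact TFLite format for edge deployment
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)

# --- On the edge device: load the exported model and run inference locally ---
interpreter = tf.lite.Interpreter(model_path="sensor_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A single (hypothetical) sensor reading with four features
reading = np.array([[0.1, 0.5, 0.3, 0.9]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("Local prediction:", prediction)
```

In practice, the .tflite file would be downloaded or flashed to the device, and the interpreter on the device could come from the lightweight tflite-runtime package rather than the full TensorFlow installation.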

“AI at the Edge” is just gaining momentum.

“At its Ignite digital conference, Microsoft unveiled the public preview of Azure Percept, a platform of hardware and services that aims to simplify the ways in which customers can use Azure AI technologies on the edge – including taking advantage of Azure cloud offerings such as device management, AI model development and analytics.”

Microsoft claims the platform will simplify the process of developing, training, and deploying edge AI solutions. Today, most successful edge AI projects require the combined effort of multiple teams: engineers to design and build the devices, plus data scientists to build and train the AI models that run on those devices.

In short, the demand for real-time solutions has increased enormously, for both organizations and individual consumers. While deploying AI models on the edge for inference is gaining momentum, training models on the edge is still to follow, as it is at a very early stage.

[ Read also: Azure ML: 4 Key Benefits For Organizations ]

[ Read also: Industrializing AI with MLOps: Can it live up to its promise? ]

TCN Media
The Cloud Navigator brings diverse perspectives on the world of emerging technologies powered by the cloud that affects every aspect of our lives.
