Machine Learning Meets Internet of Things: From Theory to Practice

Tutorial at ECML PKDD 2021 - 17 Sept @ 9 am - 12:30 pm CEST

Overview

Standalone execution of problem-solving Artificial Intelligence (AI) on IoT devices produces a higher level of autonomy and privacy, since the sensitive user data collected by the devices need not be transmitted to the cloud for inference. The chipsets used to design IoT devices are resource-constrained: they have a limited memory footprint, few computation cores, and low clock speeds. These limitations prevent the deployment and execution of complex problem-solving AI (usually an ML model) on IoT devices. Since there is high potential for building intelligent IoT devices, in this tutorial we teach researchers and developers: (i) how to deeply compress CNNs and efficiently deploy them on resource-constrained devices; (ii) how to efficiently port and execute ML classifiers that solve ranking, regression, and classification problems on IoT devices; (iii) how to create ML-based self-learning devices that can locally re-train themselves on-the-fly using unseen real-world data.

Aims and Learning Objectives

Through this tutorial, we aim to interconnect the Software Engineering, Internet of Things, and Machine Learning communities by bringing together the technology from each community in order to develop AI-enabled, self-learning IoT devices/products that perform inference offline and autonomously. The learning objectives of the tutorial are the following:

  • For beginners, it will create an end-to-end understanding of how to optimize a given problem-solving ML model and deploy it on resource-constrained devices for offline analytics.

  • Practitioners can improve the inference performance and compression levels of their use-case ML models, which they plan to deploy on their commercial IoT devices/products.

  • Researchers, when benchmarking an ML model by executing it on real-world devices using the technique from Part IV of the tutorial, can obtain stronger experimental results for their papers.

  • For ML experts, it will highlight the need for designing resource-friendly models in order to speed up the R&D phase (going from idea to product) of ML-powered IoT devices.

Tutorial Material

We will deliver the concepts using PowerPoint slides embedded with animations and small code snippets. During content delivery, the audience will be asked to perform small, quick exercises to keep the tutorial interactive. We will also interleave live/recorded demonstrations throughout the tutorial to improve the audience's understanding and to give them opportunities to see the technology in action. The 3.5-hour tutorial (including one 30-minute break) comprises the four parts below, each with relevant hands-on exercises.

Part I: ML for IoT Devices

Duration: 30 mins. 25 mins of slides, and 5 mins Q&A.

Content: The audience will be introduced to the following ML-for-IoT tools and hardware:

  • TensorFlow Lite for Microcontrollers [Link] and TFHub [Link].

  • Netron NN model visualizer [Link].

  • Arduino IDE with necessary ML libraries.

  • STM32F103C8 Blue Pill: ARM Cortex-M3 @72MHz, 64KB Flash, 20KB SRAM [Link].

  • Generic ESP32: Xtensa LX6 @240MHz, 4MB Flash, 520KB SRAM [Link].
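Deploying a model with TensorFlow Lite for Microcontrollers typically means embedding the converted .tflite flatbuffer in the firmware as a constant C byte array (the step usually done with `xxd -i model.tflite > model_data.cc`). A minimal Python sketch of that conversion step, using placeholder bytes rather than a real model:

```python
# Sketch: render a .tflite flatbuffer as a C array for TFLite Micro,
# mimicking `xxd -i`. The model bytes below are a placeholder, not a
# real converted model.

def bytes_to_c_array(data: bytes, name: str = "g_model") -> str:
    """Render raw bytes as a C unsigned char array definition."""
    body = ",".join(f"0x{b:02x}" for b in data)
    return (
        f"alignas(8) const unsigned char {name}[] = {{{body}}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# Placeholder bytes standing in for the contents of a converted model
# file (real .tflite files carry the "TFL3" identifier near the start).
model_bytes = bytes([0x1C, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4C, 0x33])

print(bytes_to_c_array(model_bytes))
```

The emitted array is then compiled into the Arduino sketch and handed to the TFLite Micro interpreter, so the model lives in flash rather than being loaded from a file system the MCU does not have.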

Outcome: The audience will understand the basics that can be leveraged throughout the tutorial.

Presentation Slides: [Part I - 30 min].

Part II: Creating ML-based Self-learning IoT Devices

Duration: 50 mins. 20 mins of slides, 20 mins live demo, and 10 mins Q&A.

Content: We briefly present and demonstrate the following frameworks:

  • Edge2Train to enable onboard resource-friendly training of SVM models on MCUs [Repo] [Paper].

  • Train++ for ultra-fast incremental onboard classifier training and inference on MCUs [Repo] [Paper].

  • ML-MCU to train up to 50-class ML classifiers on a $3 ESP32 board [Repo] [Paper].
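The frameworks above are implemented in C/C++ for MCUs, but the core idea they share is incremental training: the model is updated one sample at a time, so only the model parameters (not a training set) need to fit in the few KB of available SRAM. A minimal Python sketch of that idea, using a simple perceptron-style update rule for illustration (the actual frameworks use SVM-based optimizers):

```python
# Minimal sketch of incremental on-device training: a binary linear
# classifier updated one sample at a time. Only the weight vector is
# stored between updates - no training set - which is what makes this
# feasible in a few KB of MCU SRAM. Perceptron-style illustration,
# not the SVM optimizers used by Edge2Train/Train++/ML-MCU.

class OnlineLinearClassifier:
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x) -> int:
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0.0 else -1

    def train_one(self, x, y: int) -> None:
        """Update the weights only when the incoming sample is misclassified."""
        if self.predict(x) != y:
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * y

# Stream of (features, label) pairs, e.g. labeled live sensor readings.
stream = [([1.0, 1.0], 1), ([-1.0, -1.0], -1),
          ([1.0, 0.5], 1), ([-0.5, -1.0], -1)]
clf = OnlineLinearClassifier(n_features=2)
for x, y in stream * 5:  # revisit the stream a few times
    clf.train_one(x, y)
print(clf.predict([0.8, 0.9]), clf.predict([-0.9, -0.8]))  # → 1 -1
```

Because each update touches only the weight vector, the same loop runs unchanged whether the samples arrive from a Python list or from a sensor interrupt handler on an MCU.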

Outcome: The audience will have learned how to make their IoT devices/products self-learn/train on-the-fly using live IoT use-case data. Thus, their devices can self-learn to perform analytics for any target IoT use case.

Presentation Slides: [Part II - 50 min].

--Break 30 mins--

Part III: Deep Optimizations of CNNs and Efficient Deployment on IoT Devices

Duration: 50 mins. 20 mins of slides, 20 mins live demo, and 10 mins Q&A.

Content: We briefly present and demo how to apply the following TensorFlow-based optimizations to CNNs:

  • Pre-training Optimization: Quantization-aware training, Pruning.

  • Post-training Optimization: Integer quantization with float fallback, Float16 quantization, Integer-only quantization.

  • Operations optimization.

  • Graph Optimization.

Then we demo joint optimization by combining more than one of the above optimizers. Based on an analysis of the experiment results, we present the best optimization sequence for the smallest model size, accuracy preservation, and fast inference [Repo] [Paper] [Paper under review].
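The demo itself uses TensorFlow's TFLiteConverter API, but the arithmetic underneath integer quantization is a simple affine mapping: a float value x is approximated by (q - zero_point) * scale, with q stored as one int8 byte instead of four float bytes, giving roughly a 4x size reduction. A dependency-free sketch of that mapping:

```python
# Sketch of the affine int8 mapping behind post-training integer
# quantization: x ≈ (q - zero_point) * scale, with q in [-128, 127].

def quant_params(xmin: float, xmax: float, qmin: int = -128, qmax: int = 127):
    """Derive scale and zero-point so [xmin, xmax] maps onto [qmin, qmax]."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point):
    return [max(-128, min(127, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    return [(q - zero_point) * scale for q in qs]

weights = [-0.51, -0.02, 0.0, 0.33, 1.02]          # toy float weights
scale, zp = quant_params(min(weights), max(weights))
recon = dequantize(quantize(weights, scale, zp), scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, recon))
assert max_err <= scale / 2 + 1e-9  # error bounded by half a quantization step
```

This is why accuracy preservation depends on the range of each tensor: a wider [xmin, xmax] means a larger scale and therefore a larger worst-case rounding error per weight, which is what quantization-aware training helps mitigate.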

Outcome: The audience can apply the learned optimization techniques to models from a growing number of use cases, such as anomaly detection, predictive maintenance, robotics, voice recognition, and machine vision, to enable standalone device-level execution. Thus, we believe this part of the tutorial opens future avenues for a broad spectrum of applied research work.

Presentation Slides: [Part III - 50 min].

Part IV: Efficient Execution of ML Classifiers on IoT Devices

Duration: 30 mins. 10 mins of slides, 15 mins live demo, and 5 mins Q&A.

Content: A brief introduction to how Decision Tree (DT) and Random Forest (RF) classifiers can be used in an IoT setting to solve ranking, regression, and classification problems locally at the device level. Then we demo how to efficiently port and execute DT and RF classifier models on MCU boards [Repo] [Paper] [Paper].
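A common way to port a trained tree to an MCU (and the layout scikit-learn's internal `tree_` attribute also uses) is to flatten it into parallel arrays of features, thresholds, and child indices, so on-device inference is a short loop over constant arrays in flash with no recursion and no dynamic memory. An illustrative Python sketch with a tiny hand-built tree (not a trained model):

```python
# Sketch of a decision tree flattened into parallel arrays, the form
# in which trees are typically ported to MCU firmware. The tiny tree
# below is hand-built for illustration, not trained on real data.

# Node i tests feature[FEATURE[i]] <= THRESHOLD[i] and descends to
# LEFT[i] or RIGHT[i]. A LEAF_CLASS value other than -1 marks a leaf.
FEATURE    = [0,    1,    -1,  -1,  -1]
THRESHOLD  = [0.5,  2.0,  0.0, 0.0, 0.0]
LEFT       = [1,    3,    -1,  -1,  -1]
RIGHT      = [2,    4,    -1,  -1,  -1]
LEAF_CLASS = [-1,   -1,   1,   0,   1]

def predict(x):
    i = 0
    while LEAF_CLASS[i] == -1:  # descend until a leaf is reached
        i = LEFT[i] if x[FEATURE[i]] <= THRESHOLD[i] else RIGHT[i]
    return LEAF_CLASS[i]

print(predict([0.2, 1.0]), predict([0.2, 3.0]), predict([0.9, 0.0]))  # → 0 1 1
```

On a real device, the same arrays become `const` C arrays generated from the trained model, and an RF is just several such trees whose leaf votes are aggregated.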

Outcome: The audience can use the explained generic end-to-end method to quickly port and execute ML algorithms trained on various datasets (DTs, RFs, SVMs, LGBM, XGB, AdaGrad, LogisticRegressionCV, etc.) on any resource-constrained MCU-based device of their choice/availability.

Presentation Slides: [Part IV - 30 min].

Requirements

Prior knowledge: Since this tutorial draws on concepts from both ML and IoT, the ideal preparation is basic familiarity with the Arduino IDE, MCU boards, and basic ML models. Although our step-by-step tutorial will guide the audience through implementing the covered technologies, knowledge of programming languages such as Python and C/C++, and of setting up a Google Colab/Jupyter notebook, would be beneficial for the hands-on and demo sessions.

Technical Requirements: Participants should install the Arduino IDE on their laptops and download the entire ECML-Tutorial-ML-Meets-IoT GitHub repository (only a few MB in total).

Tutorial Organizers

ML-based IoT Applications to Explore

Avoid Touching Your Face: Covid-away Dataset and Models for Smartwatches

OWSNet: Offensive Words Spotting Network for IoT Devices

Alexa with Biometric Authentication, Custom Skills, and Advanced Voice Interaction Capability

Air Quality Sensor Network Data Acquisition, Cleaning, Visualization, and Analytics: A Real-world IoT Use Case

Edge2Guard: Botnet Attacks Detecting Offline Models for IoT Devices

TinyML Benchmark: Executing Fully Connected NNs on MCUs

Questions, Collaboration, and Feedback

Contact Bharath Sudharsan

Email: bharathsudharsan023@gmail.com | Ph: +353-899836498

Acknowledgement

My deepest appreciation to Prof. John Breslin (NUI Galway), Dr. Muhammad Intizar Ali (Dublin City University), and Pankesh Patel (University of South Carolina) for their extensive knowledge sharing and helpful advice.