Final Projects

TinyFedTL: Federated Transfer Learning on Tiny Devices

Authors: Kavya Kopparapu and Eric Lin

Abstract:

TinyML has risen to popularity in an era where data is everywhere. However, the data that is in most demand is subject to strict privacy and security guarantees. In addition, real-world deployments of TinyML hardware face significant memory and communication constraints that traditional ML fails to address. In light of these challenges, we present TinyFedTL, the first implementation of federated learning on a resource-constrained microcontroller. Our implementation of transfer learning, which fine-tunes a fully connected layer on-device, keeps storage costs constant as the number of training examples grows.
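
The abstract does not spell out the on-device update, but the constant-storage property follows from fine-tuning only the final fully connected layer with streaming SGD and exchanging just those weights with the server. The sketch below illustrates that idea; the feature dimension, learning rate, and FedAvg helper are illustrative assumptions, not the project's actual code.

```python
import numpy as np

# Illustrative dimensions (assumptions, not taken from the paper).
FEAT_DIM, NUM_CLASSES, LR = 64, 2, 0.01

# Only the final layer's parameters live in RAM; storage stays constant
# no matter how many training examples stream through the device.
W = np.zeros((FEAT_DIM, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sgd_step(features, label):
    """One on-device fine-tuning step on a single (features, label) example."""
    global W, b
    probs = softmax(features @ W + b)
    probs[label] -= 1.0                   # gradient of cross-entropy w.r.t. logits
    W -= LR * np.outer(features, probs)
    b -= LR * probs

def federated_average(client_weights):
    """Server-side FedAvg sketch: average the (W, b) pairs uploaded by clients."""
    Ws, bs = zip(*client_weights)
    return np.mean(Ws, axis=0), np.mean(bs, axis=0)
```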

Air Guitar: Using TinyML to Identify Complex Gestures

Authors: Robert Jomar Malate, Hossam Mabed, and Kathryn Wantlin

Abstract:

This project explores and expands on how tiny machine learning (TinyML) can be used in the field of gesture recognition. We used the premise of playing an air guitar (pretending to play a guitar without actually holding or playing one) as our application. We designed and developed the hardware, database, and software framework needed to mimic the motions of playing a guitar. Through model evaluation and physical experimentation, we were able to successfully mimic the basic functions of a guitar.

Deer Detection for Highway Alert Systems at the Edge

Authors: John Alling and Dovran Amanov

Abstract:

Highway collisions between deer and vehicles cost hundreds of millions of dollars and account for two hundred fatalities in the United States annually. To combat this growing trend, we propose an active highway alert system that notifies drivers when deer are present in the area. A deer detection machine learning model was trained and, using TinyML practices, run on an edge-computing device for high accuracy, low size, weight, and power (SWaP), and fast response time to quickly and efficiently notify drivers. Using a dual-model approach, we were able to effectively detect deer in roadways in both day and night conditions.
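
The abstract does not detail the dual-model approach; one plausible reading is that separate day and night detectors are selected at inference time based on ambient light. The sketch below illustrates that routing under that assumption; the model files, light threshold, and TensorFlow Lite interpreter usage are hypothetical stand-ins for the project's actual pipeline.

```python
import numpy as np
import tensorflow as tf

# Hypothetical model files; the real project's models and paths may differ.
day_model = tf.lite.Interpreter(model_path="deer_day.tflite")
night_model = tf.lite.Interpreter(model_path="deer_night.tflite")
for m in (day_model, night_model):
    m.allocate_tensors()

LIGHT_THRESHOLD = 0.3  # assumed normalized ambient-light cutoff

def detect_deer(frame, ambient_light):
    """Route the camera frame to the day or night model; return deer probability."""
    model = day_model if ambient_light > LIGHT_THRESHOLD else night_model
    inp = model.get_input_details()[0]
    out = model.get_output_details()[0]
    model.set_tensor(inp["index"], frame.astype(np.float32)[None, ...])
    model.invoke()
    return float(model.get_tensor(out["index"])[0][0])
```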

Tiny Mosquito Detection Models for Deployment on a Low-Cost MCU

Authors: Nari Johnson, Rose Hong, and Joyce Tian

Abstract:

Low-cost solutions for mosquito detection would greatly aid in mosquito-borne disease tracking, awareness, and prevention, particularly in resource-poor regions. Recently, neural networks have become an attractive architecture choice for such audio detection tasks due to their predictive power. In this work, we construct several lightweight models based on state-of-the-art audio recognition architectures. We introduce a novel TinyML pipeline to the mosquito detection literature, training these models on mosquito wingbeat audio and adversarially against audio from the Speech Commands dataset to ensure robustness to human speech. The resulting models achieve higher accuracy than prior literature with storage and computational costs low enough to be deployed on low-cost MCUs.
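
One way to realize the training scheme described above is to treat Speech Commands clips as hard negatives alongside the mosquito wingbeat positives, so the detector learns not to fire on human speech. The sketch below shows that setup with an assumed spectrogram input shape and a deliberately small CNN; none of the layer sizes or dataset objects are taken from the paper.

```python
import tensorflow as tf

# A small keyword-spotting-style CNN over log-mel spectrograms; the
# architecture and input shape (49 frames x 40 mel bins) are assumptions.
def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(49, 40, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # mosquito vs. not
    ])

# Hypothetical tf.data pipelines: wingbeat clips are positives (label 1),
# Speech Commands clips are hard negatives (label 0), so the model learns
# to stay silent on human speech.
def make_dataset(wingbeat_ds, speech_commands_ds):
    pos = wingbeat_ds.map(lambda spec: (spec, 1))
    neg = speech_commands_ds.map(lambda spec: (spec, 0))
    return pos.concatenate(neg).shuffle(10_000).batch(32)
```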

Snoring Detection with TinyML Deployed on Arduino Nano

Authors: Adriana Rotaru, Jiayu Yao, and Kelly Zhang

Abstract:

Snoring is associated with common medical conditions that can lead to many serious health issues, including diabetes, stroke, and depression. Due to these negative health impacts, it is crucial for people to know whether they snore and to understand their snoring patterns and triggers. In this paper, we propose a bedside snoring detection program on microcontroller devices that automatically identifies snoring sounds. Our device extracts spectrograms from real-time audio data and applies a Convolutional Neural Network (CNN) to classify whether an audio sample contains snoring sounds. We investigate different pre-processing methods, including Fast Fourier Transforms (FFT) and Mel-frequency cepstral coefficients (MFCC), as well as different neural network model architectures, and evaluate the approaches in terms of accuracy and model size. We deploy our models on a microcontroller with 1 MB of Flash and minimal power consumption that can run continuously. Our best deployed model has an accuracy of 96.86%, which is comparable to that of existing snoring detection models, yet is only 18,712 bytes, over 500 times smaller than other models in the literature (e.g., 9.8 MB).
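
As a rough illustration of the pipeline described above (an MFCC front end, a small CNN classifier, and a quantized model small enough for 1 MB of flash), the following sketch uses assumed frame sizes, mel-bin counts, and layer widths rather than the authors' actual configuration.

```python
import tensorflow as tf

def waveform_to_mfcc(waveform, sample_rate=16000):
    """MFCC front end; frame sizes and 40 mel bins are illustrative choices."""
    stft = tf.signal.stft(waveform, frame_length=640, frame_step=320)
    spectrogram = tf.abs(stft)
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=40, num_spectrogram_bins=321, sample_rate=sample_rate)
    log_mel = tf.math.log(tf.tensordot(spectrogram, mel_matrix, 1) + 1e-6)
    return tf.signal.mfccs_from_log_mel_spectrograms(log_mel)[..., :13]

# A deliberately small CNN; the real model's layers and sizes may differ.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 13, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),  # snoring / not snoring
])

# Post-training quantization keeps the flatbuffer small enough for ~1 MB flash.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("snore_detector.tflite", "wb").write(tflite_model)
```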

Earthquake Detection at the Edge

Author: Tim Clements

Abstract:

Earthquake detection is the critical first step in Earthquake Early Warning (EEW) systems. For robust EEW systems, detection accuracy, detection latency, and sensor density are critical to providing real-time earthquake alerts. Traditional EEW systems use fixed sensor networks or, more recently, networks of mobile phones equipped with micro-electromechanical systems (MEMS) accelerometers. Internet of Things (IoT) edge devices, with built-in machine-learning-capable microcontrollers and always-on, always internet-connected, stationary MEMS accelerometers, provide the opportunity to deploy ML-based earthquake detection and warning using a single-station approach at a global scale. Here, we test and evaluate deep learning algorithms for earthquake detection on Arduino Cortex M4 microcontrollers and show the trade-offs between detection accuracy and latency on resource-constrained microcontrollers.
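
One way to study the accuracy/latency trade-off on a Cortex-M4-class device is to sweep the capacity of a small 1-D CNN over accelerometer windows and compare the resulting model sizes (and, on hardware, inference times). The sketch below assumes a 200-sample, three-axis window and a simple sweep over depth; it is not the authors' architecture.

```python
import tensorflow as tf

def build_detector(num_conv_blocks, filters=8, window=200):
    """1-D CNN over a window of 3-axis MEMS accelerometer samples.
    The 200-sample window (~2 s at 100 Hz) is an assumed configuration."""
    layers = [tf.keras.layers.Input(shape=(window, 3))]
    for _ in range(num_conv_blocks):
        layers += [tf.keras.layers.Conv1D(filters, 5, activation="relu"),
                   tf.keras.layers.MaxPooling1D(2)]
    layers += [tf.keras.layers.GlobalAveragePooling1D(),
               tf.keras.layers.Dense(1, activation="sigmoid")]  # quake / noise
    return tf.keras.Sequential(layers)

# Sweep depth to trade accuracy against on-device latency and model size.
for depth in (1, 2, 3):
    model = build_detector(depth)
    tflite = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    print(f"{depth} conv blocks -> {len(tflite)} bytes")
```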

Tiny Terrain Classification from Body Dynamics of an Insect-Scale Locomotor

Authors: Sherry Xie and Henry Cerbone

Abstract:

Terrain classification is of growing interest in the field of planetary rovers. While existing research uses custom robots equipped with a variety of sensors (e.g. force, torque, slip), having a fast, lightweight approach to identifying surface type would enable real-time gait adaptation for more efficient traversal. In this paper, we present a methodology and proof-of-concept for classifying terrains from the body dynamics of an insect-scale robot using tinyML. We provide a framework for platform, terrain, and model selection; collect IMU data from traversal of different terrain types; and train a convolutional neural network for deployment on a microcontroller device. Because terrain features are large relative to the size of the robot, interactions with the terrain are propagated through roll, pitch, and yaw. Thus, we achieve comparable accuracy to adjacent work using larger robots and more complex feature spaces while meeting the resource constraints imposed by insect-scale systems. Additionally, we demonstrate real-time sensitivity to changes in terrain type with low latency.
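
A minimal version of the data preparation implied above is to slice continuous IMU recordings into fixed-length, per-terrain windows before training the CNN. The sketch below assumes 6-axis IMU recordings stored as NumPy arrays, with hypothetical terrain classes, window lengths, and file names.

```python
import numpy as np

def window_imu(recording, label, window=100, stride=50):
    """Slice a continuous (T, 6) IMU recording (gyro roll/pitch/yaw + accel)
    into overlapping windows, each labeled with the terrain it was recorded on.
    The window and stride lengths are illustrative assumptions."""
    segments, labels = [], []
    for start in range(0, len(recording) - window + 1, stride):
        segments.append(recording[start:start + window])
        labels.append(label)
    return np.stack(segments), np.array(labels)

# Example: combine per-terrain recordings into a training set.
terrains = {"carpet": 0, "gravel": 1, "tile": 2}    # hypothetical classes
X, y = [], []
for name, lbl in terrains.items():
    rec = np.load(f"imu_{name}.npy")                 # hypothetical files, shape (T, 6)
    xs, ys = window_imu(rec, lbl)
    X.append(xs)
    y.append(ys)
X, y = np.concatenate(X), np.concatenate(y)
```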

Sequence-to-Sequence Models on Microcontrollers using TFLite Micro

Author: Yuntian Deng

Abstract:

The recent breakthroughs in sequence generation stem in large part from the development of sequence-to-sequence (seq2seq) models. However, seq2seq models are mostly deployed on powerful servers, and there are few solutions for deploying them on more power-efficient microcontrollers (MCUs). In this work, we provide a solution for deploying seq2seq models on MCUs using TFLite Micro, which is already supported by many platforms. To verify the correctness of our implementation, we train a seq2seq model on a synthetic numbers-to-words conversion task and deploy it to an Arduino Nano 33 BLE, achieving 100% test accuracy.
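
For concreteness, a toy numbers-to-words seq2seq model of the kind described could look like the sketch below: a small GRU encoder-decoder trained with teacher forcing on synthetic digit/word pairs (training data not shown), then converted to a TFLite flatbuffer for the MCU. The vocabulary sizes, sequence lengths, and use of unrolled GRUs are assumptions chosen so the graph lowers to basic TFLite ops; the authors' implementation may differ.

```python
import tensorflow as tf

# Toy vocabulary sizes and lengths for a numbers-to-words task (assumptions).
SRC_VOCAB, TGT_VOCAB, HIDDEN = 12, 32, 64
SRC_LEN, TGT_LEN = 4, 6

# Encoder: embed the digit sequence and summarize it with an unrolled GRU.
src = tf.keras.layers.Input(shape=(SRC_LEN,), dtype="int32")
enc = tf.keras.layers.Embedding(SRC_VOCAB, HIDDEN)(src)
_, state = tf.keras.layers.GRU(HIDDEN, return_state=True, unroll=True)(enc)

# Decoder (teacher forcing): previous target tokens condition the next one.
tgt_in = tf.keras.layers.Input(shape=(TGT_LEN,), dtype="int32")
dec = tf.keras.layers.Embedding(TGT_VOCAB, HIDDEN)(tgt_in)
dec = tf.keras.layers.GRU(HIDDEN, return_sequences=True, unroll=True)(
    dec, initial_state=state)
logits = tf.keras.layers.Dense(TGT_VOCAB)(dec)

model = tf.keras.Model([src, tgt_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# After training, convert the graph for the microcontroller; unrolled GRUs
# avoid control-flow ops that TFLite Micro may not support.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("seq2seq.tflite", "wb").write(tflite_model)
```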