Final Projects

Wildfire Smoke Detection via YOLOv7 for Embedded Devices

Authors: Divya Amirtharaj, Henry Bae, Eliza Kimball, and Alex Rodriguez 

Abstract:

Early detection of wildfires is critical for the prompt response and mitigation needed to control and manage them. Leveraging the principles of tiny machine learning (TinyML), this paper presents a refined smoke cloud detection model with a focus on reducing inference time for use on a drone to aid in early wildfire detection. We utilized the You Only Look Once (YOLOv7) object detection model due to its efficient architecture, and further optimized it for real-time inference. In our research process, we surveyed a wide range of techniques, including GUIs such as Roboflow with its built-in CV models, custom pipelines using TensorFlow's Model Zoo, and ultimately a PyTorch implementation of YOLOv8. Each approach had its own set of compatibility and deployment issues, but we were ultimately able to create a quantized YOLOv7 model. Our goal with this project is to propose a TinyML solution for continuous, real-time monitoring for early-stage wildfire detection that balances high accuracy with a lightweight footprint.
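
As a minimal illustration of the quantization step mentioned above (the authors' actual export pipeline is not shown here), the following sketch applies symmetric per-tensor int8 quantization to one weight tensor; all names and values are illustrative:

```python
# Symmetric per-tensor int8 quantization of a single weight tensor --
# a simplified stand-in for one step of a quantized model export.
def quantize_int8(weights):
    """Map float weights to int8 values plus one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid a zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original float weights."""
    return [v * scale for v in q]

q, s = quantize_int8([0.5, -1.27, 0.0, 0.9])
# q == [50, -127, 0, 90]; dequantize(q, s) closely recovers the inputs
```

Per-channel scales and calibration over real activations would be needed for a production export; this only shows the core rounding step.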

Cache 4 Trash

Authors: Sophia Cho, Jared Ni, and Aditi Raju

Abstract:

Recycling contamination poses a significant environmental challenge: up to 25% of recyclable waste is contaminated, rendering it non-recyclable. Current mitigation efforts rely primarily on recycling education, but these approaches demand extra knowledge and effort from individuals and often result in continued mistakes. In response, we present Cache 4 Trash, a TinyML-embedded automatic waste management system that simplifies waste sorting for users. Utilizing a Nicla Vision inference loop and a motor, Cache 4 Trash employs a TinyML model to classify waste as recyclable or non-recyclable in real time. The system addresses class imbalances in the data, considers edge cases, and navigates the trade-off between accuracy and performance when deployed on tiny chips, achieving a classification accuracy between 70% and 80%. Our approach enhances both the accuracy and convenience of waste management, offering a promising solution to recycling contamination challenges.
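
The classify-and-actuate decision at the heart of such a loop can be sketched as follows; the class ids, threshold, and function names are illustrative stand-ins, not the authors' Nicla Vision code:

```python
# Illustrative bin-routing decision for one camera frame.
RECYCLABLE, NON_RECYCLABLE = 0, 1

def route_waste(scores, threshold=0.5):
    """Pick the bin for one frame given per-class model scores.

    With imbalanced training data, tuning this threshold (instead of
    taking a plain argmax) is one way to trade precision for recall.
    """
    return RECYCLABLE if scores[RECYCLABLE] >= threshold else NON_RECYCLABLE

bin_id = route_waste({RECYCLABLE: 0.72, NON_RECYCLABLE: 0.28})
# bin_id == RECYCLABLE; a motor command would then open that bin
```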

TinyNeRF

Authors: He-Yen Hsieh, Zergham Ahmed and Xin Dong

Abstract:

Neural Radiance Fields (NeRF) is a method for creating novel views of scenes. Compressing the NeRF model to deploy it on resource-constrained devices is attractive for several applications, including extended reality (XR), interactive photography, and data augmentation. We explore several quantization and pruning schemes and report the results. We found that we are able to reduce the model size of NeRF by 87.5% through quantization, and by 91.2% through a combination of quantization and pruning, while maintaining rendered image quality. NeRF is thus able to achieve good performance with fewer bits, a smaller network, and therefore lower memory consumption.
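
The reported ratios are consistent with simple bit-width arithmetic. Assuming 32-bit float weights quantized to 4 bits, and a pruning fraction near 30% (both assumptions; the abstract states neither), a quick check reproduces the two figures:

```python
def quantization_reduction(orig_bits, quant_bits):
    """Fractional size reduction from lowering weight precision alone."""
    return 1.0 - quant_bits / orig_bits

def combined_reduction(orig_bits, quant_bits, prune_frac):
    """Reduction when a fraction prune_frac of weights is also removed."""
    return 1.0 - (quant_bits / orig_bits) * (1.0 - prune_frac)

print(quantization_reduction(32, 4))               # 0.875 -> the 87.5% figure
print(round(combined_reduction(32, 4, 0.296), 3))  # 0.912 -> the 91.2% figure
```

A pruning fraction of about 29.6% is one value consistent with the 91.2% combined figure under these assumptions.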

TinyRL for Quadruped Locomotion using Decision Transformers

Authors: Orhan Eren Akgun, Nestor Cuevas, Matheus Farias and Daniel Garces

Abstract:

Resource-constrained robotic platforms are particularly useful for tasks that require low-cost hardware alternatives due to the risk of losing the robot, as in search-and-rescue applications, or the need for a large number of devices, as in swarm robotics. For this reason, it is crucial to find mechanisms for adapting reinforcement learning techniques to the constraints imposed by the lower computational power and smaller memory capacities of these ultra-low-cost robotic platforms. We address this need by proposing a method for making imitation learning deployable onto resource-constrained robotic platforms. Here we cast the imitation learning problem as a conditional sequence modeling task and train a decision transformer using expert demonstrations augmented with a custom reward. Then, we compress the resulting generative model using software optimization schemes, including quantization and pruning. We test our method in simulation using Isaac Gym, a realistic physics simulation environment designed for reinforcement learning. We empirically demonstrate that our method achieves natural-looking gaits for Bittle, a resource-constrained quadruped robot. We also run multiple simulations to show the effects of pruning and quantization on the performance of the model. Our results show that quantization (down to 4 bits) and pruning reduce model size by around 30% while maintaining a competitive reward, making the model deployable in a resource-constrained system.
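
Magnitude pruning, one of the compression schemes named above, can be sketched in a few lines; this is a generic illustration, not the authors' implementation:

```python
# Generic magnitude pruning: zero the smallest-magnitude weights.
def magnitude_prune(weights, frac):
    """Zero roughly `frac` of the weights (ties at the cutoff may zero more)."""
    k = int(len(weights) * frac)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= cutoff else w for w in weights]

pruned = magnitude_prune([0.1, -0.5, 0.05, 2.0], 0.5)
# pruned == [0.0, -0.5, 0.0, 2.0]
```

Zeroed weights only shrink the stored model when paired with a sparse storage format, which is why pruning is usually combined with quantization as described above.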

Pain Detection with Physiological Signals on Microcontrollers

Authors: Emma Chen and Lance Ying

Abstract:

Pain assessment often comes from patients' self-report, which depends on effective communication between doctors and patients that may not always happen and can lead to suboptimal treatment plans. Machine learning (ML) algorithms have been explored to make objective pain assessments using physiological signals such as electrodermal activity (EDA) and electrocardiogram (ECG). Although it is not uncommon to collect these signals from wearables and run the predictive algorithms in the cloud or on a smartphone, this process relies on internet connectivity and carries the risk of patient information leakage. In comparison, running the algorithms on-device avoids data transmission, enabling privacy-preserving and offline inference. In this study, we compare the performance and memory usage of different ML algorithms for pain assessment with EDA and ECG. We then demonstrate the feasibility of running a predictive algorithm with reasonable accuracy on a microcontroller.
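
A comparison harness of the kind described might look as follows; `footprint_bytes` (serialized size as a memory proxy), the stand-in models, and the scoring function are assumptions for illustration, not the study's code:

```python
import pickle

def footprint_bytes(model):
    """Serialized size as a rough proxy for on-device memory cost."""
    return len(pickle.dumps(model))

def compare(models, score_fn):
    """Return (name, accuracy, bytes) rows, best accuracy first."""
    rows = [(name, score_fn(m), footprint_bytes(m)) for name, m in models]
    return sorted(rows, key=lambda row: -row[1])
```

On a real microcontroller the binding constraint is usually flash and RAM at inference time, which serialized size only approximates.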

RLPerf: Benchmark for Autonomous Agents

Authors: Jason Jabbour and Ikechukwu Uchendu

Abstract:

Recent advances in artificial intelligence (AI) have been greatly fueled by benchmarks. While machine learning in areas such as computer vision and natural language processing has seen extensive real-world adoption, reinforcement learning (RL) has not. We posit that this disparity in adoption stems from a lack of benchmarks for real-world use cases of autonomous agents. In this paper, we introduce RLPerf, a benchmark for autonomous agents that reflects real-world problems. We discuss the challenges associated with benchmarking real use cases of autonomous agents, and propose a new set of metrics to help accelerate the adoption of RL agents in production systems.

Unilateral Gait Classification by Measuring Local Muscle Deformation

Author: Jonathan Alvarez

Abstract:

Clinicians and researchers working across diagnosis, rehabilitation, and wearable lower-limb robotics all rely on accurate gait analysis tools and algorithms for assisting patients and end users. Accurate segmentation of events in the gait cycle is an important subset of the gait analysis tool kit and involves partitioning the various phases of the gait cycle (e.g., initiation of the swing phase (toe off) or commencement of the stance phase (heel strike)). Traditional approaches such as optical motion capture or instrumented treadmills serve as accurate ground truths for gait segmentation, but are limited to lab-based analysis. Unsurprisingly, there has been substantial research focused on adapting existing, or developing novel, wearable sensing approaches for accurate gait segmentation outside of the lab and in the community. Inertial measurement units are the most common wearable sensor used for gait analysis, but often require several sensors, one of which needs to be mounted distally on the foot. This work leverages recent advances in soft strain sensor technology to locally measure deformation of the lower-limb muscles to detect toe off and heel strike gait events during healthy treadmill walking. A data-driven tiny machine learning approach was created to detect gait events from the soft strain sensors, optimized for deployment on a resource-constrained microcontroller. A sensor-trained model performs as well as an IMU-based model (sensor-model accuracy = 98.21% vs. IMU-model accuracy = 98.21%) in detecting toe off and heel strike gait events. Furthermore, the model was deployed on an Arduino Nano 33 BLE Sense for a real-time inference demonstration.
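
The sliding-window framing commonly used to feed streaming sensor samples to a small per-window classifier can be sketched as follows; the window and stride values are illustrative, not taken from this work:

```python
# Fixed-length windows over a streaming signal, the usual input
# framing for per-window gait-event inference on a microcontroller.
def windows(signal, size, stride):
    """Yield consecutive windows of `size` samples, advancing by `stride`."""
    for start in range(0, len(signal) - size + 1, stride):
        yield signal[start:start + size]

chunks = list(windows(list(range(10)), size=4, stride=2))
# chunks[0] == [0, 1, 2, 3]; four windows in total
```

Overlapping windows (stride < size) raise the event-detection rate at the cost of more inferences per second, a trade-off that matters on a constrained device like the Nano 33 BLE Sense.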