Autonomous Experimental Robotics and Intelligent Systems (AERIS) Lab: https://justinbradley.unl.edu/
Multi-agent UAS Applications: Unmanned, multi-agent systems can multiply the effectiveness of individual vehicles and significantly extend their capabilities through cooperation. However, multi-agent systems also present new challenges: failures can compound, and small inefficiencies are magnified. The Autonomous Experimental Robotics and Intelligent Systems (AERIS) lab has been developing an outdoor multi-agent UAS testbed of ~6-8 drones that can support a wide variety of multi-agent applications. The system is highly configurable and reliable, with an emphasis on repeatability. The vehicles are identical, and Docker is used as a rapid deployment mechanism to ensure uniform software across the fleet. REU students on this project will improve upon our work in multi-agent control, decision-making, planning, or communication and test their improvements on the AERIS lab’s multi-agent testbed. Principles of centralized vs. decentralized algorithms will form the foundation, and students will learn the hands-on skills needed to maintain, deploy, and conduct field experiments with multi-agent robotic systems.
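The centralized vs. decentralized distinction can be illustrated with a classic decentralized primitive, average consensus, in which each agent updates its state using only information from its immediate neighbors. The following Python sketch is a generic textbook example, not the AERIS testbed's code; the ring topology, step size, and initial values are illustrative assumptions:

```python
# Decentralized average consensus: each drone repeatedly nudges its value
# (e.g., an altitude setpoint) toward its neighbors' values, with no
# central coordinator. Illustrative sketch only, not the lab's algorithm.

def consensus_step(values, neighbors, epsilon=0.2):
    """One synchronous update: x_i += epsilon * sum_j (x_j - x_i)."""
    return [
        x + epsilon * sum(values[j] - x for j in neighbors[i])
        for i, x in enumerate(values)
    ]

def run_consensus(values, neighbors, steps=100):
    for _ in range(steps):
        values = consensus_step(values, neighbors)
    return values

# 4 drones on a ring topology; each talks only to its two neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = run_consensus([10.0, 20.0, 30.0, 40.0], neighbors)
# All agents converge to the global average (25.0) via local exchanges.
```

A centralized variant would instead have one node collect every value, compute the average, and broadcast it back, which is simpler but introduces a single point of failure, one of the trade-offs the project explores.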
Collaborative Robotics (CoRob) Lab: https://research.csc.ncsu.edu/corob/index.html
Building autonomous Triton robots from scratch: The Triton robot is a capable platform for research in robot learning, perception, and autonomous navigation. Triton is a 3-wheeled omnidirectional robot built around a microcontroller board (Elegoo MEGA R3 Board, ATmega2560), a GPU-equipped computing board (NVIDIA Jetson Nano), multiple sensors (LiDAR, RGB-D camera), and the Robot Operating System (ROS). In the CoRob Lab, we built 10 Triton robots from scratch and use them for both research and teaching. This project focuses on Triton robot assembly, software integration, and redesign of the existing robot systems. We have comprehensive tutorials on the Triton robots to ease students into the research process. Students and mentors will collaboratively investigate system upgrades, sensor calibration, and autonomous functionality validation to make the existing robots more robust and reliable.
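To give a flavor of the math behind a 3-wheeled omnidirectional base, a standard inverse-kinematics sketch maps a commanded body velocity to individual wheel speeds. The wheel placement angles and body radius below are assumed for illustration and are not the Triton's measured geometry:

```python
import math

# Inverse kinematics sketch for a 3-wheeled omnidirectional base.
# Wheel angles (0, 120, 240 degrees) and body radius R are illustrative
# assumptions, not the Triton's actual dimensions.

WHEEL_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
R = 0.1  # meters from body center to each wheel (assumed)

def wheel_speeds(vx, vy, omega):
    """Map a body velocity (vx, vy, omega) to the three wheel rim speeds."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + R * omega
        for a in WHEEL_ANGLES
    ]

# Pure rotation: all three wheels spin at the same rim speed (R * omega).
speeds = wheel_speeds(0.0, 0.0, 1.0)
```

Because the three wheel axes are evenly spaced, any planar velocity is achievable, which is what makes the platform "omnidirectional" and convenient for navigation experiments.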
https://zguo32.wordpress.ncsu.edu/
Keeping ML Safe and Real-Time for Autonomous Racing Systems: Autonomous systems are becoming ubiquitous in our daily lives, and their timing correctness is critical to their correct functionality. The Robot Operating System (ROS) and its successor, ROS 2, which was designed for enhanced real-time support and performance, are the dominant frameworks for building such systems. This project focuses on the development and verification of such systems to improve their ability to support time-critical applications. Students will experience how machine learning and AI-driven decision components are implemented on top of ROS or ROS 2, investigate computing resource allocation schemes, and perform system analysis that guarantees the timing correctness and end-to-end response time of each control chain. Meanwhile, the complex environment and car models make control a hard problem, and reinforcement learning-based control algorithms will be explored. The outcomes will be evaluated both in simulation and on physical autonomous cars at 1:10 and/or 1:18 scale.
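A common starting point for end-to-end response-time analysis is a coarse latency bound on a chain of periodic tasks (e.g., sensor → perception → control → actuation), where in the worst case each hop waits up to one full period of the downstream stage before its output is sampled. The stage periods and execution times below are illustrative assumptions, not measurements from the racing platform:

```python
# Back-of-envelope worst-case latency bound for a ROS 2 style control chain
# built from asynchronous periodic tasks. Numbers are illustrative only.

def chain_latency_bound(stages):
    """stages: list of (period_ms, worst_case_exec_ms) per chain stage.

    Worst case per stage: fresh data arrives just after the stage sampled
    its input, so it waits up to one full period, then takes up to its
    worst-case execution time to process.
    """
    return sum(period + wcet for period, wcet in stages)

# sensor (10 ms period, 2 ms WCET) -> perception (20, 5) -> control (10, 3)
chain = [(10.0, 2.0), (20.0, 5.0), (10.0, 3.0)]
bound = chain_latency_bound(chain)  # 12 + 25 + 13 = 50 ms
```

Tighter bounds require modeling scheduler priorities, executor behavior, and task phasing, which is exactly the kind of analysis this project pursues.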
Low-Latency Edge-Assisted Cooperative Perception for Connected and Autonomous Vehicles: Connected and autonomous vehicles (CAVs) rely on Vehicle-to-Everything (V2X) communication to share information with surrounding vehicles and infrastructure, improving perception, safety, and traffic efficiency. Cooperative perception enables advanced capabilities such as coordinated driving, real-time map updates, and platooning, but its performance is often limited by communication bottlenecks, computation latency, network variability, and heterogeneous device capabilities. Edge-assisted architectures offer a promising solution, yet they require efficient data processing to meet real-time constraints. This project investigates novel task-guided data and feature compression techniques to reduce semantic redundancy and optimize the machine learning-based inference pipeline within cooperative perception systems, thereby enabling low-latency, scalable, and reliable edge-assisted CAV applications.
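One simple instance of feature compression is transmitting only the top-k most salient feature activations (here, largest by magnitude) and zero-filling the rest at the receiver. This is a hypothetical Python sketch of the general idea, not the project's actual compression scheme:

```python
# Sketch of saliency-based feature compression for cooperative perception:
# a vehicle sends only the k largest-magnitude feature activations as
# (index, value) pairs, trading reconstruction fidelity for bandwidth.
# Magnitude-based selection is an assumed criterion for illustration.

def compress_topk(features, k):
    """Keep the k largest-magnitude entries as sorted (index, value) pairs."""
    ranked = sorted(range(len(features)),
                    key=lambda i: abs(features[i]), reverse=True)
    return [(i, features[i]) for i in sorted(ranked[:k])]

def decompress(pairs, length):
    """Reconstruct a sparse feature vector, zero-filling dropped entries."""
    out = [0.0] * length
    for i, v in pairs:
        out[i] = v
    return out

feats = [0.1, -3.0, 0.05, 2.5, 0.0, -0.2]
packed = compress_topk(feats, 2)          # [(1, -3.0), (3, 2.5)]
restored = decompress(packed, len(feats))
```

Task-guided variants would rank activations by their contribution to the downstream detection task rather than raw magnitude, which is where the research questions lie.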
Synergistic Sensing and Computing System Lab: https://chenhanxu.github.io/
Bioacoustic Wearables for Hand Grip Strength Estimation: Hand grip strength (HGS) is a well-established biomarker of muscular, functional, and psychological health. Conventional measurement tools such as hydraulic or mechanical dynamometers provide accurate assessments but require bulky hardware, correct positioning, and direct exertion on external instruments, making them impractical for continuous or daily monitoring scenarios. Recent advances in bioacoustics, the study of mechanical vibrations produced by human tissues, suggest that muscle activity generates rich acoustic signatures that can be captured non-invasively. This project investigates a lightweight, non-invasive method for estimating hand grip strength using bioacoustic signals captured by a MEMS-microphone armband. Students will analyze vibration patterns from key forearm muscles and apply machine learning models (CNNs, LSTMs, U-Net hybrids) to classify and estimate grip strength. They will work with embedded hardware, generate multi-channel spectrograms, and evaluate model robustness under real-world noise and occlusion. The project offers hands-on experience in wearable sensing, signal processing, and AI-driven health monitoring.
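As a warm-up for the multi-channel spectrogram generation mentioned above, a single bioacoustic channel can be split into frames and transformed with a naive DFT to form a time-frequency matrix. The frame length and hop size below are illustrative; a practical pipeline would apply a window function (e.g., Hann) and a fast FFT over longer frames:

```python
import cmath
import math

# Minimal spectrogram sketch: frame the signal, take DFT magnitudes of
# each frame, and stack rows over time. Parameters are illustrative.

def frame_signal(x, frame_len, hop):
    """Slice the signal into overlapping fixed-length frames."""
    return [x[i:i + frame_len]
            for i in range(0, len(x) - frame_len + 1, hop)]

def dft_magnitudes(frame):
    """Magnitude of the non-negative-frequency DFT bins of one frame."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def spectrogram(x, frame_len=8, hop=4):
    return [dft_magnitudes(f) for f in frame_signal(x, frame_len, hop)]

# A pure tone at bin 1 of an 8-sample frame concentrates energy in bin 1.
tone = [math.sin(2 * math.pi * t / 8) for t in range(16)]
spec = spectrogram(tone)
```

Stacking one such matrix per microphone yields the multi-channel spectrogram input that models like CNNs or CNN-LSTM hybrids consume.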