I pursued two approaches that address perception at both the hardware and software levels.
Analog-to-Feature Extraction Embedded Sensor
Using Analog Compute-in-Memory (ACIM) for energy-efficient edge computation, I developed AIDEAL, an analog-to-learning system that extracts task-relevant features directly from analog sensor inputs, reducing the amount of data that must be digitized and transmitted. The approach achieves up to 89.8% sensor energy savings on tasks such as image reconstruction and object detection, with feature restoration and energy-aware quantization enhancing accuracy, and it remains robust to voltage and temperature variations. This work was published in Sensors’23, with further developments under review at TCASAI’24, and was funded by JUMP2.0 CogniSense.
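To make the analog-to-feature idea concrete, below is a minimal, illustrative Python sketch; the array sizes, bit depth, and function names (analog_feature_extract, energy_aware_quantize) are assumptions for illustration, not AIDEAL's actual design. It models the ACIM front-end as an analog matrix-vector projection, so only a small, coarsely quantized feature vector crosses the ADC instead of the full pixel frame.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_SHAPE = (32, 32)   # hypothetical pixel array size
NUM_FEATURES = 16        # hypothetical number of task-relevant features
ADC_BITS = 4             # hypothetical energy-aware quantization depth

# Analog projection weights (in hardware these would be CIM conductances).
W = rng.standard_normal((NUM_FEATURES, FRAME_SHAPE[0] * FRAME_SHAPE[1]))

def analog_feature_extract(frame: np.ndarray) -> np.ndarray:
    """Analog-domain matrix-vector multiply performed by the CIM array."""
    return W @ frame.reshape(-1)

def energy_aware_quantize(features: np.ndarray, bits: int) -> np.ndarray:
    """Uniform quantization of the feature vector before readout;
    fewer bits mean fewer/cheaper ADC conversions and lower sensor energy."""
    levels = 2 ** bits
    lo, hi = features.min(), features.max()
    step = (hi - lo) / (levels - 1)
    return np.round((features - lo) / step) * step + lo

frame = rng.random(FRAME_SHAPE)                  # raw analog pixel values
features = analog_feature_extract(frame)         # stays in the analog domain
digital_features = energy_aware_quantize(features, ADC_BITS)

# Only NUM_FEATURES low-precision codes are digitized instead of 32x32 pixels.
print(digital_features.shape, f"{ADC_BITS}-bit codes")
```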
Event Vision-based Multi-Agent System Prediction
I developed a method for predicting collective dynamics in multi-agent systems using event-based vision, which captures only brightness changes and thus offers an efficient alternative to traditional frame-based sensing. Unlike conventional approaches that rely on precise agent localization, my method infers collective behaviors directly from visual observations using evMAP, a transformer-based model that captures collective dynamics in real time at lower computational cost. The results establish event vision as an effective standalone tool for understanding complex multi-agent interactions, outperforming frame-based methods. This work was funded by DARPA Eventformer and is related to the DARPA FENCE Program.
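As a rough illustration of this modeling style, the sketch below runs a generic transformer encoder over binned event tokens to predict a scalar collective-dynamics quantity; the tokenization, dimensions, and class name (EventTransformerSketch) are assumptions for illustration and not evMAP's actual architecture.

```python
import torch
import torch.nn as nn

class EventTransformerSketch(nn.Module):
    """Toy stand-in for a transformer over event-camera input: events are
    binned into per-patch tokens, encoded by self-attention, and pooled into
    a prediction of a collective-dynamics quantity (e.g., group consensus)."""
    def __init__(self, num_patches: int = 64, d_model: int = 128):
        super().__init__()
        # Each token holds ON/OFF event counts for one patch in one time bin.
        self.embed = nn.Linear(2, d_model)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # scalar dynamics estimate

    def forward(self, event_tokens: torch.Tensor) -> torch.Tensor:
        # event_tokens: (batch, num_patches, 2) ON/OFF event counts per patch
        x = self.embed(event_tokens) + self.pos
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # pool over patches

# One time bin of a 64-patch event grid from a hypothetical multi-agent scene.
tokens = torch.rand(1, 64, 2)
print(EventTransformerSketch()(tokens).shape)  # torch.Size([1, 1])
```

The point of the sketch is that no per-agent localization or tracking is performed: the model reasons over raw event statistics and outputs a collective-level prediction directly.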