"Simplicity is the ultimate sophistication" 

— Leonardo da Vinci

2023~2027 Broadening Participation Champion

CoCoSys aims to enable the next generation of collaborative human-AI systems through synergistic advances in algorithms, hardware motifs, algorithm-hardware co-design, and collective and collaborative intelligence. The center will demonstrate the impact of these advances in collaborative human-AI applications such as co-bots, future digital assistants, and mixed-reality systems. These applications are characterized by the need for real-time response, extreme energy efficiency, explainability, and trustworthiness, and will thus serve as ideal drivers for the proposed research. The innovations delivered by CoCoSys will advance the brittle and limited capabilities of current AI systems toward the vision of digital humans augmenting biological humans as trusted collaborators.

To pursue the overarching vision and goals stated above, the center will adopt a vertically integrated approach consisting of synergistic efforts in neural, symbolic, and probabilistic algorithms; algorithm-hardware co-design; technology-driven hardware motifs; and collective and collaborative intelligence. Theme 1 (Neural, symbolic, and probabilistic algorithms) will create the next generation of explainable algorithms, expand the scope of neuro-inspired algorithms from perception to reasoning and decision making, and uncover the fundamental accuracy-robustness-efficiency tradeoffs in cognitive systems. Theme 2 (Algorithm-hardware co-design) will distill the key computational characteristics of the future cognitive workloads developed in Theme 1 and use them to drive the design of the next generation of programmable hardware architectures for cognitive computing. This theme will play a key role in ensuring that the developed algorithms are well matched to the proposed hardware fabrics, and vice versa. Theme 3 (Technology-driven hardware motifs) will design the building blocks of future cognitive hardware platforms by matching the unique capabilities of various CMOS and beyond-CMOS devices and integration technologies to the needs of the workloads, seeking quantum leaps in energy efficiency and performance. Theme 4 (Collaborative intelligence) will focus specifically on the challenges involved in collections of AI agents and in how AI agents interact with humans.

Our task in CoCoSys focuses on enhancing the impact of educational outreach in the Greater Atlanta area and across Georgia. Specifically, we seek to build on the CoCoSys center at Georgia Tech to promote both undergraduate and graduate education in computer engineering at regional universities and colleges, including Kennesaw State University (KSU) and Morehouse College.

Visual processing tasks such as detection, tracking, and localization are essential to the automation of unmanned aerial vehicles (UAVs), robots, surveillance, and defense systems. However, these intelligent tasks become challenging under high-speed motion and on edge devices with limited computing resources and power budgets. This research will explore a brain-inspired framework that processes visual information from two complementary sensors, event-based dynamic vision sensors (DVS) and frame-based standard cameras, in a sensor-fusion style. The overarching goal is to address the challenge of high-speed, energy-efficient visual processing with end-to-end closed-loop control on edge computing systems. The proposed research will benefit numerous robotics, surveillance, IoT security, and national defense applications. This work will also explore novel hybrid neural networks, thus contributing to the quest for general AI and enhancing interdisciplinary collaboration between computer science and neuroscience.

The proposed project will exploit the synergy of two brain-inspired learning models: neuromorphic spiking neural networks and conventional deep neural networks. Such a hybrid neuromorphic framework can harness the high spatial resolution of a standard camera and the high temporal resolution of a DVS camera. The temporally encoded data from the DVS camera are well suited to processing in a spiking neural network, while the data from the standard camera are compatible with traditional convolutional networks. This project will 1) design a hybrid neuromorphic framework composed of spiking neural networks and conventional artificial neural networks to process event-frame fused visual data; 2) adapt this framework to UAVs and robots and develop an end-to-end closed-loop neuromorphic platform for various high-speed visual tasks; and 3) explore model compression for hybrid neural networks and the architecture design of a hardware accelerator for the proposed framework.
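As a minimal sketch of how the two branches could be fused, the toy Python below (not the project code; the LIF layer, pooling feature, and array sizes are illustrative assumptions) runs a DVS-style event stream through a leaky integrate-and-fire layer, runs a frame through a crude CNN-style pooling feature, and concatenates the two feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_layer(spike_train, decay=0.9, threshold=1.0):
    """Leaky integrate-and-fire layer: integrates a (T, N) binary spike
    train and emits a spike whenever the membrane potential crosses
    the threshold, then resets."""
    T, N = spike_train.shape
    v = np.zeros(N)
    out = np.zeros_like(spike_train)
    for t in range(T):
        v = decay * v + spike_train[t]
        fired = v >= threshold
        out[t] = fired
        v[fired] = 0.0          # reset membrane after a spike
    return out

def frame_features(frame, k=3):
    """Toy CNN-style feature: mean-pool the frame into k x k blocks."""
    h, w = frame.shape
    hh, ww = h - h % k, w - w % k
    return frame[:hh, :ww].reshape(k, hh // k, k, ww // k).mean(axis=(1, 3)).ravel()

# Synthetic inputs: 100 time steps of DVS events over 64 "pixels",
# plus one 32x32 grayscale frame from the standard camera.
events = (rng.random((100, 64)) < 0.05).astype(float)
frame = rng.random((32, 32))

snn_feat = lif_layer(events).mean(axis=0)   # rate code per event pixel
cnn_feat = frame_features(frame)            # spatial feature from frame
fused = np.concatenate([snn_feat, cnn_feat])
print(fused.shape)  # (73,): 64 event-rate features + 9 pooled frame features
```

In a real system the fused vector would feed a downstream detection or tracking head; the point here is only the complementary temporal (SNN) and spatial (CNN) pathways.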

My postdoctoral research at ICSRL, Georgia Tech is primarily funded by these two Joint University Microelectronics Program (JUMP) centers of the Semiconductor Research Corporation (SRC).

C-BRIC, led by Prof. Kaushik Roy (Purdue ECE), is a five-year ten-university collaborative project supported by $27 million in funding from the SRC and DARPA. The mission of the Center for Brain-inspired Computing (C-BRIC) is to deliver key advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems such as self-flying drones and interactive personal robots.

ASCENT is a microelectronics research center funded by the SRC and DARPA. The center's mission is to provide breakthrough advances in integrated nanoelectronics to sustain the promise of Moore's Law. ASCENT is led by the University of Notre Dame (Prof. Suman Datta) along with 13 partner universities and 29 principal investigators.

My contribution includes a new dynamical model for a ferroelectric FET (FeFET) based circuit, which uses the hysteresis of the FeFET and a traditional MOSFET as a switch to charge and discharge a load capacitor, resulting in the periodic generation of voltage spikes. The proposed model captures the dynamical behavior of the FeFET neuron, reproducing both spike timing and spiking frequency. We demonstrate that the FeFET-based spiking neuron, under different excitatory and inhibitory inputs, can imitate various spiking patterns of cortical neurons, similar to Izhikevich's model of cortical neurons. (For more detail, please see my IEEE EDL paper.)
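The charge/discharge mechanism can be sketched as a relaxation oscillator. The toy simulation below assumes an idealized hysteretic switch and made-up device constants (it is not the calibrated model from the EDL paper), but it shows the key qualitative behavior: a stronger input current charges the capacitor faster, so the spiking frequency rises with input.

```python
import numpy as np

def fefet_neuron(i_in, dt=1e-6, C=1e-9, v_high=0.8, v_low=0.2):
    """Toy relaxation-oscillator neuron. A hysteretic switch (stand-in
    for the FeFET) starts discharging the load capacitor once its
    voltage reaches v_high and re-enables charging once it falls back
    to v_low, producing a periodic train of voltage spikes.
    i_in: input current per time step (A). Returns spike times (s)."""
    v, discharging, spikes = 0.0, False, []
    for t, i in enumerate(i_in):
        if discharging:
            v -= 5e-3            # fast discharge through the switch
            if v <= v_low:
                discharging = False
        else:
            v += i * dt / C      # charge the capacitor with input current
            if v >= v_high:
                spikes.append(t * dt)
                discharging = True
    return spikes

# Stronger excitatory input -> higher spiking frequency.
weak = fefet_neuron(np.full(20000, 2e-6))
strong = fefet_neuron(np.full(20000, 8e-6))
print(len(weak), len(strong))   # the strong input produces more spikes
```

Richer cortical firing patterns (bursting, adaptation, etc.) require the full device model with its excitatory and inhibitory inputs; this sketch only captures the basic integrate-and-fire cycle.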

As a next step, I leveraged the FeFET neuron model to explore the relationship between swarm intelligence (SI) and spiking neural networks (SNNs). I believe they can be complementary and even benefit each other. To bridge these two distinct research fields, I proposed a new optimization solver composed of multiple spiking ferroelectric neural networks that resemble the behavior of swarm intelligence. Such a computing platform can solve optimization problems, such as parameter optimization of continuous functions and the NP-hard path-planning problem (the traveling salesman problem, TSP), with high energy efficiency. The innovation of this work is not limited to the combination of SI and SNNs: we use different neural dynamics for different problems, a rate-based representation in the fast-spiking mode for optimizing continuous objective functions and a phase-based representation in the regular-spiking mode for solving the TSP. (Frontiers in Neuroscience, 2019)
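Abstracting away the spiking dynamics, the swarm-style search itself can be sketched as a plain particle-swarm loop on a multimodal 2-D function; here each particle stands in for one spiking network whose firing rate would encode its candidate solution, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    """Continuous 2-D test function with many local minima; global minimum 0 at the origin."""
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

n, steps = 30, 200
pos = rng.uniform(-5.12, 5.12, (n, 2))       # candidate solutions
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), rastrigin(pos)  # per-particle best
gbest = pbest[np.argmin(pbest_val)]            # swarm-wide best

for _ in range(steps):
    r1, r2 = rng.random((2, n, 1))
    # inertia + attraction toward personal and global bests
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)
    val = rastrigin(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print(rastrigin(gbest))  # close to the global minimum of 0
```

In the actual solver, the attraction terms are realized through the coupling of rate-coded spiking networks rather than explicit vector arithmetic.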

This semester, I am designing a mixed-signal ASIC chip for swarm optimization.

Concept Art: FeFET neuron mimics various neuronal dynamics

Swarm vs. Individual

SI-SNN architecture for parameter optimization of continuous functions

Example of Swarm Optimization on a continuous 2D function with multiple local minima

Neuromorphic Robot Team in ICSRL

The neuromorphic robot team includes a postdoc (me), a PhD student, and four undergraduate students (see the photo on the home page). The team started when Prof. Raychowdhury paired me with students who were interested in doing research. At the beginning, the team was founded to explore neuromorphic central pattern generators for gait control. It now covers multiple research projects that emphasize end-to-end, energy-efficient edge intelligence for robots (including legged robots and drones). We take advantage of the energy efficiency of event/spike-based processing to design neuromorphic systems capable of real-time online learning.

For instance, Ashwin proposed a closed-loop online learning method for a spiking central pattern generator (SCPG) for autonomous legged robots on edge computing platforms. The SNN-based algorithm enables a hexapod robot to learn to walk autonomously, without supervision, with high energy efficiency. The results are published in AICAS 2020 and IEEE JETCAS. Further, we trained the CPG to generate multiple hexapod gaits, allowing transitions between gaits to execute specific tasks. We then incorporated SNN-based visual processing of the event data generated by the DVS to actuate the SCPG, achieving a nearest-object tracking system. To the best of our knowledge, this is the first demonstration of the natural coupling of the event data flow from a DVS through an SNN to a neuromorphic locomotion system, exploiting the energy advantage inherent in both.
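A highly simplified, non-spiking sketch of the CPG idea is a network of coupled phase oscillators (this is not Ashwin's SCPG learning rule; the coupling gain, frequency, and phase targets are illustrative). Six leg oscillators are coupled so that they settle into the alternating-tripod phase pattern of a hexapod gait:

```python
import numpy as np

n, dt, steps = 6, 0.01, 3000
omega = 2 * np.pi * 1.0                             # 1 Hz stepping frequency
target = np.array([0, np.pi, 0, np.pi, 0, np.pi])   # tripod gait phase offsets

phase = np.random.default_rng(2).uniform(0, 2 * np.pi, n)
for _ in range(steps):
    coupling = np.zeros(n)
    for i in range(n):
        for j in range(n):
            # pull each oscillator pair toward its desired phase difference
            coupling[i] += np.sin(phase[j] - phase[i] - (target[j] - target[i]))
    phase += dt * (omega + 2.0 * coupling)

# residual error between actual and desired leg phase offsets, wrapped to (-pi, pi]
rel = np.mod(phase - phase[0] - target + target[0] + np.pi, 2 * np.pi) - np.pi
print(np.max(np.abs(rel)))  # near 0: legs locked into the tripod gait
```

Switching `target` to a different offset vector is the analogue of commanding a gait transition; in the real system the offsets emerge from learned synaptic delays rather than being hard-coded.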

Kasey Cervantes is an undergraduate student from Emory University. He was recruited to the team when we were named a 2020 Petit Scholar and Mentor, respectively. The Petit Undergraduate Research Scholars program is a competitive fellowship that serves to develop the next generation of leading bioengineering and bioscience researchers by providing a comprehensive, year-long research experience. Our proposal, "Neuromorphic Intelligent Central Pattern Generator for Robots and Prostheses," was selected out of 180 candidates as one of 16 winners, and we received a $10k grant for the student stipend and research expenses. Kasey is currently working on the neuromorphic CPG and a brain-computer interface for prosthetics.

The remaining undergraduate members of our team work on their own topics as well. Justin Ting demonstrated gait imitation between hexapod robots using an event-based vision sensor and an SNN, and published his work in IJCNN 2020.

Left: Illustration of the signal flow involved during locomotion in a human. Right: Overview of bioinspired CPG‐based robotic locomotion control.

(a) Closed loop locomotion system schematic. (b) Online learning of gaits. Black boxes indicate that the neuron spikes at that time instance.

Swarm of microrobots

Recently, our research team has been collaborating with Dr. Azadeh Ansari's team, which designs and fabricates monolithic microrobots. These robots are actuated by the resonance of a soft bristle structure instead of motors, so they can be 3D-printed at a small size, and their steering is controlled by the oscillation frequency of the input signals. In this project, our goal is to achieve collective intelligence among microrobots. Each microrobot is expected to have an independent power supply, sensors, and on-device intelligence. One challenge comes from the limited battery energy relative to complex on-chip intelligent functions such as navigation, visual recognition, and actuation control. Another difficulty is boosting the 3.7 V lithium battery DC voltage into a 20~25 V square- or sine-wave AC signal on the same chip. We are introducing brain-inspired algorithms and paradigms into the SoC design of such an intelligent system for energy efficiency. The microrobotic swarm targets applications in search, investigation, and analysis.

Above: A tethered bristle robot smaller than 1 cm x 1 cm. Right: Prototype of a quarter-sized wireless bristle robot with an MCU, voltage-boost circuits, and a battery. In the future, the MCU and boost converter will be integrated into an ASIC chip stacked with sensors and a solid-state battery. Such system integration downscales the robot one step further and provides more intelligent functions.

Georgia Tech News on Dr. Ansari's robot.

“Sensing and Computing with Oscillating Chemical Reactions”

Sep 2016 ~ Present

In this NSF project, we design a sensing and computing system utilizing an oscillating chemical reaction, the Belousov-Zhabotinsky (BZ) reaction. Our goal is to develop materials that compute by using non-linear, oscillating chemical reactions; we focus on polymer gels undergoing the oscillatory BZ reaction. The novelty of our approach is in employing hybrid gel-piezoelectric MEMS to couple local chemo-mechanical oscillations over long distances through electrical connections. Our modeling revealed that: (1) the interaction between two such units is sufficiently strong to yield synchronization of the gels' oscillations; (2) the mode of synchronization is determined by the polarity of the connection; and (3) each mode has a distinctive pattern of oscillations and generates a distinct voltage potential. The results indicate the feasibility of using hybrid gel-piezoelectric micro-electro-mechanical systems (MEMS) for oscillator-based unconventional computing.
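Findings (1) and (2) can be illustrated with a toy two-oscillator model in which the sign of the coupling stands in for the polarity of the electrical connection (the dynamics and constants below are illustrative, not the BZ-PZ model from the papers): positive coupling drives the units in-phase, negative coupling drives them anti-phase.

```python
import numpy as np

def couple(polarity, steps=20000, dt=1e-3, k=0.5):
    """Two phase oscillators standing in for the BZ-PZ units.
    polarity = +1 pulls them toward in-phase synchronization,
    polarity = -1 toward anti-phase. Returns the final phase
    difference wrapped to (-pi, pi]."""
    phase = np.array([0.3, 2.9])        # arbitrary initial phases
    omega = 2 * np.pi * 0.1             # common natural frequency
    for _ in range(steps):
        d = phase[1] - phase[0]
        phase[0] += dt * (omega + polarity * k * np.sin(d))
        phase[1] += dt * (omega - polarity * k * np.sin(d))
    return np.mod(phase[1] - phase[0] + np.pi, 2 * np.pi) - np.pi

print(abs(couple(+1)))   # ~0: in-phase mode
print(abs(couple(-1)))   # magnitude ~pi: anti-phase mode
```

Each synchronization mode would produce its own distinctive voltage pattern at the piezoelectric element, which is what makes the modes readable as computational states.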

The devised BZ-PZ hybrid materials can sense, actuate, and compute for pattern recognition without an external power supply.

Paper references: Science Advances 2016, Chemical Communications 2017, Chaos 2018, Journal of Applied Physics 2018

A video shows the Belousov-Zhabotinsky reaction and a self-oscillating gel.

The whole system is composed of multiple hybrid oscillator networks. Each stored pattern is retrieved through the synchronization process of its BZ-PZ network. The rate of synchronization (or convergence time) provides a distance metric between the input pattern and the stored patterns, which can be used for zero-shot learning.


This collaborative project aims at helping blind and vision-impaired people by developing AI systems built on computer vision and machine learning. The wearable intelligent system processes video data from a camera and performs image analysis, feature extraction, and pattern recognition, much like the human visual cortex. It interacts with users and provides valuable supportive information to help with their daily activities. I am very enthusiastic about this project since I am partially vision-impaired myself. The prototype of this project was tested to help blind people in a shopping scenario.

Our group focuses on the hardware design of the image processing pipeline, especially for pattern matching. We are developing circuits, architectures, and algorithms around emerging nano-oscillators, such as spin-torque oscillators (STOs) and vanadium oxide oscillators. When these oscillators are coupled together, their synchronization and desynchronization can perform pattern matching. Furthermore, these nano-oscillators can also implement convolution operations and image segmentation. Currently, we focus on designing computing paradigms for object recognition models, HMAX and convolutional neural networks (CNNs). These models can be accelerated by employing the non-Boolean computing paradigm and nano-oscillators with high operating frequencies (GHz level for STOs).
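The synchronization-based matching idea can be sketched with toy phase oscillators (the dynamics and all constants below are illustrative, not a model of the actual STO or vanadium oxide devices): each pixel's mismatch between the probe and a stored pattern detunes one oscillator from a reference, and the time the array takes to phase-lock serves as a distance metric.

```python
import numpy as np

def lock_time(stored, probe, k=1.0, dt=1e-2, max_steps=5000, tol=0.05):
    """Toy oscillator pattern matcher. Each pixel drives one oscillator
    whose frequency detuning from a reference encodes the probe-stored
    mismatch. Small mismatches phase-lock quickly, large ones slowly,
    and mismatches beyond the locking range (|detuning| > k) never
    lock, so the locking time acts as a pattern distance."""
    detune = probe - stored
    phase = np.zeros(len(stored))
    for step in range(max_steps):
        vel = detune - k * np.sin(phase)   # phase dynamics of each unit
        phase += dt * vel
        if np.max(np.abs(vel)) < tol:
            return step * dt               # time until all units lock
    return np.inf                          # outside locking range: no match

stored = np.array([0.9, 0.1, 0.8, 0.2])
close = np.array([0.8, 0.2, 0.7, 0.3])     # small per-pixel mismatch
far = np.array([0.2, 0.9, 0.1, 0.9])       # large per-pixel mismatch
print(lock_time(stored, close), lock_time(stored, far))
```

Running one such array per stored template and taking the fastest-locking template gives a nearest-pattern classification, which is the role the coupled nano-oscillators play in the pipeline.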

Object detection and localization in video analysis with the CNN accelerated with coupled oscillator computing model (120x Acceleration, 4.6x Energy Reduction)

“Ultra-Low Power Non-Boolean Systems”                 

2011 ~ 2013

This project was a feasibility study of building non-Boolean computing systems with emerging nano-device technologies, funded by the Intel Labs University Research Office. The motivation for this project came from a series of reports by Intel Labs on emerging nano-devices intended to address the scaling problems of CMOS technology. These novel devices failed to outperform traditional CMOS in general Boolean logic computing. Nonetheless, due to their nonlinearity and multiple stable states, we noticed the potential of these new devices in unconventional computing systems that do not use Boolean logic, with applications including pattern recognition, neural networks, and image processing.

This short research project was successful as a proof of concept and feasibility exploration, and it led to several follow-on research projects and grants. During this project, I finished my M.S. thesis, in which a tree-structured hierarchical associative memory was proposed based on a nano-oscillator network. Our research on oscillators also dates from that time.