Our current work focuses on two areas:
Low-power Neuro-inspired (Neuromorphic) circuits and algorithms for Machine Learning
Low-power circuits and systems for Neural Interfacing
We are also interested in commercializing these innovations.
With increasing amounts of data being collected from myriad sensors in the age of the Internet of Things, it is increasingly important to find methods to process that data as early as possible. Machine Learning is therefore of prime interest, since it allows us to extract information or patterns from the data that lead to insights and to potential actions based on those insights. We take inspiration from one of the best known low-power pattern recognizers in the world--our brain! We can easily recognize a known face amidst tens of faces in a video--a task that is still difficult for current computer vision algorithms. More importantly, we do it at power levels that are orders of magnitude lower than current GPUs and CPUs.
Specific projects that we are currently working on include:
Computation using Mismatch
Process-variation-induced mismatch between transistors is a major threat to low-voltage, low-power processing in deep sub-micron CMOS. This is a problem faced by neurons in our brain as well. Inspired by this, we are developing machine learning systems that can utilize this mismatch to perform effective computation at much lower power than their digital counterparts. For example, we have developed microwatt machine learners based on the Extreme Learning Machine (ELM) algorithm and used them for detecting patterns in biomedical signals (e.g., spike sorting, seizure detection) as well as classifying images (e.g., handwritten digits) and speech commands (e.g., spoken digits passed through a neuromorphic cochlea). This work has the potential to be applied to smart sensors, wearable devices, and the Internet of Things. These chips have also been used at the NSF-sponsored annual Neuromorphic Cognition Workshop in Telluride, Colorado.
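To give a flavor of the ELM idea in software (a minimal sketch, not our chip implementation; the dataset, dimensions, and activation are placeholder choices), the fixed random input weights below play the role that transistor mismatch plays on chip, and only the output weights are trained:

```python
import numpy as np

# Minimal ELM-style classifier sketch. On chip, the random input weights W_in
# are "free": they come from device mismatch rather than stored parameters.
rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    # X: (n_samples, n_features), y: (n_samples,) integer class labels
    n_features = X.shape[1]
    W_in = rng.standard_normal((n_features, n_hidden))   # fixed random layer (mismatch analogue)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                            # hidden-layer activations
    Y = np.eye(y.max() + 1)[y]                           # one-hot targets
    W_out = np.linalg.pinv(H) @ Y                        # only the output weights are learned
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return np.argmax(np.tanh(X @ W_in + b) @ W_out, axis=1)

# Toy usage with random placeholder data
X = rng.standard_normal((200, 16))
y = (X[:, 0] > 0).astype(int)
W_in, b, W_out = elm_train(X, y)
print("train accuracy:", np.mean(elm_predict(X, W_in, b, W_out) == y))
```

Because only the output layer is trained, the random (mismatched) first stage never needs to be calibrated or stored, which is what makes this family of algorithms attractive for analog implementation.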
Dendritic processing and Structural Plasticity
Most large-scale cortical simulations as well as ANN models have ignored the role of dendrites and reduced them to linear summers. However, neuroscientific experiments over the last decade provide ample evidence of non-linear processing performed by dendrites. Also, most models of learning in the AI/Neuroscience communities focus on changing weights, whereas an alternative medium of learning--structural plasticity--forms and eliminates connections in our brains as we learn. We are developing machine learning systems that utilize nonlinear dendrites (NLD) and structural plasticity with 1-bit synapses. Using low-resolution synapses helps in designing robust analog learning chips and reduces memory storage. Since the learning operates by changing connections, it can be exploited using address event representation (AER) techniques in neuromorphic VLSI implementations without additional overhead.
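As a loose software illustration (the swap rule here is a simplified stand-in, not our published learning algorithm, and all sizes and the task are placeholders), the sketch below shows a neuron with 1-bit synapses on squaring dendritic branches, where learning proceeds by rewiring connections rather than adjusting analog weights:

```python
import numpy as np

# Toy structural-plasticity sketch: a neuron with several nonlinear dendrites,
# each connected to a few input lines through 1-bit (present/absent) synapses.
# Learning rewires connections instead of changing analog weights. The swap
# rule (propose a random rewiring, keep it if training error does not grow)
# is a simplified stand-in for published correlation-based rules.
rng = np.random.default_rng(1)
N_INPUTS, N_DENDRITES, SYN_PER_DENDRITE = 32, 8, 4

def dendritic_output(conn, x):
    # conn: (N_DENDRITES, SYN_PER_DENDRITE) indices of connected input lines
    branch_sums = x[conn].sum(axis=1)      # linear sum on each branch
    return np.sum(branch_sums ** 2)        # squaring = simple dendritic nonlinearity

# Random binary input patterns and labels for the toy task
X = rng.integers(0, 2, size=(100, N_INPUTS))
y = rng.integers(0, 2, size=100)

def training_error(conn):
    scores = np.array([dendritic_output(conn, x) for x in X])
    pred = (scores > np.median(scores)).astype(int)
    return np.mean(pred != y)

conn = rng.integers(0, N_INPUTS, size=(N_DENDRITES, SYN_PER_DENDRITE))
for _ in range(200):
    # Propose rewiring one synapse to a new input line; keep it if error does not increase
    d, s = rng.integers(N_DENDRITES), rng.integers(SYN_PER_DENDRITE)
    trial = conn.copy()
    trial[d, s] = rng.integers(N_INPUTS)
    if training_error(trial) <= training_error(conn):
        conn = trial
print("final training error:", training_error(conn))
```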
Learning Synapses with novel devices
The synapses in a neural network, both artificial and biological, outnumber the neurons by a factor of 100-1000. Hence, it is of utmost importance to make the corresponding circuits compact and low-power. The problem is compounded by the fact that these devices have to exhibit learning through modification of their strengths and have to store those strengths in a non-volatile fashion. To achieve this, we use flash or floating-gate memories as compact learning synapses and have demonstrated doublet and triplet STDP in these devices. We are also starting to explore the use of spintronic domain-wall memories as low-voltage learning synapses, overcoming the high write voltage required for tunneling in floating gates.
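For reference, the pair-based (doublet) STDP window that such devices need to reproduce can be written as a simple rule in software; the amplitudes and time constants below are illustrative textbook values, not measurements from our floating-gate synapses:

```python
import numpy as np

# Pair-based (doublet) STDP: a synapse is potentiated when a presynaptic spike
# precedes a postsynaptic spike, and depressed when the order is reversed.
# Parameters are illustrative, not device measurements.
A_PLUS, A_MINUS = 0.01, 0.012       # update amplitudes
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3  # time constants (s)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair, dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: the classic exponential STDP window
for dt_ms in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt_ms:+3d} ms -> dw = {stdp_dw(0.0, dt_ms * 1e-3):+.4f}")
```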
Dynamical systems guided Neuromorphic Design
To reduce the footprint of neuronal circuits exhibiting bio-realistic dynamics, we use tools from dynamical systems theory--bifurcation analysis, phase response curves, etc.--to simplify the differential equations to be implemented in silicon. We have used this method to design the world's lowest-power neuron exhibiting Type I dynamics and the smallest central pattern generator for locomotion control.
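As a point of comparison (a textbook model, not our circuit), the sketch below simulates the quadratic integrate-and-fire neuron, the normal form near a saddle-node bifurcation and the canonical Type I model; all parameter values are illustrative:

```python
# Quadratic integrate-and-fire (QIF) neuron: the normal form of a saddle-node
# bifurcation and the canonical Type I model, whose firing rate rises
# continuously from zero as the input current crosses threshold.
# All values here are illustrative, not circuit measurements.
TAU = 10e-3                  # membrane time constant (s)
DT, T_END = 1e-5, 1.0        # integration step and duration (s)
V_RESET, V_PEAK = -10.0, 10.0

def qif_rate(I):
    """Simulate tau * dv/dt = v**2 + I and return the firing rate in Hz."""
    v, spikes = V_RESET, 0
    for _ in range(int(T_END / DT)):
        v += DT * (v * v + I) / TAU
        if v >= V_PEAK:       # spike: reset and count
            v = V_RESET
            spikes += 1
    return spikes / T_END

# Type I signature: arbitrarily low firing rates just above the bifurcation at I = 0
for I in (0.05, 0.5, 2.0, 10.0):
    print(f"I = {I:5.2f} -> rate = {qif_rate(I):6.1f} Hz")
```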
Acquiring signals from the brain is extremely important for understanding brain function and providing potential cures for brain diseases and abnormal function (e.g., epilepsy, tremor). In general, interfacing with neurons can be broadly categorized into two classes: extracellular and intracellular electrophysiology. In extracellular methods, we focus on neural signal recording from implants in the brain. These systems need to record uV-level signals in the range of 1-100 Hz for local field potentials (LFP) and 0.2-5 kHz for spikes, from hundreds of electrodes in parallel, while dissipating minimal power (to avoid tissue damage). These signals eventually need to be digitized and transmitted off-chip wirelessly. We focus on novel, micropower signal acquisition and conditioning circuits/techniques that reduce the burden on the ADC and transmitter, so that such systems can scale to thousands of channels in the future.
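For context, a software illustration of the band split mentioned above (not our analog front-end; the sampling rate and filter orders are placeholder choices) can be written with standard Butterworth filters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Split a neural recording into the LFP band (1-100 Hz) and the spike band
# (0.2-5 kHz). Sampling rate and filter orders are placeholder choices.
FS = 20_000  # samples per second

lfp_sos = butter(4, [1, 100], btype="bandpass", fs=FS, output="sos")
spike_sos = butter(4, [200, 5000], btype="bandpass", fs=FS, output="sos")

def split_bands(x):
    """Return (lfp, spikes): band-limited versions of the raw trace x."""
    return sosfiltfilt(lfp_sos, x), sosfiltfilt(spike_sos, x)

# Toy usage on synthetic data: a 10 Hz "LFP" plus brief spike-like impulses
t = np.arange(0, 1.0, 1 / FS)
raw = 50e-6 * np.sin(2 * np.pi * 10 * t)   # 50 uV LFP component
raw[::2000] += 100e-6                      # crude spike-like impulses
lfp, spikes = split_bands(raw)
print("LFP rms (uV):", 1e6 * np.std(lfp), "spike-band rms (uV):", 1e6 * np.std(spikes))
```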
Some topics are:
Neural Amplifiers
We have designed digitally assisted neural amplifiers that exploit the statistics of neural signals to provide a dynamic range beyond what the chip's power supply would normally allow.
Spike Detector
We have designed current-mode, sub-uW (the lowest reported to date) neural spike detectors by approximating the Nonlinear Energy Operator (NEO). The current-mode design allows the power supply to be lowered and lets derivatives of the NEO be reused for the feature extraction required by spike sorting.
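For reference, the NEO itself is psi[n] = x[n]^2 - x[n-1]*x[n+1]; the sketch below applies it in software with a simple threshold (the threshold multiplier is an arbitrary illustrative choice, and this is not the current-mode circuit):

```python
import numpy as np

def neo(x):
    """Nonlinear Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, k=8.0):
    """Flag samples whose NEO output exceeds k times its mean (k is an illustrative choice)."""
    psi = neo(x)
    return np.flatnonzero(psi > k * psi.mean())

# Toy usage: noise with a few injected spike-like transients
rng = np.random.default_rng(2)
x = rng.normal(0, 1.0, 5000)
x[[1000, 2500, 4000]] += 15.0
print("detected spike sample indices:", detect_spikes(x))
```

Because the NEO emphasizes signals that are both large and rapidly changing, it responds strongly to spikes while remaining small for slow background activity.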
Spike Sorting
We have designed uW-range machine learners based on the Extreme Learning Machine (ELM) for supervised spike sorting, an approach similar to template matching.
Intention Decoding
We have used ELM-based circuits to decode motor intention from spike trains recorded in the motor cortex. The design was benchmarked against a software simulation decoding individuated finger movements of monkeys and achieved comparable performance. This sub-uW design is the world's first intention decoder with power low enough to be implanted. The chip also features an algorithm to improve performance when some neural channels lose information over time.