Deep Learning (DL) and Neuromorphic Computing (NC) are often perceived as competing rather than complementary technologies. However, the two approaches are inherently synergistic, with each best suited to different tasks, or even to different subtasks within the same system. The goal of this topic area is to combine conventional DL and NC approaches to achieve state-of-the-art demonstrations that overcome the limitations of each approach. This will involve quantitatively comparing the approaches (both algorithms and hardware) on different metrics and tasks to identify where each should be used.
The topic will focus on applications relevant to a real-world agent, such as navigation and processing time-varying sensory signals. Participants will have access to the latest hardware systems, from conventional DL accelerators (Google Coral, NVIDIA Jetson, Intel Movidius) to more neuro-inspired DL architectures (GrAI One) and fully spike-based architectures (Loihi).
In an ANN performing convolution on an image, the required operations and memory are known exactly and can be optimized for implementation. Results can be stored retinotopically, providing an efficient dense representation in memory. However, activations become sparser in deeper ANN layers, and this is where NC excels. At some point, a sparse address-event representation (AER) will require less memory, and NC will benefit from skipping unnecessary computation. There is therefore room to combine DL and NC to improve performance even within a single frame-based ANN.
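The crossover point can be estimated with simple arithmetic. The following sketch (the layer size and the assumed 4-byte addresses and values are illustrative, not measured from any of the hardware above) compares the memory cost of a dense retinotopic map against a sparse AER-style list of (address, value) pairs as activation sparsity increases:

```python
import numpy as np

def dense_bytes(n_units, dtype=np.float32):
    # Dense retinotopic storage: one value per unit, active or not.
    return n_units * np.dtype(dtype).itemsize

def aer_bytes(n_active, addr_bytes=4, val_bytes=4):
    # AER-style storage: only active units, each as (address, value).
    return n_active * (addr_bytes + val_bytes)

n_units = 64 * 64 * 128  # e.g. one conv feature map (assumed size)
for frac_active in (0.9, 0.5, 0.1, 0.01):
    n_active = int(n_units * frac_active)
    print(f"{frac_active:5.2f}  dense={dense_bytes(n_units):>9}  "
          f"aer={aer_bytes(n_active):>9}")
```

With these assumptions, the sparse list wins once fewer than half the units are active, which is why early dense layers favor conventional DL storage while deep sparse layers favor an NC-style event representation.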
Real-world sensory signals are also typically sparse in time, opening another opportunity for optimization. Neuromorphic vision sensors exploit this sparsity, as do neuro-inspired computing architectures such as GrAI One and Loihi. Real-world tasks such as visual tracking also benefit from maintaining the state of the world over time. This state can be maintained using a recurrent neural network on top of an ANN. Recurrent neural networks are well suited to NC architectures, but are often not supported on ANN edge devices, providing another opportunity to augment DL by including NC.
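The "recurrent state on top of an ANN" idea can be sketched in a few lines. This is a minimal Elman-style recurrent update in numpy; the feature and state dimensions are assumptions, and the random vectors stand in for per-frame features from a frame-based ANN:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_hid = 16, 8  # assumed feature / state sizes
W_in = rng.normal(scale=0.1, size=(n_hid, n_feat))
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))

def step(h, x):
    # h: state carried across frames; x: current frame's ANN features.
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(n_hid)
for _ in range(5):                # five dummy frames
    x = rng.normal(size=n_feat)   # stand-in for ANN feature vector
    h = step(h, x)
print(h.shape)  # (8,)
```

The per-frame feed-forward pass maps naturally onto a DL accelerator, while the stateful recurrent update is exactly the kind of computation NC hardware handles natively, so the split between the two falls out of the architecture itself.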
Real-world agents also benefit from multimodal sensing, and spiking neural networks (SNNs) provide a natural way to combine signals with different sampling rates. Recent results suggest that SNNs can efficiently implement associative memory, allowing the binding of features from multiple modalities such as odometry, sound, and vision. An associative memory can be used on top of ANN feature extractors to enable querying memory by features.
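To make the "querying memory by features" idea concrete, here is a classical Hopfield-style associative memory with Hebbian (outer-product) storage. The binary encoding, pattern count, and the split into a "vision" half and a "sound" half are assumptions for illustration, not the SNN implementation referred to above:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64                                        # assumed size per modality
patterns = np.sign(rng.normal(size=(3, 2 * d)))  # 3 bound multimodal patterns

# Hebbian outer-product storage, zero diagonal.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0   # break ties deterministically
    return s

# Query with vision features only: the sound half is zeroed out.
cue = patterns[0].copy()
cue[d:] = 0
out = recall(cue)
print(np.array_equal(out, patterns[0]))  # the full bound pattern returns
```

Given one modality's features as a cue, the dynamics complete the missing modality, which is the binding behavior the paragraph describes.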
Finally, NC enables online learning, which is useful at the edge for tasks such as adaptive control, map formation, one-shot pattern memorization, and adapting to new sensory environments.
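As a toy illustration of online adaptation at the edge, the sketch below trains a single linear unit with a one-sample-at-a-time delta rule; the "environment" (a fixed unknown linear mapping) and the learning rate are assumptions chosen only to show the sample-by-sample update pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.zeros(4)                             # adaptive weights, start blank
target = np.array([0.5, -1.0, 2.0, 0.0])   # unknown mapping to be learned
lr = 0.1

for t in range(500):
    x = rng.normal(size=4)   # one new sensory sample
    y = target @ x           # environment's response to it
    err = y - w @ x
    w += lr * err * x        # online (per-sample) delta-rule update

print(np.round(w, 2))  # converges toward `target`
```

No batch of data is ever stored: each sample updates the weights and is discarded, which is the property that makes this style of learning attractive for adaptive control and for adapting to new sensory environments on resource-constrained hardware.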
The projects will focus on tasks involving real-world sensory data and navigation for a mobile vehicle. Participants will try competing and hybrid approaches on a range of hardware. Benchmarking will be a priority as the best solutions are chosen for implementation on a real-world robot, “Sparky”. Sparky runs Ubuntu, uses standard Python and USB for all interfaces, can be put together with a screwdriver, and will come with all software and drivers pre-installed.