DLN20

Better Together: Enhancing Deep Learning with Neuromorphic Innovations

Topic Leaders

Invitees

Deep Learning (DL) and Neuromorphic Computing (NC) are often perceived as competing rather than complementary technologies. However, the two approaches are inherently synergistic, with each best suited to different tasks, or even to different subtasks within the same system. The goal of this topic area will be to combine conventional DL and NC approaches to achieve state-of-the-art demonstrations that overcome the limitations of each. This will involve quantitatively comparing the approaches (both algorithms and hardware) on different metrics and tasks to identify where each should be used.

The topic will focus on applications relevant to a real-world agent, such as navigation and processing time-varying sensory signals. Participants will have access to the latest hardware systems, from conventional DL accelerators (Google Coral, NVIDIA Jetson, Intel Movidius) to more neuro-inspired DL architectures (GrAI One) and fully spike-based architectures (Loihi).

Motivation

In an artificial neural network (ANN) performing convolution on an image, the required operations and memory are known exactly and can be optimized for implementation. Results can be stored retinotopically, providing an efficient dense representation in memory. However, activations become sparser in deeper ANN layers, and this is where NC excels. At some point, a sparse address-event representation (AER) will require less memory, and NC will benefit from skipping unnecessary computation. There is therefore room to combine DL and NC to improve performance even within a single frame-based ANN.
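
As a rough illustration of that crossover, the short sketch below compares the memory footprint of a dense activation map with that of an address-event list as activations become sparser; the layer shape and the per-value and per-event byte counts are assumptions chosen only for illustration.

    # Minimal sketch (illustrative assumptions): compare the memory needed to
    # store one layer's activations densely vs. as a sparse address-event list.
    # Dense: one value per unit. AER: one (address, value) record per active unit.

    def dense_bytes(n_units, bytes_per_value=1):
        # e.g. 8-bit quantized activations stored retinotopically
        return n_units * bytes_per_value

    def aer_bytes(n_units, active_fraction, bytes_per_event=4):
        # e.g. a packed address (plus payload) per non-zero activation
        return int(n_units * active_fraction) * bytes_per_event

    if __name__ == "__main__":
        n_units = 56 * 56 * 128          # hypothetical mid-network feature map
        for active_fraction in (0.5, 0.2, 0.1, 0.05, 0.01):
            d, a = dense_bytes(n_units), aer_bytes(n_units, active_fraction)
            winner = "AER" if a < d else "dense"
            print(f"{active_fraction:4.0%} active: dense={d/1e6:6.2f} MB, "
                  f"AER={a/1e6:6.2f} MB -> {winner} smaller")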

Real-world sensory signals are also typically sparse in time, opening another opportunity for optimization. Neuromorphic vision sensors exploit this sparsity, as do neuro-inspired computing architectures such as GrAI One and Loihi. Real-world tasks such as visual tracking also benefit from maintaining the state of the world over time. This state can be maintained by a recurrent neural network running on top of an ANN. Recurrent neural networks are well suited to NC architectures, but are not even supported on many ANN edge devices, providing another opportunity to augment DL with NC.
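
A minimal, non-spiking sketch of the idea is given below (the shapes, random weights, and plain tanh recurrence are assumptions for illustration): a recurrent state vector is carried over time on top of per-frame ANN features, so information persists between frames.

    import numpy as np

    # Minimal sketch (assumed shapes, plain tanh recurrence for illustration):
    # maintain a state vector over time on top of per-frame ANN feature vectors.
    rng = np.random.default_rng(0)
    feat_dim, state_dim = 64, 32
    W_in = rng.normal(scale=0.1, size=(state_dim, feat_dim))    # features -> state
    W_rec = rng.normal(scale=0.1, size=(state_dim, state_dim))  # state -> state

    state = np.zeros(state_dim)
    for t in range(100):
        # In the real system this would be the ANN backbone's output for frame t.
        frame_features = rng.normal(size=feat_dim)
        state = np.tanh(W_in @ frame_features + W_rec @ state)

    # `state` now summarizes recent history and could feed a tracking head; on an
    # NC architecture the same role is played by recurrently connected spiking
    # neurons whose membrane state persists between events.
    print(state[:5])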

Real-world agents also benefit from multimodal sensing, and spiking neural networks (SNNs) provide a natural way to combine signals with different sampling rates. Recent results suggest that SNNs can efficiently implement associative memory, allowing features from multiple modalities such as odometry, sound, and vision to be bound together. An associative memory can be used on top of ANN feature extractors to enable querying memory by features.
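
A non-spiking stand-in for such a memory is sketched below (the bipolar patterns, dimensions, and classic Hebbian outer-product rule are assumptions for illustration, not the spiking TPAM used later in the projects): concatenated multimodal feature vectors are stored, and the full pattern is then recovered from a vision-only cue.

    import numpy as np

    # Minimal sketch: a Hopfield-style associative memory that binds features
    # from several modalities into one pattern, then recovers the whole pattern
    # when queried with only the vision part. Dimensions and the Hebbian
    # outer-product rule are illustrative assumptions.
    rng = np.random.default_rng(1)
    d_vis, d_aud, d_odo = 32, 16, 8
    d = d_vis + d_aud + d_odo

    patterns = rng.choice([-1, 1], size=(5, d))   # 5 stored multimodal bindings
    W = (patterns.T @ patterns) / d               # Hebbian storage
    np.fill_diagonal(W, 0)

    query = np.zeros(d)
    query[:d_vis] = patterns[2, :d_vis]           # vision features only

    x = query.copy()
    for _ in range(10):                           # synchronous recall iterations
        x = np.sign(W @ x)
        x[x == 0] = 1

    print("overlap with stored pattern 2:", (x == patterns[2]).mean())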

Finally, NC enables online learning, which is useful at the edge for tasks such as adaptive control, map formation, one-shot pattern memorization, and adapting to new sensory environments.

Projects

The projects will focus on tasks involving real-world sensory data and navigation for a mobile vehicle. Participants will try competing and hybrid approaches on a range of hardware. Benchmarking will be a priority, as the best solutions will be chosen for implementation on a real-world robot, “Sparky”. Sparky runs Ubuntu, uses standard Python and USB for all interfaces, can be assembled with a screwdriver, and will come with all software and drivers pre-installed.

  • Sensory Processing Projects
    • Visual Recognition: Participants will learn SNN training and ANN-to-SNN conversion (a minimal conversion sketch follows this list). They can explore further training the SNN after conversion, and optimizing the hybrid system on metrics such as accuracy, inference time, and computation.
    • Online Learning: Loihi’s online learning capability can allow few-shot learning of new objects in the final layers of a hybrid system, continuing work from Telluride 2019.
    • Detection: This builds on visual recognition but adds a detection component, where the object’s location must be estimated or the object even segmented. ANN-to-SNN conversion techniques exist but have not been applied to detection networks. An RNN running on Loihi can be used to smooth and improve tracking over time.
    • Speech Classification: Speech is emerging as a promising area for deep SNNs running on low-power, always-on edge devices. Participants will compare ANN, SNN, and hybrid approaches. The result can be used to issue verbal instructions to Sparky.
    • Anomaly Detection: Time-series anomaly detection is a general computation on dynamic data that has been explored broadly with ANNs and should be amenable to NC.
  • Cognitive Mapping Projects
    • Map Formation: Hippocampus-inspired models will be explored for representing the world and building that representation online. Off-the-shelf vision algorithms using a standard camera will be provided for comparison with the neuromorphic approaches. Infrastructure sensors (ultra-wideband, UWB) will provide location ground truth so the project can proceed without solving the geometric SLAM problem.
    • Route Finding: Given a map, methods for finding the shortest path between two points can be compared. Intel will bring map/graph-search demonstrations and examples of benchmarking them on Loihi.
    • Adaptive Control: Adaptive online control on Loihi has been demonstrated by Applied Brain Research (ABR). It can be used to control Sparky’s speed as it carries different workloads, and DL can be explored for implementing the lower layer(s).
    • Representing Relations: Features of map locations can be learned online using an associative memory. The map and routes can later be queried by feature (nearest coffee?). Feature extraction will come from the sensory processing projects. This work can use a spiking Threshold Phasor Associative Memory (TPAM) running on Loihi.
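
As background for the visual recognition project above, below is a minimal sketch of the rate-based ANN-to-SNN conversion idea; the layer size, random weights, and simulation length are assumptions chosen only to show how a converted integrate-and-fire layer approximates a ReLU layer.

    import numpy as np

    # Minimal sketch of rate-based ANN-to-SNN conversion (illustrative
    # assumptions): a ReLU layer is replaced by integrate-and-fire neurons whose
    # spike rate over T timesteps approximates the ReLU activation.
    rng = np.random.default_rng(2)
    W = rng.uniform(-1, 1, size=(10, 20))    # one fully connected layer
    x = rng.uniform(0, 1, size=20)           # analog input (e.g. pixel values)

    relu_out = np.maximum(W @ x, 0.0)        # the ANN activation to be matched

    # Data-based weight normalization: rescale so the largest activation equals
    # the firing threshold; otherwise the rate saturates at one spike per step.
    threshold, T = 1.0, 200
    scale = relu_out.max()
    W_norm = W * (threshold / scale)

    v = np.zeros(10)                         # membrane potentials
    spike_counts = np.zeros(10)
    for _ in range(T):
        v += W_norm @ x                      # constant input current encodes x
        spikes = v >= threshold
        v[spikes] -= threshold               # "soft reset" keeps residual charge
        spike_counts += spikes

    snn_estimate = spike_counts / T * scale  # spike rate mapped back to ANN units
    print("max |ReLU - SNN estimate|:", np.abs(relu_out - snn_estimate).max())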

Provided Hardware and Software

  • Intel’s NxSDK and Loihi systems, from 2-chip USB devices to 100+ chip remote-access systems
  • Sparky, a rugged wheeled robot with onboard computer (Ubuntu), wireless interface, and USB interfaces for cameras, DAVIS, Loihi, and motor control.
  • Intel Movidius
  • Nengo
  • SLAYER (running on GCP)
  • DAVIS 240C sensors
  • Pushbots and omnibots
  • USB Frame cameras
  • UWB beacons/receivers
  • GrAI One
  • Google Coral

Participant Preparation

  • (To Be Determined)