COG20

Cognitive models for real-time systems

Topic Leaders

Invitees

  • (To Be Determined)

The purpose of this topic area is to explore real-time tasks where cognitive representations and processing within neuromorphic hardware are useful. In particular, we break this down into two related and complementary parts: natural language processing and adaptive decision making.

We propose to study neuromorphic and deep-learning NLP by (1) incorporating brain-inspired features into NLP models, and (2) implementing these models on neuromorphic and deep-learning hardware. Transcription models can now convert speech to text, and audio assistants use key words to initiate further speech processing. Deep learning models have produced the state of the art in many NLP tasks, such as machine translation and machine transcription. Our intent is to use language to direct a cognitive model to extract the information needed for scene understanding or for communication. We will provide tutorials on state-of-the-art language models and identify use cases small enough to study during the workshop. The output of the language model will then direct the lower-level processing modules that receive the sensory inputs.
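As a minimal sketch of the idea that a language model's output directs lower-level processing, the toy Python example below routes a speech transcript to a downstream module based on a spotted keyword. The keywords, handler names, and return strings are all illustrative assumptions, not part of any planned system.

```python
# Toy sketch: a keyword spotted in transcribed speech selects which
# lower-level sensory-processing module runs. All names here are
# illustrative placeholders, not a committed design.

def describe_scene(transcript):
    # Placeholder for a vision module driven by the language request.
    return "scene: (camera module would run here)"

def navigate(transcript):
    # Placeholder for a motor/navigation module.
    return "moving: (motor module would run here)"

# Map trigger keywords to downstream modules.
KEYWORD_HANDLERS = {
    "describe": describe_scene,
    "go": navigate,
}

def dispatch(transcript):
    """Route a speech transcript to the first matching module."""
    for word in transcript.lower().split():
        if word in KEYWORD_HANDLERS:
            return KEYWORD_HANDLERS[word](transcript)
    return "no keyword recognized"

print(dispatch("please describe the room"))
print(dispatch("go to the charger"))
```

In a workshop project, `dispatch` would be replaced by a trained keyword-spotting network and the handlers by real sensory and motor pipelines; the routing structure stays the same.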

For adaptive decision making, we will focus on neural models of decision making that can learn from their environment based on high-level cognitive representations (such as those produced by an NLP system). Many neural models of the basal ganglia and its reinforcement learning systems exist, but they have rarely been explored in complex domains with large real-time neural networks driving them. Modern neuromorphic hardware provides the opportunity to push these models into richer domains. Importantly, the high-level representations used for adaptive learning can come from NLP, rather than the low-level sensory features typical of such research. Initial projects will start with foraging and escape behaviours, but by the end of the workshop we aim to combine adaptive decision making with natural language processing. This would enable verbal control of robotic systems, where the system learns to improve its responses in real time based on user feedback.
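A minimal sketch of the adaptive decision making described above, assuming a toy one-dimensional foraging task: tabular Q-learning, a drastically simplified stand-in for basal-ganglia reinforcement learning models. The environment, reward, and hyperparameters are illustrative only.

```python
import random

random.seed(0)

# Toy foraging task: agent on positions 0..4, food at position 4.
# Actions: 0 = step left, 1 = step right. Reward 1.0 on reaching food.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # illustrative hyperparameters

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for episode in range(200):
    state = 0
    while state != GOAL:
        if random.random() < EPSILON:
            action = random.randrange(2)  # explore
        else:
            # Exploit, breaking ties randomly.
            best = max(Q[state])
            action = random.choice([a for a in (0, 1) if Q[state][a] == best])
        nxt, reward = step(state, action)
        # Standard Q-learning update.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# After learning, stepping toward the food dominates in every non-goal state.
policy = [Q[s].index(max(Q[s])) for s in range(N_STATES - 1)]
print(policy)
```

In the workshop setting, the tabular state would be replaced by high-level representations (e.g. from an NLP front end) and the scalar reward by real-time user feedback, while the update rule keeps the same form.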

Projects

The projects will work with speech input and output in real time, using hardware including laptops, smartphones, and microcontrollers embedded in robots. Appropriate toolkits and modules, including TensorFlow and Intel software, will be assembled for students' use. Possible projects range from application-driven to more theoretical studies. Some examples include:

  • Robot control using language
  • Development of a compact, energy-efficient deep neural network NLP system that recognizes keyword commands
  • Implementation of models on neuromorphic hardware, including studies of conversion to spiking network models
  • Online reinforcement learning to improve the output of NLP systems
  • Driving NLP systems with robot sensor data to produce verbal descriptions
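For the neuromorphic-implementation projects, a common starting point is rate-based conversion of a trained network to spikes. The sketch below (pure NumPy, with illustrative numbers) approximates a ReLU unit's normalized activation by the firing rate of a Poisson-like spike train, which is the basic idea behind many ANN-to-SNN conversion studies; the specific activation value and step count are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trained ReLU unit's activation, normalized to [0, 1], is treated
# as a firing probability per timestep (illustrative value only).
activation = 0.3          # target normalized activation
n_steps = 10000           # simulated timesteps

# Poisson-like spike train: spike with probability `activation` each step.
spikes = rng.random(n_steps) < activation

# The empirical firing rate should approximate the analog activation,
# with the approximation improving as n_steps grows.
rate = spikes.mean()
print(round(rate, 2))
```

The same rate-coding principle underlies mapping pre-trained TensorFlow models onto spiking hardware such as SpiNNaker, though real conversions must also handle weight normalization and layer-by-layer rate scaling.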

Provided Hardware and Software

  • Lego Mindstorms robotics kits
  • SpiNNaker and Braindrop neuromorphic chips
  • (More To Be Determined)

Participant Preparation

  • (To Be Determined)