CAL18: Cognitive Agents that Learn in the Wild

Goals

To develop cognitive agents that communicate in and adapt to real-world social environments, learn on the fly, and demonstrate natural social intelligence in a vertically integrated manner. Specifically, we aim to:

  1. Employ machine learning to develop conversational agents that can attend to and use social cues as supervision for learning;
  2. Explore and develop algorithms for online, few-shot learning, zero-shot transfer, and storage/retrieval of new memories while avoiding catastrophic forgetting;
  3. Demonstrate these ideas using the Intel Loihi research chip, a neuromorphic many-core processor with on-chip learning.

Organizers

Guido Zarrella (MITRE Corporation) and Emre Neftci (UC Irvine)

Confirmed invited participants

  1. Mike Davies, Intel Corporation
  2. Hynek Hermansky, Johns Hopkins University
  3. Dhireesha Kudithipudi, Rochester Institute of Technology
  4. Yulia Sandamirskaya, University of Zurich
  5. Doo Seok Jeong, Hanyang University

Projects

  1. Online learning dynamics that enable few-shot learning and storage/retrieval of new memories while avoiding catastrophic forgetting. Recent theoretical work and the Intel Loihi chip both support these paradigms (see the sketch after this list).
  2. A conversational agent embodied in a device enabled with speech recognition/production, capable of storing/retrieving new memories or adapting learned behaviors on the fly.
  3. Online social cue recognition and learning. A very simple "COIL-style" demonstration platform will be available for exploring novel online learning and associative memory algorithms.
  4. A cognitive agent that models social feedback (interruptions, eye contact) in order to attend to robot-directed speech without keyword spotting, while not responding to human-directed speech.
  5. A self-steering robot that learns to model an environment while receiving online feedback from a human teacher.
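
To make the learning paradigm in projects 1 and 3 concrete, the sketch below gives a minimal, purely illustrative Python example of an online few-shot associative memory: a nearest-prototype store that learns a class from a single example and folds later examples in as running means, so storing a new memory never overwrites an old one. The PrototypeMemory class, its store/retrieve methods, the 64-dimensional feature vectors, and the running-mean update rule are assumptions made for illustration, not part of the CAL18 plan; a neuromorphic realization would replace the explicit dictionary with synaptic state.

    # Illustrative only: a nearest-prototype associative memory supporting
    # one-shot storage and incremental (online) updates without replay.
    import numpy as np

    class PrototypeMemory:
        def __init__(self):
            self.prototypes = {}   # label -> prototype vector
            self.counts = {}       # label -> number of examples folded in

        def store(self, x, label):
            """Online update: fold a new example into its label's prototype."""
            x = np.asarray(x, dtype=float)
            if label not in self.prototypes:
                self.prototypes[label] = x.copy()   # one example is enough to respond
                self.counts[label] = 1
            else:
                self.counts[label] += 1
                # Running mean; earlier examples never need to be replayed,
                # and other labels' prototypes are untouched (no forgetting).
                self.prototypes[label] += (x - self.prototypes[label]) / self.counts[label]

        def retrieve(self, x):
            """Return the label whose prototype is most similar to the cue."""
            x = np.asarray(x, dtype=float)
            best_label, best_sim = None, -np.inf
            for label, p in self.prototypes.items():
                sim = x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-9)
                if sim > best_sim:
                    best_label, best_sim = label, sim
            return best_label

    # Toy usage with random vectors standing in for sensor embeddings.
    rng = np.random.default_rng(0)
    mem = PrototypeMemory()
    mem.store(rng.normal(size=64), "mug")     # stored from a single example
    mem.store(rng.normal(size=64), "phone")
    print(mem.retrieve(rng.normal(size=64)))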

Equipment

  1. Intel Loihi systems, each with 500,000 to one million neurons and 64-128 MB of aggregate synaptic memory, along with an SDK whose API is similar to familiar SNN frameworks such as PyNN, BRIAN, and Nengo (see the sketch after this list for the style of interface meant).
  2. Battery operated single-chip Loihi systems suitable for wearable/mobile demonstrations.
  3. One large-scale system with significant embedded FPGA resources to support hardware customization during the workshop.
  4. A limited number of demonstration kits containing all necessary hardware (see below) and a wide range of sensors: web/DVS/RealSense cameras, microphones, ultrasound, LIDAR, and accelerometers.
  5. Google AIY speech kits with Raspberry Pi for embodied speech understanding and production.
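
The Loihi SDK's own API is not reproduced here; instead, the sketch below is written against Nengo, one of the frameworks named in item 1, simply to illustrate the style of population-and-connection interface that item refers to. The network label, population sizes, sine-wave input, and probe settings are arbitrary choices for the example.

    # Style illustration only: a small spiking network written against Nengo,
    # one of the familiar SNN frameworks named above (not the Loihi SDK itself).
    import numpy as np
    import nengo

    with nengo.Network(label="toy communication channel") as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # time-varying input
        pre = nengo.Ensemble(n_neurons=100, dimensions=1)    # spiking population
        post = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, pre)
        nengo.Connection(pre, post)                          # decoded connection
        probe = nengo.Probe(post, synapse=0.01)              # filtered readout

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    print(sim.data[probe].shape)   # (timesteps, 1)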

Tutorials

  1. Using Loihi Chips
  2. RNNs for NLP and multimodal / multitask problems
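
As a generic companion to the second tutorial topic, the sketch below shows the kind of model it covers: a recurrent network (here a PyTorch GRU) over token embeddings with a classification head. The GRUClassifier name, all layer sizes, and the random toy batch are arbitrary assumptions for illustration, not workshop material.

    # Generic RNN-for-NLP sketch (PyTorch GRU text classifier); all sizes arbitrary.
    import torch
    import torch.nn as nn

    class GRUClassifier(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len) integer tensor
            x = self.embed(token_ids)
            _, h = self.rnn(x)        # h: (1, batch, hidden_dim), final hidden state
            return self.head(h[-1])   # (batch, num_classes) logits

    # Toy forward/backward pass on random token ids.
    model = GRUClassifier()
    tokens = torch.randint(0, 1000, (8, 20))
    labels = torch.randint(0, 3, (8,))
    loss = nn.functional.cross_entropy(model(tokens), labels)
    loss.backward()
    print(loss.item())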