CAL18: Cognitive Agents that Learn in the Wild
The goal is to develop cognitive agents that communicate in and adapt to real-world social environments, learn on the fly, and demonstrate natural social intelligence in a vertically integrated manner.
- Employ machine learning to develop conversational agents that can attend to and use social cues as supervision for learning;
- Explore and develop algorithms for online, few-shot learning, zero-shot transfer, and storage/retrieval of new memories while avoiding catastrophic forgetting;
- Demonstrate these ideas using the Intel Loihi research chip, a neuromorphic many-core processor with on-chip learning.
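As a minimal, framework-free sketch of the first two goals, the snippet below shows an agent that updates its weights one interaction at a time, treating a social cue (approval or disapproval observed after it acts) as the supervision signal. All class and variable names are illustrative assumptions, not part of any Loihi or workshop codebase.

```python
class OnlineCueLearner:
    """Perceptron-style learner updated one interaction at a time.

    The supervision signal is not a ground-truth label but a social cue
    observed after the agent acts: +1 for approval, -1 for disapproval.
    """

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def act(self, x):
        # Decide whether to respond (+1) or stay silent (-1).
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else -1

    def learn(self, x, action, cue):
        # On disapproval, shift the decision boundary away from the
        # action just taken; approval leaves the weights unchanged.
        if cue < 0:
            for i, xi in enumerate(x):
                self.w[i] -= self.lr * action * xi
            self.b -= self.lr * action


learner = OnlineCueLearner(2)
# Hypothetical features: [eye contact on robot, background chatter].
# The simulated teacher approves responding only under eye contact.
lessons = [([1, 0], 1), ([0, 1], -1)]  # (features, desired action)
for _ in range(5):
    for x, desired in lessons:
        a = learner.act(x)
        cue = 1 if a == desired else -1  # simulated social feedback
        learner.learn(x, a, cue)

print(learner.act([1, 0]), learner.act([0, 1]))  # → 1 -1
```

After a handful of interactions the agent responds under eye contact and stays silent otherwise; an on-chip learning rule on Loihi would play the role of this delta-rule update.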
- Online learning dynamics that enable few-shot learning and storage/retrieval of new memories while avoiding catastrophic forgetting. Recent theoretical work and the Intel Loihi chip provide support for these paradigms.
- A conversational agent embodied in a device enabled with speech recognition/production, capable of storing/retrieving new memories or adapting learned behaviors on the fly.
- Online social cue recognition and learning: a very simple "COIL-style" demonstration platform will be available for exploring novel online learning and associative memory algorithms.
- A cognitive agent that models social feedback (interruptions, eye contact) to attend to robot-directed speech, without keyword spotting and without responding to human-directed speech.
- A self-steering robot that learns to model an environment while receiving online feedback from a human teacher.
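The robot-directed-speech project above can be caricatured as a social-cue gate: combine eye-contact and interruption evidence into a score and only pass audio to the dialog system when the score says the speech is aimed at the robot. The weights and threshold below are illustrative placeholders for what would, in the project, be learned online.

```python
def robot_directed(eye_contact, interrupted_robot, addressed_other,
                   threshold=0.5):
    """Decide whether an utterance is directed at the robot.

    eye_contact       -- fraction of the utterance with gaze on the robot (0-1)
    interrupted_robot -- speaker barged in while the robot was talking
    addressed_other   -- gaze/body orientation toward another person
    All weights are hand-picked for illustration; a learned model
    would replace them.
    """
    score = 0.6 * eye_contact
    if interrupted_robot:
        score += 0.3
    if addressed_other:
        score -= 0.5
    return score >= threshold


print(robot_directed(0.9, False, False))  # → True  (sustained eye contact)
print(robot_directed(0.2, False, True))   # → False (talking to someone else)
print(robot_directed(0.4, True, False))   # → True  (interrupted the robot)
```

Note that no keyword spotting appears anywhere: the gate relies purely on social cues, which is the point of the project.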
- Intel Loihi systems, each with 500,000 to one million neurons and 64-128 MB of aggregate synaptic memory, along with an SDK whose API is similar to familiar SNN frameworks (e.g., PyNN, Brian, Nengo).
- Battery-operated single-chip Loihi systems suitable for wearable/mobile demonstrations.
- One large-scale system with significant embedded FPGA resources to support hardware customization during the workshop.
- A limited number of demonstration kits with all necessary hardware (see below), including a wide range of sensors: web/DVS/RealSense cameras, microphones, ultrasound, LIDAR, and accelerometers.
- Google AIY speech kits with a Raspberry Pi for embodied speech understanding and production.
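The SNN frameworks named above all simulate networks of spiking neurons, the same model family Loihi implements in silicon. As a framework-free illustration (not Loihi or Nengo code), here is a discrete-time leaky integrate-and-fire neuron, the basic unit such systems are built from; the parameter names and values are assumptions for the example only.

```python
def lif_spikes(input_current, v_th=1.0, leak=0.9, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire (LIF) neuron.

    Each step the membrane potential decays by `leak`, integrates the
    input current, and emits a spike (1) when it crosses the threshold
    `v_th`, after which the potential resets to `v_reset`.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes


# A constant sub-threshold input produces a regular spike train:
print(lif_spikes([0.3] * 8))  # → [0, 0, 0, 1, 0, 0, 0, 1]
# Stronger input drives a higher firing rate:
print(lif_spikes([0.6] * 8))  # → [0, 1, 0, 1, 0, 1, 0, 1]
```

On Loihi, many such units run in parallel with on-chip plasticity; the SDK lets you express them at roughly this level of abstraction rather than writing the update loop by hand.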
- Using Loihi Chips
- RNNs for NLP and multimodal / multitask problems