RobNIC26: Neuromorphic Robotics & Integrated Circuits
Ziyun (Claude) Wang | Johns Hopkins University
Christoffer Heckman | University of Colorado Boulder
Shantanu Chakrabartty | Washington University in St. Louis
Tobi Delbruck | INI / ETH Zurich
Andreas Andreou | Johns Hopkins University
Renaldas Zioma | Independent
Michael Furlong | Remote Collaboration Lead, UW / NRC Canada
This topic area introduces the Neuromorphic Robotics Engineering Challenge in Extreme Environments. Our central challenge is enabling sophisticated robotic systems to operate in hazardous, unstructured, and sensorially degraded environments where standard approaches fail.
Because of this we are bringing ROBOTS!
We’re calling on roboticists, sensor and signal-processing engineers, mapping and localisation folks, reasoning and autonomy builders, and hardware implementation specialists: anyone interested in how raw signals become perception, decisions, and real-world behaviour.
Modern solutions often suffer from sensory information overload. We posit that true autonomy requires guided perception, where sensing is not a passive data dump but an active, intelligent process aimed at building situational awareness. We focus on managing epistemic uncertainty: building agents that understand what they don't know and actively sense the world to find out.
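To make "actively sensing to find out" concrete, here is a minimal sketch of entropy-driven view selection over an occupancy belief: the agent scores each candidate sensing action by how much epistemic uncertainty that action would touch, and looks where it knows the least. The grid, function names, and candidate-view masks are illustrative assumptions, not part of any RobNIC26 codebase.

```python
# Minimal sketch of epistemic-uncertainty-driven sensing (illustrative only).
import numpy as np

def cell_entropy(p):
    """Shannon entropy of per-cell occupancy beliefs p in (0, 1)."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def pick_next_view(belief, candidate_views):
    """Pick the view whose visible cells carry the most total entropy.

    belief          : (H, W) array of occupancy probabilities.
    candidate_views : list of boolean (H, W) masks, one per sensing action,
                      marking which cells that action would observe.
    """
    H = cell_entropy(belief)
    scores = [H[mask].sum() for mask in candidate_views]
    return int(np.argmax(scores))

# Toy usage: a 4x4 belief where the left half is already well observed.
belief = np.full((4, 4), 0.5)           # maximally uncertain map
belief[:, :2] = 0.95                    # left half nearly resolved
views = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
views[0][:, :2] = True                  # look left (little left to learn)
views[1][:, 2:] = True                  # look right (still unknown)
print(pick_next_view(belief, views))    # -> 1: sense the unknown half
```

A fuller treatment would weigh expected entropy reduction under a sensor model rather than raw entropy, but the selection principle is the same.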
Key Technical Pillars:
Vector Symbolic Algebras (VSA) & Semantic Pointer Architecture (SPA): We will move beyond simple sensor fusion to explore how high-dimensional vectors can "bind" sensory features to spatial representations, creating robust cognitive maps (see the binding sketch after this list).
Understanding from Perception: Integrating context with raw sensing. We will explore 3D Gaussian Splatting (3DGS) as a mechanism for rapid, dense environmental modelling even in the presence of obscurants like smoke or dust.
Vision-Language-Action Models (VLAs): Investigating how State-of-the-Art VLAs can be distilled or integrated to provide high-level semantic reasoning (e.g., "Find the survivor") to low-level neuromorphic chips.
CoBots (Collaborative Robots): Robot2Robot and Human2Robot collaboration, with effective communication, knowledge sharing, and distillation.
Integrated Circuits & Hardware Acceleration: We explicitly encourage the integration of custom neuromorphic VLSI, FPGAs, or mixed-signal circuits. A key research avenue is demonstrating how dedicated silicon can accelerate the "sense-act" loop or reduce the power envelope for extreme edge deployment.
Our Core Questions:
How does an embodied agent's active sampling emerge from exploration in unseen scenarios?
What computational principles allow an agent to build a coherent internal state from noisy, multi-sensory feedback?
How can we bridge the gap between heavy semantic models (VLAs) and agile, edge-based control?
Project 1: Understanding from Perception (Cognitive Maps & 3DGS)
Goal: Investigate the intersection of Cognitive Maps, SLAM, and 3D Gaussian Splatting (3DGS).
Challenge: "Understanding from perception" requires combining context with feature extraction. Teams will work on managing environmental uncertainty and dealing with visual obscurants to build robust 3D models.
Project 2: Vision-Language-Action Models (VLAs)
Goal: Explore how State-of-the-Art VLAs are pushing the boundaries of robotics.
Challenge: How can we bridge the gap between large-scale semantic reasoning (Language/Vision models) and real-time, embodied robotic action in dynamic environments?
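One plausible bridge, in line with the distillation idea in the pillars above, is to compress a large VLA's action predictions into a small student policy that fits an edge power envelope. The sketch below assumes offline-logged (observation, teacher-logit) pairs from VLA rollouts; the observation and action dimensions, the network size, and the temperature are hypothetical.

```python
# Hedged sketch of distilling a large VLA's action predictions into a small
# edge policy. The teacher interface and all dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 64, 8          # assumed compact observation / action bins

student = nn.Sequential(            # tiny policy suitable for an edge target
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)
optim = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(obs, teacher_logits, temperature=2.0):
    """One distillation step: match the teacher's softened action distribution."""
    student_logp = F.log_softmax(student(obs) / temperature, dim=-1)
    teacher_p = F.softmax(teacher_logits / temperature, dim=-1)
    loss = F.kl_div(student_logp, teacher_p, reduction="batchmean")
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Toy usage with random stand-ins for logged (obs, teacher_logits) pairs.
obs = torch.randn(32, OBS_DIM)
teacher_logits = torch.randn(32, N_ACTIONS)
print(distill_step(obs, teacher_logits))
```

The design question for teams is what the student consumes at run time (event features, VSA pointers, compressed embeddings) and how often the heavy teacher needs to be consulted at all.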
Project 3: Robotic Awareness (Collaboration with SYNC)
Goal: A collaborative project with the Spatiotemporal Dynamics (SYNC) topic area.
Challenge: Implementing Peripersonal Space representations for robotic awareness. The focus is on understanding immediate surroundings for safety and interaction, utilising active sensing to define the robot's "protective bubble". No more getting crushed by your home assistant robot!
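A minimal sketch of the "protective bubble" idea: commanded velocity is scaled down as obstacle points enter the robot's peripersonal zone and drops to zero inside a hard-stop radius. The radii, the linear scaling rule, and the body-frame point input are illustrative assumptions rather than the project's specification.

```python
# Minimal "protective bubble" sketch for peripersonal-space awareness.
# Radii, points, and the velocity-scaling rule are illustrative assumptions.
import numpy as np

BUBBLE_RADIUS = 0.8   # metres: outer edge of the peripersonal zone
STOP_RADIUS = 0.2     # metres: hard-stop distance

def bubble_scale(points_xyz):
    """Return a velocity scale in [0, 1] from the nearest point in the bubble.

    points_xyz : (N, 3) obstacle points in the robot's body frame
                 (e.g. from LiDAR or an event-camera depth estimate).
    """
    if len(points_xyz) == 0:
        return 1.0
    d = np.min(np.linalg.norm(points_xyz, axis=1))
    if d >= BUBBLE_RADIUS:
        return 1.0
    if d <= STOP_RADIUS:
        return 0.0
    return (d - STOP_RADIUS) / (BUBBLE_RADIUS - STOP_RADIUS)

# Toy usage: an obstacle 0.5 m ahead halves the commanded velocity.
cmd_vel = np.array([0.6, 0.0, 0.0])
scale = bubble_scale(np.array([[0.5, 0.0, 0.0]]))
print(scale, cmd_vel * scale)
```

Active sensing then decides which directions around the body get sampled most densely, while the bubble logic itself stays this simple.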
Project 4: [TBD]
Open for participant and mentor suggestions.
Project 5: [TBD]
Open for participant and mentor suggestions.
Project 6: [TBD]
Open for participant and mentor suggestions.
We provide a range of tools to ensure accessibility for all participants.
Platforms: Robotic Quadruped (Boston Dynamics Spot and Unitree Go2), Humanoid (Unitree G1), Rovers (Husky?), Quadcopters (?).
Sensors: Event-based cameras, traditional cameras, RGB-D cameras, LiDAR, MEMS audio, IMUs, etc.
Our schedule is designed to onboard all participants and build complexity over the three weeks.
Week 1 (Foundations): Invited talks on Robotics, Event-Driven Perception, VSAs. Tutorials on hardware setup.
Week 2 (Integration): Project work begins. Teams will build and integrate their systems, supported by code reviews.
Week 3 (Challenge): The final Extreme Challenge Showcase—deploying robots into the "unknown" environment.
Previous Projects
Keywords: