Neuromorphic perception offers low-power, low-latency sensing through sparse encoding and inherent information compression. For robots, such perception can increase reactivity in unconstrained and dynamic environments (e.g., avoiding obstacles and grasping moving objects), improve mobility by reducing power requirements and reliance on off-board GPU servers, and integrate naturally with biologically inspired artificial intelligence. In the visual domain, event-based sensors facilitate accurate state estimation of fast-moving robots and objects by eliminating motion blur and data redundancy, thereby enhancing precision during action. Their high dynamic range increases the robustness and reliability of robots in challenging lighting conditions, and the reduced data output in a stationary environment lowers computational and battery-power demands, further extending the capabilities of this technology.
This workshop focuses on real-world applications and demonstrations of event-based cameras and neuromorphic sensors, including perception pipelines and data processing, that go beyond systems tested solely in simulated environments. The potential of neuromorphic sensors in robotics has motivated a growing body of literature proposing solutions to robotic problems, but few of these works actually demonstrate the advantages of neuromorphic perception over traditional perception in real-world scenarios. We encourage the participation and discussion of real-world robotic implementations, task-dependent applications, hybrid systems that combine neuromorphic and traditional sensors (as well as comparisons between such sensors), and online, real-time algorithms evaluated in real-world conditions. During the workshop we will discuss current experimental trends, difficulties, and general solutions for achieving real-world neuromorphic perception for robots.