Program
08:00 - 08:05 [5 min] Introduction
08:10 - 08:40 [30 min + 5 min Q/A] Keynote 1
Presenter: Trevor Bihl
Title: Toward Low-SWaP Cognitive Agents: Neuromorphic Intelligence and FPGA-Based Deployments of Event Neural Networks
Abstract: Traditional artificial intelligence (AI) solutions are often high-SWaP (Size, Weight, and Power), data-hungry, and lack the efficiency, resilience, and autonomy exhibited by biological intelligence. As a consequence, autonomous and cognitive agents suffer from various technical bottlenecks precluding their widespread use. Neuromorphic computing offers the potential to develop learning machines that perceive, adapt, and learn continually. This paper surveys the state-of-the-art in neuromorphic intelligence, including advances in spiking neural networks (SNNs), emerging brain-inspired hardware platforms, supporting software stacks, and representative application domains. Key to these advances are deployment pipelines for development, evaluation, and testing of solutions. Thus, in parallel, we further present a preliminary FPGA-based deployment pipeline for both artificial neural networks (ANNs) and SNNs. This approach not only provides a practical low-SWaP alternative for edge AI but also enables head-to-head benchmarking of neuromorphic solutions against conventional neural models on reconfigurable hardware. Together, these perspectives define a roadmap toward robust, low-power cognitive systems capable of real-world autonomy across embedded and constrained environments.
08:40 - 08:50 [10 min] Spotlight session (2 papers)
DEIO: Deep Event Inertial Odometry
Weipeng Guan (The University of Hong Kong)
Fuling Lin (The University of Hong Kong)
Peiyu Chen (The University of Hong Kong)
Neural Ganglion Sensors: Learning Task-specific Event Cameras Inspired by the Neural Circuit of the Human Retina
Haley So (Stanford University)
Gordon Wetzstein (Stanford University)
08:50 - 09:25 [30 min + 5 min Q/A] Keynote 2
Presenter: Chiara Bartolozzi
Title: TBD
Abstract: TBD
09:25 - 09:35 [10 min] Spotlight session (2 papers)
Comparing Representations for Event Camera-based Visual Object Tracking
Oussama Abdul Hay (Khalifa University of Science and Technology, Abu Dhabi)
Sara Alansari (Khalifa University, Abu Dhabi)
Mohamad Alansari (Khalifa University, Abu Dhabi)
Yahya Zweiri (Khalifa University of Science and Technology, Abu Dhabi)
Lattice-allocated Real-time Line Segment Feature Detection and Tracking Using Only an Event-based Camera
Mikihiro Ikura (Istituto Italiano di Tecnologia)
Arren Glover (Istituto Italiano di Tecnologia)
Masayoshi Mizuno (Sony Interactive Entertainment Inc.)
Chiara Bartolozzi (Istituto Italiano di Tecnologia)
09:35 - 10:10 [30 min + 5 min Q/A] Keynote 3
Presenter: Walterio Mayol Cuevas
Title: On-Sensor Computer Vision With Pixel Processor Arrays, Current and Future Opportunities
Abstract: On-sensor Computer Vision offers opportunities that have seldom been available to artificial systems before. Low-latency visual computation, reduced power and space budgets, and low bandwidth requirements are at the centre of the challenges preventing efficient visual systems from scaling and being deployed in the wild. If images do not need to be ferried around or reconstructed before understanding, multiple applications from robotics to IoT to all-day wearables will be unlocked. Pixel processor arrays (PPAs) are a new class of vision sensor devices that exploit advances in semiconductor technology, embedding a processor within each pixel of the image sensor array. Sensed pixel data are processed on the focal plane, and only a small amount of relevant and already processed information is transmitted out of the vision sensor. This tight integration of sensing, processing, and memory within a massively parallel computing architecture leads to an interesting trade-off between high performance, low latency, low power, low cost, and versatility in a machine vision system. In this talk, we will cover recent research that showcases a range of visual competences and applications achievable with on-sensor computation, as well as introduce the challenges that a new research project between the Universities of Manchester, Bristol and Imperial College is aiming to tackle.
10:10 - 10:35 [20 min + 5 min Q/A] Rising Star Researcher Keynotes
Presenter: Friedhelm Hamann
Title: Data-scaling strategies in Event-based Vision
Abstract: Event cameras are promising visual sensors for tracking and observation tasks. However, leveraging the recent successes in deep learning is difficult because event data is not as readily available as image-based (RGB) data, due to the limited adoption of event cameras. This talk presents different strategies to overcome data scarcity and discusses their advantages and disadvantages. Examples include the recent synthetically trained method “Event-based Tracking of Any Point” (ETAP) and an action detection framework for behavior quantification using penguin data acquired with an event camera in Antarctica.
10:35 - 11:55 [80 min] Poster session + Coffee break
Enhancing Event-Based Optical Camera Communication Via Dynamic Timing Correction
Matthew Howard - University of Dayton, Keigo Hirakawa - University of Dayton
DEIO: Deep Event Inertial Odometry
Weipeng Guan - The University of Hong Kong, Fuling Lin - The University of Hong Kong, Peiyu Chen - The University of Hong Kong, Peng Lu - The University of Hong Kong
Neural Ganglion Sensors: Learning Task-specific Event Cameras Inspired by the Neural Circuit of the Human Retina
Haley So - Stanford University, Gordon Wetzstein - Stanford University
Comparing Representations for Event Camera-based Visual Object Tracking
Oussama Abdul Hay - Advanced Research and Innovation Center (ARIC), Khalifa University of Science and Technology, Abu Dhabi, UAE; Sara Alansari - Department of Computer Science, Khalifa University, Abu Dhabi, UAE; Mohamad Alansari - Department of Computer Science, Khalifa University, Abu Dhabi, UAE; Yahya Zweiri - Advanced Research and Innovation Center (ARIC), Khalifa University, and Department of Aerospace Engineering, Khalifa University, Abu Dhabi, UAE
Quantifying Accuracy of an Event-Based Star Tracker via Earth’s Rotation
Dennis Melamed - Kitware, Connor Hashemi - Kitware, Scott McCloskey - Kitware
Lattice-allocated Real-time Line Segment Feature Detection and Tracking Using Only an Event-based Camera
Mikihiro Ikura - Istituto Italiano di Tecnologia, Arren Glover - Istituto Italiano di Tecnologia, Masayoshi Mizuno - Sony Interactive Entertainment Inc., Chiara Bartolozzi - Istituto Italiano di Tecnologia
Event-based Spinning Object SLAM
Ethan Elms - The University of Adelaide, Yasir Latif - The University of Adelaide, Tat-Jun Chin - The University of Adelaide
GraphEnet: Event-driven Human Pose Estimation with a Graph Neural Network
Gaurvi Goyal - Istituto Italiano di Tecnologia, Pham Cong Thuong - Istituto Italiano di Tecnologia, Arren Glover - Istituto Italiano di Tecnologia, Masayoshi Mizuno - Sony Interactive Entertainment Inc., Chiara Bartolozzi - Istituto Italiano di Tecnologia
SIS-Challenge: Event-based Spatio-temporal Instance Segmentation Challenge at the CVPR 2025 Event-based Vision Workshop
Friedhelm Hamann - TU Berlin, Emil Mededovic - RWTH Aachen University, Fabian Gülhan - RWTH Aachen University, Yuli Wu - RWTH Aachen University, Johannes Stegmaier - RWTH Aachen University
Multimodal Neuromorphic Event-Frame Fusion in Domain-Generalized Vision Transformer for Dynamic Object Tracking
Taha Razzaq - Tibbling Technologies, Asim Iqbal - Tibbling Technologies
Event-driven Robust Fitting on Neuromorphic Hardware
Tam Ngoc-Bang Nguyen - The University of Adelaide, Anh-Dzung Doan - The University of Adelaide, Zhipeng Cai - Intel Labs, Tat-Jun Chin - The University of Adelaide
Drone Detection with Event Cameras
Gabriele Magrini - Università degli studi di Firenze, Lorenzo Berlincioni - Università degli studi di Firenze, Federico Becattini - Università degli studi di Siena, Luca Cultrera - Università degli studi di Firenze, Pietro Pala - Università degli studi di Firenze
Toward Low-SWaP Cognitive Agents: Neuromorphic Intelligence and FPGA-Based Deployments of Event Neural Networks
Trevor Bihl - Ohio University, Rajashree Majumder - Ohio University, Zhewei Wang - Ohio University, Avinash Karanth - Ohio University, Jundong Liu - Ohio University
Exploring spatial-temporal dynamics in event-based facial microexpression analysis
Nicolas Mastropasqua - Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Ignacio Bugueno-Cordova - Institute of Engineering Sciences, University of O'Higgins. L3S Research Center, Leibniz University, Rodrigo Verschae - Institute of Engineering Sciences, University of O'Higgins, Daniel Acevedo - Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Pablo Negri - Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación
[Abstract Papers]
Spiking Transformer with Spatial-Temporal Attention
Donghyun Lee, Yuhang Li, Youngeun Kim, Shiting Xiao, Priyadarshini Panda
State-space Models for Sparse Geometric and Event Data
Mark Schöne, Karan Bania, Yash Bhisikar, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney, David Kappel
Matching Visual features on a Pixel Processor Array
Hongyi Zhang, Laurie Bose, Jianing Chen, Piotr Dudek, Walterio Mayol-Cuevas
S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks
Marco Apolinario, Kaushik Roy
Benchmarking Event-Based Object Detection in Lossy Environments
Ben Estell, Andrew Freeman
Motion Segmentation and Egomotion Estimation from Event-Based Normal Flow
Zhiyuan Hua, Dehao Yuan, Cornelia Fermuller
Beyond Domain Randomization: Event-Inspired Perception for Visually Robust Adversarial Imitation from Videos
Andrea Ramazzina, Vittorio Giammarino, Matteo El-Hariry, Dominik Scheuble, Mario Bijelic
Simultaneous Motion And Noise Estimation with Event Cameras
Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
Visual Grounding from Event Cameras
Lingdong Kong, Dongyue Lu, Ao Liang, Rong Li, Yuhao Dong, Tianshuai Hu, Lai Xing Ng, Wei Tsang Ooi, Benoit R. Cottereau
11:55 - 12:00 [5 min] Conclusions + Best Paper Award