The workshop will take place in the Panorama Lounge.
14:00 - 14:05 [5 min] Introduction
14:05 - 14:35 [30 min] Keynote 1
Presenter: Guillermo Gallego [SLIDES]
Title: Event cameras: motion estimation and animal behavior quantification
Abstract: Event cameras are novel vision sensors that mimic functions of the human retina and offer potential advantages over traditional cameras for tackling challenging problems, such as those involving high-speed motion, high-dynamic-range illumination, and power-constrained scenarios. This talk will present recent advances from the Robotic Interactive Perception Lab at TU Berlin on processing event data for topics of interest, including ego-motion estimation (SLAM), optical flow estimation, 3D reconstruction, animal monitoring, and schlieren imaging.
14:35 - 15:05 [30 min] Spotlight session (6 papers)
Wenhao Xu (University of Science and Technology of China)
Wenming Weng (University of Science and Technology of China)
Yueyi Zhang (University of Science and Technology of China)
Zhiwei Xiong (University of Science and Technology of China)
Weiqi Luo (Wuhan University)
Chi Zhang (Wuhan University)
Lei Yu (Wuhan University)
Valery Vishnevskiy (ETH Zurich)
Greg D Burman (Sony Europe Ltd)
Sebastian Kozerke (ETH Zurich)
Diederik Paul Moeys (Sony)
Carlos Plou (University of Zaragoza)
Nerea Gallego (University of Zaragoza)
Alberto Sabater (University of Zaragoza)
Eduardo Montijano (University of Zaragoza)
Pablo Urcola (Bitbrain)
Luis Montesano (University of Zaragoza; Bitbrain)
Ruben Martinez-Cantin (University of Zaragoza)
Ana C Murillo (University of Zaragoza)
Suman Ghosh (TU Berlin)
Valentina Cavinato (Sony)
Guillermo Gallego (TU Berlin)
Jayasingam Adhuran (Kingston University)
Nabeel Khan (University of Birmingham Dubai (UoBD))
Maria Martini (Kingston University, UK)
15:05 - 15:35 [30 min] Keynote 2
Presenter: Stanislaw Wozniak [SLIDES]
Title: Heading towards comprehensive neuromorphic vision
Abstract: Neuromorphic computing takes inspiration from biology to develop efficient, adaptive, and intelligent next-generation computing systems. In vision, mimicking the eye’s retina with neuromorphic event-based sensors has demonstrated capabilities unmatched by conventional cameras. In this talk, we will discuss the potential of drawing further inspiration from the eye: its movements and its temporal coding of information. We will demonstrate how this improves computational capabilities, efficiency, and accuracy, leading towards a powerful, comprehensive neuromorphic vision approach.
15:35 - 16:40 [65 min] Poster session + Coffee break
Wenhao Xu (University of Science and Technology of China)
Wenming Weng (University of Science and Technology of China)
Yueyi Zhang (University of Science and Technology of China)
Zhiwei Xiong (University of Science and Technology of China)
Weiqi Luo (Wuhan University)
Chi Zhang (Wuhan University)
Lei Yu (Wuhan University)
Valery Vishnevskiy (ETH Zurich)
Greg D Burman (Sony Europe Ltd)
Sebastian Kozerke (ETH Zurich)
Diederik Paul Moeys (Sony)
Carlos Plou (University of Zaragoza)
Nerea Gallego (University of Zaragoza)
Alberto Sabater (University of Zaragoza)
Eduardo Montijano (University of Zaragoza)
Pablo Urcola (Bitbrain)
Luis Montesano (University of Zaragoza; Bitbrain)
Ruben Martinez-Cantin (University of Zaragoza)
Ana C Murillo (University of Zaragoza)
Suman Ghosh (TU Berlin)
Valentina Cavinato (Sony)
Guillermo Gallego (TU Berlin)
Jayasingam Adhuran (Kingston University)
Nabeel Khan (University of Birmingham Dubai (UoBD))
Maria Martini (Kingston University, UK)
Drone Detection Using a Low-Power Neuromorphic Virtual Tripwire
Anton Eldeborg Lundin (Swedish Defence Research Agency)
Rasmus Winzell (Swedish Defence Research Agency)
Hanna H Hamrell (Swedish Defence Research Agency)
David Gustafsson (Swedish Defence Research Agency (FOI))
Hannes Ovren (Swedish Defence Research Agency)
Event-Stream Super Resolution using Sigma Delta Neural Network
Waseem Shariff (University of Galway)
Joseph Lemley (Tobii, Galway, Ireland)
Peter M Corcoran (National University of Ireland, Galway)
Tracking-Assisted Object Detection with Event Cameras
Ting-Kang Yen (National Taiwan University)
Igor Morawski (National Taiwan University)
Shusil Dangi (Qualcomm Inc.)
Kai He (Qualcomm Technologies, Inc.)
Chung-Yi Lin (National Taiwan University)
Jia-Fong Yeh (National Taiwan University)
Hung-Ting Su (National Taiwan University)
Winston H. Hsu (National Taiwan University)
MouseSIS: A Frames-and-Events Dataset for Space-Time Instance Segmentation of Mice
Friedhelm Hamann (Technical University Berlin)
Hanxiong Li (TU Berlin)
Paul Mieske (Freie Universität Berlin)
Lars Lewejohann (SCIoI, FU-Berlin, BfR Berlin)
Guillermo Gallego (TU Berlin)
HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision
Burak Ercan (Hacettepe University)
Onur Eker (Hacettepe University)
Aykut Erdem (Koc University)
Erkut Erdem (Hacettepe University)
Millisecond-latency Visual Fault-buttons using Event-cameras
Stefano Chiavazza (Istituto Italiano di Tecnologia)
Chiara Bartolozzi (Istituto Italiano di Tecnologia)
Arren Glover (Istituto Italiano di Tecnologia)
Neuromorphic Facial Analysis with Cross-Modal Supervision
Federico Becattini (University of Siena)
Luca Cultrera (University of Florence)
Lorenzo Berlincioni (University of Florence)
Claudio Ferrari (University of Parma)
Andrea Leonardo (University of Florence)
Alberto Del Bimbo (University of Florence)
Evaluating Image-Based Face and Eye Tracking with Event Cameras
Khadija Iddrisu (Dublin City University)
Waseem Shariff (University of Galway)
Noel O'Connor (Dublin City University)
Joseph Lemley (Tobii, Galway, Ireland)
Suzanne Little (Dublin City University, Ireland)
Scaling Up Resonate-and-Fire Networks for Fast Deep Learning
Thomas Huber (fortiss)
Jules Lecomte (fortiss GmbH)
Axel von Arnim (fortiss GmbH)
Borislav Polovnikov (Ludwig-Maximilians-Universität)
Neuromorphic Drone Detection: an Event-RGB Multimodal Approach
Gabriele Magrini (University of Florence)
Federico Becattini (University of Siena)
Pietro Pala (University of Florence)
Alberto Del Bimbo (University of Florence)
Antonio Porta (Leonardo)
Pushing the boundaries of event subsampling in event-based video classification using CNNs
Hesam Araghi (TU Delft)
Jan C van Gemert (Delft University of Technology)
Nergis Tomen (Delft University of Technology)
Vibration Vision: Real-Time Machinery Fault Diagnosis with Event Cameras
Muhammad Aitsam (Sheffield Hallam University)
Gaurvi Goyal (Italian Institute of Technology)
Chiara Bartolozzi (Istituto Italiano di Tecnologia)
Alessandro Di Nuovo (Sheffield Hallam University)
S-ROPE: Spectral Frame Representation of Periodic Events
Luis Garcia Rodriguez (University of Münster)
Jonas Konrad (University of Münster)
Dominik Drees (University of Münster)
Benjamin Risse (University of Münster)
Autobiasing Event Cameras
Mehdi Sefidgar Dilmaghani (University of Galway)
Waseem Shariff (University of Galway)
Cian Ryan (Xperi)
Joseph Lemley (Tobii, Galway, Ireland)
Peter M Corcoran (National University of Ireland, Galway)
Recent Event Camera Innovations: A Survey
Bharatesh Chakravarthi (Arizona State University)
Aayush Atul Verma (Arizona State University)
Kostas Daniilidis (University of Pennsylvania)
Cornelia M Fermuller (University of Maryland, College Park)
Yezhou Yang (Arizona State University)
EvDownsampling: A Robust Method For Downsampling Event Camera Data
Bharatesh Chakravarthi (Arizona State University)
Aayush Atul Verma (Arizona State University)
Kostas Daniilidis (University of Pennsylvania)
Cornelia M Fermuller (University of Maryland, College Park)
Yezhou Yang (Arizona State University)
Abstract papers
Neuromorphic Line Detection as Event Data Preprocessing
Amélie Gruel (IMS laboratory, University of Bordeaux)
Pierre Lewden (CNRS@CREATE)
Adrien F. Vincent (IMS laboratory, University of Bordeaux)
Sylvain Saïghi (IMS laboratory)
Active Fixation as an Efficient Coding Strategy for Neuromorphic Vision
Simone Testa (University of Genoa)
Silvio P. Sabatini (University of Genoa)
Andrea Canessa (University of Genoa)
Low-power, Continuous Remote Behavioral Localization with Event Cameras
Friedhelm Hamann (TU Berlin)
Suman Ghosh (TU Berlin)
Ignacio Juarez-Martinez (University of Oxford)
Tom Hart (Oxford Brookes)
Alex Kacelnik (Oxford University)
Guillermo Gallego (TU Berlin)
Event-based Mosaicing Bundle Adjustment
Shuang Guo (TU Berlin)
Guillermo Gallego (TU Berlin)
Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation
Friedhelm Hamann (Technical University Berlin)
Ziyun Wang (University of Pennsylvania)
Ioannis Asmanis (University of Pennsylvania)
Kenneth Chaney (University of Pennsylvania)
Guillermo Gallego (TU Berlin)
Kostas Daniilidis (University of Pennsylvania)
Two Tales of Single-Phase Contrastive Hebbian Learning
Rasmus K Høier (Chalmers University of Technology)
Christopher Zach (Chalmers University of Technology)
Event-based Shape from Polarization
Manasi Muglikar (University of Zurich)
Event-Based Background-Oriented Schlieren
Shintaro Shiba (Keio University)
Friedhelm Hamann (Technical University Berlin)
Yoshimitsu Aoki (Keio University)
Guillermo Gallego (TU Berlin)
Shintaro Shiba (Keio University)
Yannick Klose (TU Berlin)
Yoshimitsu Aoki (Keio University)
Guillermo Gallego (TU Berlin)
Luca Bartolomei (Università di Bologna)
Matteo Poggi (University of Bologna)
Andrea Conti (University of Bologna)
Stefano Mattoccia (University of Bologna)
Measuring Cognitive Load Through Event Camera Based Human-Pose Estimation
Muhammad Aitsam (Sheffield Hallam University)
Gaurvi Goyal (Italian Institute of Technology)
Chiara Bartolozzi (Istituto Italiano di Tecnologia)
Alessandro Di Nuovo (Sheffield Hallam University)
16:40 - 17:20 [40 min] Rising Star Researcher Keynotes
Presenter: Giulia D'Angelo [SLIDES]
Title: What's catching your eye: Bioinspired and neuromorphic algorithms to model visual attention
Abstract: Vision is an exploratory behaviour that relies heavily on the dynamic relationship between actions and sensory feedback. For any agent, whether animal or robotic, processing visual sensory input efficiently is crucial for understanding and interacting with its environment. The key challenge lies in selectively filtering relevant information from the constant stream of complex sensory data. This process, known as selective attention, is driven by the intricate interplay between bottom-up and top-down mechanisms, which together organize and interpret visual scenes. The talk will explore how biologically plausible models of visual attention can enhance robotic interaction with the environment, and will examine the role of neuromorphic hardware in facilitating active vision, along with its limitations.
Presenter: Rui Graça [SLIDES]
Title: SciDVS: A Scientific Event Camera
Abstract: SciDVS is a novel event camera targeting scientific applications, able to achieve contrast thresholds down to 1.7% even in a dim indoor scene of 20 lx. The increase in sensitivity was achieved by exploiting the physical limits of the DVS pixel and improving control over the temporal and spatial integration of light. This talk will overview the physical phenomena affecting the performance of event cameras in terms of sensitivity, noise, and temporal response, and discuss how a better understanding of these physical limitations can enable a more accurate interpretation of the generated events. The talk will also present how a better understanding of the physics of the DVS pixel led to the optimized SciDVS pixel.
17:20 - 17:50 [30 min] Ask the speakers (Combined Q/A)
17:50 - 18:00 [10 min] Closing remarks + Best Paper Award