First International Workshop on
"Real-Time Implementation and Lightweight GNNs for Conventional and Event-based Cameras",
RT-GNNs 2025 in conjunction with ICIP 2025, Anchorage, Alaska, September 2025.
Workshop Description
Object classification and detection from a video stream captured by conventional or event-based cameras is a fundamental step in applications such as visual surveillance of human activities, observation of animal and insect behavior, human-machine interaction, and all kinds of advanced mobile robotics perception systems. A large number of graph neural networks (GNNs) for the detection and classification of moving objects have been published, outperforming conventional deep learning approaches. Many scientific efforts reported in the literature aim to extend their applicability to increasingly complex scenarios. However, no single algorithm can simultaneously address all the key challenges present in long video sequences, as encountered in real cases.
However, the top background subtraction methods currently ranked on CDnet 2014 are based on deep convolutional neural networks. Their main drawbacks are high computational and memory requirements, as well as their supervised nature, which requires labeling a large amount of data. In addition, their performance decreases significantly on unseen videos. Thus, despite high performance in moving object detection, the current top algorithms are not practicable in real applications.
In recent years, GNNs have also been increasingly used in object detection, object tracking, and mobile robot navigation. Their ability to model spatial and temporal dependencies makes them well-suited for these applications, especially in dynamic environments where relationships between objects and scene elements must be continuously updated. However, real-time deployment of GNN-based solutions remains a challenge, as they often require significant computational resources, limiting their practicality in embedded and resource-constrained environments. To date, only a few works have addressed real-time and lightweight GNN algorithms.
Goal of This Workshop
The goals of this workshop are thus three-fold:
1) Designing lightweight and practicable GNN algorithms that handle low- and high-level computer vision applications using conventional or event-based cameras.
2) Proposing new algorithms that fulfill the requirements of real-time applications.
3) Proposing robust and interpretable graph learning to handle the key challenges in these applications.
Broad Subject Areas for Submitting Papers
Papers are solicited that address deep learning methods applied to image and video processing, including but not limited to the following:
Graph Signal Processing for Computer Vision
Graph Machine Learning for Computer Vision
Transductive/Inductive Graph Neural Networks (GNNs)
GNN Architectures
Zero-shot Learning
Ensemble learning-based methods
Meta-knowledge Learning methods
RGB-D cameras
Event-based cameras
Hardware Architectures for Graph Processing
Important Dates
Workshop Paper Submission Deadline: 28 May 2025
Workshop Paper Acceptance Notification: 25 June 2025
Workshop Final Paper Submission Deadline: 2 July 2025
Workshop Author Registration Deadline: 16 July 2025
Paper Submission
Papers should be submitted via CMT3
Paper Format and Length: Please see the ICIP 2025 guidelines
Main Organizer
Associate Professor (HDR)
Laboratoire MIA
La Rochelle Université,
France