First International Workshop on "AI-based All-Weather Surveillance System" (AWSS 2024), in conjunction with ACCV 2024, December 9, Hanoi, Vietnam.
Workshop Description
Advances in computer vision and the falling cost of camera hardware have enabled the massive deployment of cameras for monitoring physical premises. The extensive deployment of fixed and movable cameras for control and safety has resulted in visual data collection for online and post-event analysis. However, environmental conditions such as haze or fog, snow, dust, raindrops, and rain streaks degrade the perceptual quality of the data, which in turn hurts performance on high-level computer vision tasks such as change detection, object detection, traffic monitoring, border surveillance, behavior analysis, video synopsis, action recognition, anomaly detection, object tracking, and motion magnification. In the literature, various modeling methods based on deep learning (CNNs, GNNs) and graph signal processing have been employed to address weather-specific applications (removal of rain, fog, snow, or haze) in isolation. Only a few algorithms handle multiple weather conditions with a unified network. Moreover, these algorithms are computationally expensive, which leads to poor inference performance in real-world scenarios, and they often generalize poorly to unseen scenarios. In addition, very few algorithms perform simultaneous image/video restoration and static/moving object detection under these challenging multi-weather conditions.
Most of these algorithms employ two-stage architectures: in the first stage, an application-specific image/video restoration (de-weathering) algorithm is applied, and in the second stage, high-level tasks such as static/moving object detection are performed. Thus, there is an immense need to design and develop end-to-end unified learning architectures that restore images/videos and detect static/moving objects under sparse to extreme multi-weather conditions.
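To make the distinction concrete, the following is a minimal illustrative sketch (not part of the call, with hypothetical module names and sizes) of an end-to-end unified model: a shared encoder feeds both a restoration decoder and a detection head, so a single network is trained jointly instead of chaining a separate de-weathering stage before detection.

```python
# Minimal sketch, assuming a PyTorch setup: a shared encoder with a
# restoration branch and a detection branch, trained end-to-end. All
# module names, sizes, and losses here are illustrative assumptions.
import torch
import torch.nn as nn

class UnifiedRestoreDetect(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared encoder extracts features from the weather-degraded frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Restoration branch reconstructs a clean image (low-level task).
        self.restore = nn.Conv2d(64, 3, 3, padding=1)
        # Detection branch predicts per-pixel object/background scores,
        # a stand-in for static/moving object detection.
        self.detect = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.restore(feats), self.detect(feats)

# Joint training: one combined loss couples restoration and detection,
# so no separate de-weathering stage is needed at inference time.
model = UnifiedRestoreDetect()
degraded = torch.randn(1, 3, 128, 128)        # weather-degraded input
clean_gt = torch.randn(1, 3, 128, 128)        # clean reference frame
mask_gt = torch.randint(0, 2, (1, 128, 128))  # object/background mask
restored, logits = model(degraded)
loss = nn.functional.l1_loss(restored, clean_gt) + \
       nn.functional.cross_entropy(logits, mask_gt)
loss.backward()
```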
Goal of This Workshop
The goals of this workshop are three-fold:
Designing unified frameworks that handle low- and high-level computer vision applications such as intelligent transportation, intelligent surveillance systems, and conventional/aerial image or video enhancement;
Proposing new algorithms that can fulfil the requirements of real-time applications;
Proposing robust and interpretable deep learning methods to handle the key challenges in these applications.
Broad Subject Areas for Submitting Papers
Papers are solicited that address deep learning methods for AI-based all-weather surveillance systems, including but not limited to the following:
Graph Machine Learning for Computer Vision.
Transductive/Inductive Graph Neural Networks (GNNs), Generative Adversarial Networks (GANs), Transformers, etc.
GNNs Architectures, GANs Architectures, etc.
Zero-shot Learning, Few-shot Learning.
Graph Signal Processing for Computer Vision.
Graph Spectral Clustering for Computer Vision.
Ensemble Learning-based Methods.
Meta-knowledge Learning Methods.
RGB-D cameras, IR cameras, Event-based cameras.
Video-surveillance, Passive Vision Monitoring for Surface and Underwater Scenes.
Important Dates
Full Paper Submission Deadline: September 20, 2024
Decisions to Authors: September 30, 2024
Camera-ready Deadline: October 11, 2024
Workshop Day: December 9, 2024
Selected papers, after extensions and further revisions, will be published in a special issue of an international journal.
Paper Submission
The link for paper submission is https://cmt3.research.microsoft.com/AWSS2024
Paper Format and Length: Please see the ACCV 2024 guidelines.
Main Organizers
Associate Professor
School of Artificial Intelligence and Data Engineering
Indian Institute of Technology Ropar (IIT Ropar), India.