NEW: ROAM CHALLENGE 1 LINK RELEASED AUGUST 7TH: https://huggingface.co/spaces/Artificio/ROAM1RealWorldAdversarialAttack
NEW: ROAM CHALLENGE 2 LINK RELEASED AUGUST 5TH: https://huggingface.co/spaces/Artificio/ROAM2FewShotChallenge
Important Dates [New! Updated August 7th, 2024]
August 1st, 2024: Challenge portals open for submissions.
August 16th, 2024: Last Day of Submissions (New submission deadline: September 20th, 2024)
August 20th, 2024: Challenge 1 & 2 Winners Announced (New winners announcement date: September 23rd, 2024)
The Real-World Adversarial Attack challenge from ROAM addresses the critical issue of deploying deep learning (DL) systems in environments where images may be intentionally adversarial. The challenge emphasizes the importance of developing detection systems that remain robust under real-world adversarial attacks.
Challenge Description
Participants will craft adversarial attacks and evaluate them against non-adversarially-trained object detectors on a specially curated autonomous-driving dataset. This dataset, comprising diverse scenarios from various regions, includes unique geometric and semantic challenges not typically present in conventional datasets such as KITTI, Waymo, nuScenes, or Cityscapes. The task is to submit a batch of attacks in the form of printable patches that induce misdetection.
Key Challenge Details
Initial Training Data: Models may be trained using data from KITTI, Waymo, nuScenes, Cityscapes, or any other external sources such as simulators.
Patch Specification: Each color patch is a square of 128 x 128 x 3 (H x W x C) pixels and can be trained to perform targeted or untargeted attacks; see the training sketch after this list.
Techniques: All adversarial color-patch training techniques are encouraged, particularly those that achieve transferability.
Data Release: The specific target domain will not be publicly disclosed until the end of the challenge.
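As a concrete starting point, here is a minimal sketch of untargeted patch optimization, assuming a white-box surrogate: torchvision's Faster R-CNN stands in for the hidden challenge detectors, and the fixed paste location, score-suppression objective, and random placeholder images are illustrative assumptions, not the organizers' setup.

```python
# A minimal sketch, assuming a white-box surrogate detector; the paste
# location, objective, and training data below are illustrative placeholders.
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized

# 128 x 128 x 3 patch per the challenge spec (stored as C x H x W for PyTorch).
patch = torch.rand(3, 128, 128, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste_patch(image, patch, y=200, x=300):
    """Overwrite a region of the image with the patch. A real attack would
    randomize placement, scale, and lighting to improve transferability."""
    out = image.clone()
    out[:, y:y + 128, x:x + 128] = patch.clamp(0, 1)
    return out

# Stand-in for your own KITTI/Waymo/nuScenes/Cityscapes image tensors in [0, 1].
training_images = [torch.rand(3, 512, 1024, device=device) for _ in range(4)]

for step in range(100):
    optimizer.zero_grad()
    loss = torch.zeros((), device=device)
    for img in training_images:
        detections = model([paste_patch(img, patch)])[0]
        # Untargeted FN-style objective: push all detection scores down.
        loss = loss + detections["scores"].sum()
    if loss.requires_grad:  # skip steps where nothing was detected
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0, 1)
```

Optimizing through a surrogate and relying on transferability is the natural approach here, since the evaluation detectors are black-box.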
Evaluation Metrics
The two metrics for this challenge are the False Positive (FP) and False Negative (FN) rates over the Car and Pedestrian classes, reported by 5 different object detectors after the adversarial patch is painted into the test images. The detectors' architectures will be unknown to the participants, to replicate a realistic black-box scenario.
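For reference, below is a minimal sketch of per-class FP/FN counting, assuming a 0.5 IoU match threshold; the organizers' exact matching protocol is not specified here.

```python
# A minimal sketch, assuming a 0.5 IoU match threshold.
import torch
from torchvision.ops import box_iou

def fp_fn_counts(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Count false positives and false negatives for one class in one image.

    pred_boxes, gt_boxes: float tensors of shape (N, 4) in xyxy format.
    """
    if len(pred_boxes) == 0:
        return 0, len(gt_boxes)          # no predictions: every GT is missed
    if len(gt_boxes) == 0:
        return len(pred_boxes), 0        # no GT: every prediction is spurious

    iou = box_iou(pred_boxes, gt_boxes)  # shape (num_pred, num_gt)
    fp = int((iou.max(dim=1).values < iou_thresh).sum())  # unmatched preds
    fn = int((iou.max(dim=0).values < iou_thresh).sum())  # unmatched GTs
    return fp, fn
```

An FP-style attack therefore aims to create spurious high-confidence detections, while an FN-style attack aims to suppress detections of real objects.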
Dataset, Tools and Submission
Participants will submit 4 patches through the workshop's Hugging Face workspace, one maximizing each per-class failure metric (FP for Car, FP for Pedestrian, FN for Car, and FN for Pedestrian). After detection performance on the adversarial images is evaluated internally, the results will appear in the Hugging Face ranking table.
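A minimal packaging sketch follows; the file names and PNG format are assumptions, so follow whatever naming and format the Hugging Face space actually requests.

```python
# A minimal sketch; file names and format are assumptions, not the official spec.
import numpy as np
from PIL import Image

patches = {
    "fp_car": np.random.rand(128, 128, 3),         # placeholders for your
    "fp_pedestrian": np.random.rand(128, 128, 3),  # four trained patches,
    "fn_car": np.random.rand(128, 128, 3),         # one per metric-class
    "fn_pedestrian": np.random.rand(128, 128, 3),  # combination
}

for name, arr in patches.items():
    Image.fromarray((arr * 255).astype(np.uint8)).save(f"{name}.png")
```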
Real-World Evaluation
The patches from the Top-3 ranked submissions will be printed and placed in real-world contexts similar to the test images, to assess their transferability across different camera lenses, illumination conditions, and semantic content.
Participation
Researchers and developers interested in autonomous driving technologies are encouraged to participate. Registration details are available on the ECCV workshop's official page. Participants are not required to submit a paper to the workshop or the conference. Keep in mind that if you are selected among the Top-3 ranking, you will be asked to submit a 3-4 page technical report on your training process, with the opportunity to give an oral presentation at the workshop.
Challenge Description (Few-Shot Challenge)
Participants will utilize models initially trained on the ImageNet-1K dataset. The challenge involves fine-tuning these models using only eight support images in a few-shot learning setup. The task is split into two distinct groups:
Easily Distinguishable Classes: For example, tuk-tuks, which are distinct from other vehicles in their appearance and function.
Sub-Groups of Common Classes: For example, fuel-transporting trucks, which require specific recognition due to unique regulatory requirements in traffic, such as maintaining a greater distance from these vehicles.
The goal is for models to effectively recognize and classify images into these specific categories with high precision, using the provided support set.
Key Challenge Details
Initial Training Data: ImageNet-1K.
Few-Shot Learning: Fine-tuning with only eight support images; see the sketch after this list.
Application Focus: Autonomous driving, with emphasis on safety and regulatory compliance for specific vehicle types.
Allowed Techniques: Techniques that address out-of-distribution samples and adversarial training are permitted, provided that there's no exposure to the target domain.
Restrictions: The use of large language models (LLMs) is prohibited due to the difficulty of verifying their training-data domains.
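As a rough illustration of the few-shot setup, the sketch below freezes an ImageNet-1K backbone and fits a linear head on the eight support images. The ResNet-50 backbone, two-class head, and training schedule are assumptions rather than a prescribed baseline.

```python
# A minimal sketch, assuming a frozen ResNet-50 backbone and a linear probe.
import torch
import torch.nn as nn
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.fc = nn.Identity()          # expose the 2048-d penultimate features
backbone.to(device).eval()
for p in backbone.parameters():
    p.requires_grad_(False)          # probe only, no backbone updates

num_classes = 2                      # e.g. the two target groups
head = nn.Linear(2048, num_classes).to(device)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-ins for the eight released support images after ImageNet preprocessing.
support_x = torch.rand(8, 3, 224, 224, device=device)
support_y = torch.randint(0, num_classes, (8,), device=device)

with torch.no_grad():
    feats = backbone(support_x)      # cache features once; the probe is cheap

for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(head(feats), support_y)
    loss.backward()
    optimizer.step()
```

With only eight support images, a frozen backbone plus a lightweight head is a sensible default, since full fine-tuning would overfit immediately.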
Evaluation Metrics
The primary metric for this challenge will be Accuracy, assessing how effectively the models can classify new, specific vehicle types based on minimal prior exposure.
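In other words, ranking reduces to top-1 accuracy over the hidden test set; a minimal sketch:

```python
# Top-1 accuracy: the fraction of test images whose predicted class matches
# the ground-truth label.
import torch

def accuracy(logits, labels):
    """logits: (N, num_classes) model outputs; labels: (N,) class ids."""
    return (logits.argmax(dim=1) == labels).float().mean().item()
```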
Dataset and Tools
Details about the dataset, including the specific classes for the few-shot training phase, will be released three days to one week before the submission deadline. This setup tests the models' adaptability and precision in real-world driving scenarios.
Participation
Researchers and developers interested in autonomous driving technologies are encouraged to participate. Registration details are available on the ECCV workshop's official page. Baseline models and a validation toolkit will be provided to facilitate effective competition.