ICPR 2026 Competition on VISual Tracking in Adverse Conditions
(VISTAC-2)
28th International Conference on Pattern Recognition (ICPR), December 08-17, 2026, France
ExtremeTrack Dataset is now available: Click the link
Introduction
Competition Schedule
The evolution of video databases is crucial in understanding complex spatiotemporal dynamics and extracting semantic content from video data. Understanding "Spatiotemporal Semantic Content" means deciphering how objects and actions within a scene evolve in meaning and significance across time and space. This is vital in video surveillance, where interpreting physical movements and contextual behaviors—such as flagging suspicious activities based on temporal patterns—is paramount.
Adverse weather conditions in video analysis present unique challenges due to reduced visibility from haze, rain, and other degradations that obscure key details. Sophisticated algorithms are essential for extracting actionable insights in these difficult scenarios. These advancements directly benefit surveillance, security, and autonomous navigation systems. Despite the progress fueled by deep learning and large datasets, a critical gap exists: the lack of specialized, publicly accessible datasets focused on adverse weather conditions. To address this need, we introduce the ExtremeTrack dataset, comprising 188 real-world videos (96 hazy and 92 rainy) with detailed annotations, providing accurate ground truth data for object tracking in degraded environments.
Building on the success of the first VISTAC (VISual Tracking in Adverse Conditions) challenge at ICPR 2024, which focused on nighttime infrared video tracking, VISTAC-2 extends the scope to object tracking under adverse weather conditions. Despite advancements in tracking under well-lit settings, algorithm performance often degrades in challenging scenarios involving haze and rain. To address this, VISTAC-2 introduces the ExtremeTrack dataset, featuring 188 real-world videos (96 hazy and 92 rainy) with detailed annotations. The challenge aims to benchmark and promote the development of robust and resilient tracking algorithms capable of maintaining accuracy and temporal consistency in degraded environments, supporting progress in surveillance, intelligent transportation, and autonomous vision systems.
The ExtremeTrack dataset can significantly impact the field of video analysis under adverse conditions. By providing a high-quality, specialized dataset, researchers can develop and refine algorithms tailored to the unique challenges of hazy and rainy scenarios. This will directly contribute to the improvement of surveillance and navigation systems in real-world degraded environments. We also present the Qualitative Precision (QP) metric to establish a new benchmark for evaluating machine learning-based object-tracking algorithms. QP is designed to assess the accuracy and reliability of algorithms operating within the demanding context of adverse weather video analysis. This initiative will drive progress by giving researchers a robust tool to evaluate and enhance their technological capabilities.
2025/12/25: Registration Opens
2026/03/05: Registration Closes
2026/03/13: Training Data Release
2026/03/20: Test Data Release
2026/03/25: Deadline for test results and method descriptions report submission
2026/03/29: Announcement of the final decision
Registration Link for VISTAC
Awards of VISTAC Challenge
We will award the top 3 participating teams with certificates from the ICPR 2026 committee.
The top 5 teams will be invited as authors to contribute to the competition summary paper, which will be included in the proceedings of ICPR 2026.
Dataset Link : ExtremeTrack
Competition Outline
The VISTAC (VISual Tracking in Adverse Conditions) challenge returns for its second edition at the 28th International Conference on Pattern Recognition (ICPR) 2026. The primary goal of the competition is to advance the state of the art in object tracking, particularly in environments where traditional tracking algorithms struggle. While significant progress has been made in object tracking within controlled or well-lit settings, much less research has been conducted on tracking performance under challenging weather conditions such as haze and rain. VISTAC-2 seeks to address this gap by providing a platform for the development of robust tracking algorithms that can handle degraded video quality due to environmental factors like haze, rain, and other adverse conditions.
This challenge introduces a new dataset, ExtremeTrack, designed specifically to help improve tracking under these harsh conditions. The dataset contains 188 real-world videos: 96 videos in hazy conditions and 92 in rainy conditions. These videos will serve as the foundation for evaluating new tracking algorithms and assessing their ability to handle the visual degradation commonly associated with adverse weather.
● Training Data: A total of 128 videos (66 in hazy conditions and 62 in rainy conditions) will be released for training purposes on March 13, 2026. These videos come with ground truth annotations for the tracking task.
● Validation Data: 20 videos (10 in hazy conditions and 10 in rainy conditions) will be provided with ground truth annotations to assist participants in fine-tuning their algorithms.
● Test Data: 40 additional videos (20 in hazy and 20 in rainy conditions) will be released without ground truth on March 20, 2026. These videos will be used to evaluate the tracking performance of participants in real-world scenarios.
The ground truth annotations for the dataset will be in the form of bounding boxes for each object, where each bounding box is defined by its top-left corner (x, y) and its width and height (w, h). This provides the exact location of the tracked object in each frame, allowing for precise evaluation of the tracking results.
The VISTAC-2 competition is organized into three distinct tasks, each focusing on different adverse weather conditions to evaluate the robustness of tracking algorithms using the Qualitative Precision (QP) metric:
Task 1 – Hazy-Condition Tracking:
In this task, participants will train their models exclusively on the hazy subset of the ExtremeTrack dataset. The QP metric will be evaluated only on the hazy test videos, measuring the algorithm’s performance under hazy conditions.
Task 2 – Rainy-Condition Tracking:
Participants will train their models using only the rainy subset of the ExtremeTrack dataset. The QP metric will then be computed exclusively on the rainy test videos, assessing performance under rainy conditions.
Task 3 – Combined-Condition Tracking:
This task evaluates algorithms trained on the entire combined ExtremeTrack dataset (both hazy and rainy videos). The QP metric will be calculated on the full test set, testing the algorithm’s ability to generalize across multiple adverse weather conditions.
The dataset provides two annotation files:
ExtremeTrack_train.json
ExtremeTrack_val.json
Note:
Test annotations follow the same format but are not publicly available.
Frames are organized by weather condition and dataset split.
HAZY/
    Train_Set/
        video1/
        video2/
    Val_set/
        video3/
RAIN/
    Train_set/
        video4/
    Val_set/
        video5/
Each video is stored in its own folder.
Frames may be stored:
directly inside the video folder, or
inside an img/ folder.
Example:
video_name/
    0001.jpg
    0002.jpg

or

video_name/
    img/
        0001.jpg
        0002.jpg
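Since frames may sit either directly in the video folder or inside an img/ subfolder, loading code should handle both layouts. The sketch below shows one way to do this; the helper names `resolve_frame_dir` and `list_frames` are illustrative, not part of the official toolkit.

```python
import os

def resolve_frame_dir(video_dir: str) -> str:
    """Return the directory that actually holds the frames.

    Frames may live directly in the video folder or inside an
    'img/' subfolder, so check for the subfolder first.
    """
    img_subdir = os.path.join(video_dir, "img")
    return img_subdir if os.path.isdir(img_subdir) else video_dir

def list_frames(video_dir: str):
    """Ordered list of .jpg frame paths for one video."""
    frame_dir = resolve_frame_dir(video_dir)
    return sorted(
        os.path.join(frame_dir, name)
        for name in os.listdir(frame_dir)
        if name.lower().endswith(".jpg")
    )
```

Sorting the filenames preserves the frame order implied by the zero-padded names (0001.jpg, 0002.jpg, ...).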
Each annotation JSON file is a dictionary:
{
    sequence_id : sequence_object
}
Format:
video_name_condition
Condition can be:
Haze
Rain
Example:
Car01_Haze
Bike03_Rain
Each sequence contains the following information:
video_dir
Name of the video folder.
img_names
Ordered list of frame paths.
gt_rect
Ground truth bounding box for each frame.
init_rect
Initial bounding box (same as the first ground truth box).
attr
Optional attributes or tags.
All bounding boxes use the format:
[x, y, w, h]
Where:
x = top-left x coordinate
y = top-left y coordinate
w = width
h = height
All values are in pixels.
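Given this [x, y, w, h] convention, overlap between a predicted and a ground truth box can be measured with standard intersection-over-union. This is an illustrative helper only; it is not the official QP metric, whose exact definition is provided by the organizers.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x, y, w, h] boxes (pixels)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = min(ax + aw, bx + bw) - ix
    ih = min(ay + ah, by + bh) - iy
    if iw <= 0 or ih <= 0:
        return 0.0  # no overlap
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union
```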
Each frame matches its annotation by index:
frame i → img_names[i]
frame i → gt_rect[i]
So each frame has one corresponding bounding box.
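The per-index pairing above can be exploited directly when loading the annotation files. The sketch below uses the field names given in this document (`img_names`, `gt_rect`); the function name `load_sequences` and the assumption that frame paths are relative to a dataset root are my own, not prescribed by the organizers.

```python
import json
import os

def load_sequences(ann_path: str, data_root: str):
    """Yield (sequence_id, frame_path, gt_box) triples from an
    ExtremeTrack annotation file such as ExtremeTrack_train.json."""
    with open(ann_path) as f:
        annotations = json.load(f)
    for seq_id, seq in annotations.items():
        # Frame i pairs with gt_rect[i] by index.
        for frame_rel, box in zip(seq["img_names"], seq["gt_rect"]):
            yield seq_id, os.path.join(data_root, frame_rel), box
```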
The test set ground truth is hidden.
Participants must submit predicted bounding boxes only.
Submit a single JSON file named:
ExtremeTrack_submission.json
The submission JSON should look like this:
{
    video_name : prediction_object
}
video_name must match the test video folder name.
Each video must contain:
pred_rect
List of predicted bounding boxes (one per frame).
Optional:
confidence
Confidence score for each frame.
Bounding box format:
[x, y, w, h]
Do not include ground truth information in the submission.
Do not include:
gt_rect
init_rect
img_names
The evaluation system will automatically match predictions with the test frames.
{
    "BlurCar1": {
        "pred_rect": [
            [250, 168, 106, 105],
            [249, 170, 107, 104]
        ]
    },
    "Goat": {
        "pred_rect": [
            [561, 326, 341, 469],
            [560, 325, 342, 470]
        ],
        "confidence": [0.93, 0.92]
    }
}
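A submission file in this shape can be assembled as follows. This is a minimal sketch: the function name `build_submission` is illustrative, and casting coordinates to `int` is one reasonable choice rather than a stated requirement.

```python
import json

def build_submission(predictions, out_path="ExtremeTrack_submission.json"):
    """predictions: {video_name: list of [x, y, w, h] boxes, one per frame}.

    Writes the single submission JSON; only pred_rect is included,
    never gt_rect, init_rect, or img_names.
    """
    submission = {
        name: {"pred_rect": [[int(v) for v in box] for box in boxes]}
        for name, boxes in predictions.items()
    }
    with open(out_path, "w") as f:
        json.dump(submission, f)
    return out_path
```

An optional per-frame "confidence" list can be added to each video's entry before writing.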
Please note: submitted models must be trained solely on the training set of the ExtremeTrack dataset; fine-tuning on external data is not permitted.
Fig 1: Task Chart of the competition
Competition Objectives
The ICPR 2026 Competition on Visual Tracking in Adverse Conditions (VISTAC-2) aims to advance the state of the art in robust object tracking by addressing the challenges posed by visually degraded environments such as haze and rain. Building on the success of the first edition of VISTAC, this second version focuses on expanding the dataset, improving evaluation standards, and fostering greater collaboration within the research community.
Present and release the newly developed ExtremeTrack dataset, specifically designed for single-object tracking in adverse weather conditions. The dataset contains 188 videos (96 hazy and 92 rainy), accompanied by carefully annotated ground truths for training and validation. This dataset provides a unique opportunity for researchers to develop and test tracking algorithms in real-world, visually degraded environments.
Encourage the development of robust and adaptive tracking algorithms capable of handling visibility degradation caused by environmental factors such as haze, rain, and motion blur. The competition aims to highlight the limitations of current approaches and inspire new strategies that can generalize better across diverse and challenging conditions.
Define a standardized benchmarking framework for evaluating and comparing deep learning–based tracking algorithms using the provided dataset. By introducing consistent evaluation protocols, VISTAC-2 aims to promote transparency, reproducibility, and meaningful comparison of algorithmic performance across different methods.
For queries and suggestions, contact us at: nvisot.ju.etce@gmail.com