The ROAD++ dataset

About the Datasets

ROAD++ is an extension of our previously released ROAD dataset, with an even larger number of multi-label annotated videos drawn from the Waymo dataset, ROAD-UAE (UAE), and TACO (CARLA simulation). These videos span an even wider range of conditions across different cities in the United States. Since ROAD++ encompasses videos from both the United Kingdom and the United States, it can serve as a benchmark not only for action detection models but also for domain adaptation models. In the future, we plan to extend it further to new cities, countries, and sensor configurations, with the long-term goal of creating an even more robust, “in the wild” setting.

The ROAD-Waymo Dataset

The ROAD dataset was built upon the Oxford RobotCar Dataset. Please cite the original dataset if it is useful in your work; the citation can be found here. ROAD is released together with a paper and the 3D-RetinaNet code as a baseline, which also contains the evaluation code.

ROAD++ (the new version itself) is of significant size: 1000 videos are labeled, for a total of ∼4.6M detection bounding boxes, in turn associated with 14M unique individual labels, broken down into 3.9M agent labels, 4.3M action labels, and 4.2M location labels. ROAD++ also follows the same principles as ROAD, in particular its multi-label, multi-instance design.

ROAD++ is the result of annotating ~55k carefully selected, relatively long-duration (20 seconds each) videos from the Waymo dataset in terms of what we call road events (REs), as seen from the point of view of the autonomous vehicle capturing the video. REs are defined as triplets E = (Ag, Ac, Loc) composed of a moving agent Ag, the action Ac it performs, and the location Loc in which this takes place. Agent, action, and location are all classes in finite lists compiled by surveying the content of the 55k videos. Road events are represented as ’tubes’, i.e., time series of frame-wise bounding box detections.
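To make the annotation structure concrete, below is a minimal sketch of one way a road-event tube could be represented in code. The field names and example labels are illustrative assumptions, not the official ROAD++ annotation schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative only: field names and example labels are assumptions,
# not the official ROAD++ annotation format.
@dataclass
class RoadEventTube:
    agent_labels: List[str]        # e.g. ["Car"]
    action_labels: List[str]       # e.g. ["Moving away", "Turning left"]
    location_labels: List[str]     # e.g. ["In vehicle lane"]
    # frame index -> bounding box (x1, y1, x2, y2) in pixels
    boxes: Dict[int, Tuple[float, float, float, float]] = field(default_factory=dict)

# A single event E = (Ag, Ac, Loc) spanning three consecutive frames
event = RoadEventTube(
    agent_labels=["Car"],
    action_labels=["Moving away", "Turning left"],  # multi-label: two actions at once
    location_labels=["In vehicle lane"],
    boxes={120: (340.0, 210.0, 420.0, 290.0),
           121: (342.0, 211.0, 423.0, 292.0),
           122: (345.0, 212.0, 426.0, 294.0)},
)
```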
The dataset was designed according to the following principles:
  • A multi-label benchmark: each road event is composed of the label of the (moving) agent responsible, the label(s) of the type of action(s) being performed, and labels describing where the action is located. Each event can be assigned multiple instances of the same label type whenever relevant (e.g., an RE can be an instance of both moving away and turning left). 
  • The labeling is done from the point of view of the AV: the final goal is for the autonomous vehicle to use this information to make the appropriate decisions. The meta-data is intended to contain all the information required to fully describe a road scenario. After closing one’s eyes, the set of labels associated with the current video frame should be sufficient to recreate the road situation in one’s head (or, equivalently, sufficient for the AV to be able to make a decision). 
  • ROAD++ allows one to validate manifold tasks associated with situation awareness for self-driving, each associated with a label type (agent, action, location) or a combination thereof: spatiotemporal (i) agent detection, (ii) action detection, (iii) location detection, (iv) agent-action detection, (v) road event detection, as well as (vi) the temporal segmentation of AV actions. For each task, one can assess both frame-level detection, which outputs, independently for each video frame, the bounding box(es) (BBs) of the instances present in that frame and the relevant class labels, and video-level detection, which consists of regressing the whole series of temporally linked bounding boxes (i.e., in current terminology, a ’tube’) associated with an instance, together with the relevant class label (see the linking sketch after this list).
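To illustrate the difference between frame-level and video-level (tube) detection, the following is a minimal sketch of greedily linking per-frame boxes into tubes using IoU overlap. This is a generic illustration under our own assumptions, not the linking procedure used by the 3D-RetinaNet baseline or the official evaluation code.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_into_tubes(frames: List[List[Box]], iou_thr: float = 0.5) -> List[Dict[int, Box]]:
    """Greedily link per-frame detections into tubes (frame index -> box)."""
    tubes: List[Dict[int, Box]] = []
    for t, boxes in enumerate(frames):
        unmatched = list(boxes)
        # Try to extend tubes that were alive at the previous frame.
        for tube in tubes:
            if (t - 1) not in tube or not unmatched:
                continue
            best = max(unmatched, key=lambda b: iou(tube[t - 1], b))
            if iou(tube[t - 1], best) >= iou_thr:
                tube[t] = best
                unmatched.remove(best)
        # Remaining detections start new tubes.
        for b in unmatched:
            tubes.append({t: b})
    return tubes
```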

The TACO Dataset

The TACO dataset is designed for the atomic activity recognition task. The task offers an expressive description that grounds the road topology in road users' actions and largely reduces the annotation cost compared to frame-wise action detection labeling. To overcome the long-tail distribution found in the real world, we collect TACO in the CARLA simulator, which enables more efficient collection of a diverse, balanced, and large-scale dataset.

Annotation of atomic activity: given a short clip, we label a road user's action by describing its movement across the different regions of the intersection. An atomic activity class is formulated as:

 (region_start -> region_end: agent_types)

where the regions denote the 4 roadways (Z1, Z2, Z3, Z4) and the 4 corners (C1, C2, C3, C4) of an intersection, and agent_type can be one of 6 types: four-wheeler (C), two-wheeler (K), pedestrian (P), grouped four-wheelers (C+), grouped two-wheelers (K+), and grouped pedestrians (P+).

Combining movements over the defined road topology with the agent types yields 64 atomic activity classes in total, which can be formulated as a multi-label action recognition task.
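As an illustration only, the snippet below parses a label in the (region_start -> region_end: agent_type) format and encodes a clip's set of atomic activities as a multi-hot target for multi-label recognition. The exact label strings and class ordering used by TACO may differ; the names here are assumptions.

```python
from typing import List, Tuple

REGIONS = ["Z1", "Z2", "Z3", "Z4", "C1", "C2", "C3", "C4"]
AGENT_TYPES = ["C", "K", "P", "C+", "K+", "P+"]

def parse_activity(label: str) -> Tuple[str, str, str]:
    """Split 'Z1->Z3: C+' into (start_region, end_region, agent_type)."""
    movement, agent = label.split(":")
    start, end = movement.split("->")
    return start.strip(), end.strip(), agent.strip()

def encode_multi_hot(clip_labels: List[str], class_list: List[str]) -> List[int]:
    """Encode a clip's atomic activities as a multi-hot target vector."""
    target = [0] * len(class_list)
    for label in clip_labels:
        target[class_list.index(label)] = 1
    return target

# Hypothetical class list and clip annotation, for illustration only.
classes = ["Z1->Z3: C", "Z1->Z3: C+", "Z2->Z4: P", "Z4->C1: K"]
clip = ["Z1->Z3: C+", "Z2->Z4: P"]      # two activities occur in the same clip
print(parse_activity("Z1->Z3: C+"))     # ('Z1', 'Z3', 'C+')
print(encode_multi_hot(clip, classes))  # [0, 1, 1, 0]
```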

To this end, we collect the largest dataset for atomic activity recognition, with 5178 clips and 16521 atomic activity labels. Its balanced distribution also enables a more comprehensive analysis of models across diverse scenarios.

The images are collected at a resolution of 512x1536 and recorded at 20 Hz; the average clip length is 109.3 frames.

Dataset usage

Before use, images are downsampled to a resolution of 256x768.

For the training set, we augment the data by randomly sampling 16 frames from each clip.

For the validation and test sets, we use a fixed sampling strategy to ensure that the same image frames are always used (a minimal sketch of both strategies follows below).
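A minimal sketch of how such a sampling scheme could look is given below. The function names and the choice of evenly spaced indices for evaluation are our assumptions, not the exact preprocessing shipped with TACO.

```python
import random
from typing import List

def sample_train_frames(num_frames: int, clip_len: int = 16) -> List[int]:
    """Training: randomly sample `clip_len` frame indices (sorted) as augmentation."""
    indices = random.sample(range(num_frames), min(clip_len, num_frames))
    return sorted(indices)

def sample_eval_frames(num_frames: int, clip_len: int = 16) -> List[int]:
    """Validation/test: deterministic, evenly spaced indices so the same frames are reused."""
    step = num_frames / clip_len
    return [min(int(i * step), num_frames - 1) for i in range(clip_len)]

# Example: a clip of 109 frames (close to the average TACO clip length).
print(sample_eval_frames(109))   # always the same 16 indices
print(sample_train_frames(109))  # a different random subset on every call
```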

Please refer to the TACO dataset for more details.


Download


Please follow the instructions at TBA


TACO: [One Drive]