The ROAD++ dataset

About the Dataset

ROAD++ is an extension of our previously released ROAD dataset, with an even larger number of multi-label annotated videos taken from the Waymo dataset. These videos span an even wider range of conditions, across different cities in the United States. Since ROAD++ encompasses videos from both the United Kingdom and the United States, it can be used as a benchmark not only for action detection models, but also for domain adaptation models. In the future, we plan to further extend it to new cities, countries and sensor configurations, with the long-term goal of creating an even more robust, “in the wild” setting.

The ROAD dataset was built upon the Oxford RobotCar Dataset. Please cite the original dataset if it is useful in your work; the citation can be found here. ROAD is released together with a paper and the 3D-RetinaNet code as a baseline, which also contains the evaluation code.

ROAD++ (the new version itself) is of significant size: 1000 videos are labelled, for a total of ∼ 4.6M detection bounding boxes, in turn associated with 14M unique individual labels, broken down into 3.9M agent labels, 4.3M action labels and 4.2M location labels. ROAD++ also follows the same principles as ROAD; in particular, it is a multi-label, multi-instance dataset.

ROAD++ is the result of annotating ~55k carefully selected, relatively long-duration (20 seconds each) videos from the Waymo dataset in terms of what we call road events (REs), as seen from the point of view of the autonomous vehicle capturing the video. REs are defined as triplets E = (Ag, Ac, Loc) composed of a moving agent Ag, the action Ac it performs, and the location Loc in which this takes place. Agent, action and location are all classes from a finite list compiled by surveying the content of the 55k videos. Road events are represented as ’tubes’, i.e., time series of frame-wise bounding box detections.
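To make the triplet-plus-tube structure concrete, here is a minimal Python sketch of how a single road event could be held in memory. All class names and field names below are purely hypothetical illustrations, not the actual ROAD++ annotation schema; please refer to the official repository for the real annotation format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Purely illustrative container for a road event (RE) tube.
# Field names are hypothetical and do NOT reflect the real ROAD++ schema.
@dataclass
class RoadEventTube:
    agent: str            # Ag: the moving agent class, e.g. "Pedestrian"
    actions: List[str]    # Ac: one or more action classes (multi-label)
    locations: List[str]  # Loc: one or more location classes (multi-label)
    # The 'tube': a frame-indexed series of bounding boxes (x1, y1, x2, y2) in pixels.
    boxes: Dict[int, Tuple[float, float, float, float]] = field(default_factory=dict)

    def frame_span(self) -> Tuple[int, int]:
        """First and last frame in which the event is annotated."""
        frames = sorted(self.boxes)
        return frames[0], frames[-1]

# Example: a pedestrian moving away from the AV on the right pavement,
# visible from frame 120 to frame 122 (values are made up for illustration).
event = RoadEventTube(
    agent="Pedestrian",
    actions=["MovingAway"],
    locations=["RightPavement"],
    boxes={120: (410.0, 215.0, 455.0, 330.0),
           121: (412.0, 214.0, 458.0, 331.0),
           122: (415.0, 213.0, 461.0, 332.0)},
)
print(event.frame_span())  # (120, 122)
```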
The dataset was designed according to the following principles:
  • A multi-label benchmark: each road event is composed of the label of the (moving) agent responsible, the label(s) of the type of action(s) being performed, and labels describing where the action is located. Each event can be assigned multiple instances of the same label type whenever relevant (e.g., an RE can be an instance of both moving away and turning left).
  • The labelling is done from the point of view of the AV: the final goal is for the autonomous vehicle to use this information to make the appropriate decisions. The meta-data is intended to contain all the information required to fully describe a road scenario: if one closed one’s eyes, the set of labels associated with the current video frame should be sufficient to recreate the road situation in one’s head (or, equivalently, sufficient for the AV to be able to make a decision).
  • ROAD++ allows one to validate multiple tasks associated with situation awareness for self-driving, each associated with a label type (agent, action, location) or a combination thereof: spatiotemporal (i) agent detection, (ii) action detection, (iii) location detection, (iv) agent-action detection, (v) road event detection, as well as (vi) the temporal segmentation of AV actions. For each task one can assess both frame-level detection, which outputs, independently for each video frame, the bounding box(es) (BBs) of the instances present and the relevant class labels, and video-level detection, which consists of regressing the whole series of temporally-linked bounding boxes (i.e., in current terminology, a ’tube’) associated with an instance, together with the relevant class label (a simplified linking sketch is shown after this list).
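The tube-building and evaluation logic actually used by the baseline lives in the 3D-RetinaNet repository; the snippet below is only a generic, simplified sketch (with a hypothetical link_frame_detections helper and an arbitrary IoU threshold) of how independent frame-level boxes of one class could be greedily linked into a video-level tube, to illustrate the distinction between the two detection settings.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def link_frame_detections(per_frame: Dict[int, List[Box]],
                          iou_thr: float = 0.5) -> List[Dict[int, Box]]:
    """Greedily link per-frame boxes of a single class into tubes (frame -> box)."""
    tubes: List[Dict[int, Box]] = []
    for frame in sorted(per_frame):
        for box in per_frame[frame]:
            best, best_iou = None, iou_thr
            for tube in tubes:
                last_frame = max(tube)
                # Only consider tubes that reach the previous frame and overlap enough.
                if last_frame == frame - 1 and iou(tube[last_frame], box) > best_iou:
                    best, best_iou = tube, iou(tube[last_frame], box)
            if best is not None:
                best[frame] = box            # extend the best-matching tube
            else:
                tubes.append({frame: box})   # start a new tube
    return tubes

# Toy example: two heavily overlapping detections in consecutive frames -> one tube.
detections = {0: [(100.0, 100.0, 150.0, 200.0)],
              1: [(102.0, 101.0, 152.0, 201.0)]}
print(len(link_frame_detections(detections)))  # 1
```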

Demos

Main features


Download


Please follow the instructions at https://github.com/salmank255/Road-waymo-dataset