VIENA2
________________________________________________________
Virtual Environment for Action Anticipation
_______________________________________
VIENA2: A Driving Anticipation Dataset
Action anticipation is critical in scenarios where one needs to react before the action is finalized. This is, for instance, the case in automated driving, where a car needs to, e.g., avoid hitting pedestrians and respect traffic lights. While solutions have been proposed to tackle subsets of the driving anticipation tasks, by making use of diverse, task-specific sensors, there is no single dataset or framework that addresses them all in a consistent manner. In this paper, we therefore introduce a new, large-scale dataset, called VIENA2, covering 5 generic driving scenarios, with a total of 25 distinct action classes.
VIENA2 Scenarios
Scenario 1 (Driver Manoeuvre): Moving forward, stopping, turning, changing lanes
Scenario 2 (Accidents): No accident, or accidents with pedestrians, cars, or assets
Scenario 3 (Traffic Rules): Stopping at or passing a red light, driving in the correct or wrong direction, driving off-road
Scenario 4 (Pedestrian Intention): No pedestrian, crossing the road, walking alongside the road, or stopping
Scenario 5 (Front Car Intention): Moving forward, stopping, turning, changing lanes
Download VIENA2-v1
By downloading the files, you agree to use this dataset only for academic, non-commercial purposes.
Steering information: Download
Speed information: Download
Scenario 1: Part1 (10GB), Part2 (10GB), Part3 (10GB), Part4 (10GB), Part5 (10GB), Part6 (10GB), Part7 (320MB)
Scenario 2: Part1 (10GB), Part2 (10GB), Part3 (10GB), Part4 (820MB)
Scenario 3: Part1 (10GB), Part2 (10GB), Part3 (10GB), Part4 (4.5GB)
Scenario 4: Download (28GB)
Scenario 5: Part1 (10GB), Part2 (10GB), Part3 (10GB), Part4 (10GB), Part5 (2GB)
Instructions to download VIENA2:
Each scenario is distributed as a set of partitioned tar files, so you first need to download all parts of a particular scenario and then concatenate them before extracting. You can do so by following the instructions below:
Create a directory for the dataset and the particular scenario:
mkdir viena2_dataset && cd viena2_dataset
mkdir Scenario1 && cd Scenario1
Next, to download the partitions of Scenario1, copy the download links and fetch each part by executing
wget -O part1.tar.gz.partaa https://cloudstor.aarnet.edu.au/plus/s/uwSFIyiQVMrL6hz/download
Repeat the wget command for each remaining partition of Scenario1, making sure to change the argument passed to the -O option accordingly (e.g., part1.tar.gz.partab, part1.tar.gz.partac, … and so on).
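If you prefer not to edit the -O argument by hand each time, the part suffixes can be generated and paired with the links programmatically. A minimal sketch (the URLs below are placeholders; substitute the real download links for the scenario):

```shell
# Pair each download link with its partaa, partab, ... suffix and
# emit the corresponding wget command. This is a dry run: commands
# are printed to a script, not executed. The URLs are placeholders.
printf '%s\n' \
  'https://example.com/download/1' \
  'https://example.com/download/2' \
  'https://example.com/download/3' > links.txt
printf '%s\n' aa ab ac > suffixes.txt   # extend for scenarios with more parts
paste suffixes.txt links.txt | while read -r s url; do
  echo "wget -O part1.tar.gz.part$s $url"
done > download_cmds.sh
cat download_cmds.sh
```

Once the placeholder URLs are replaced with the real links, review download_cmds.sh and run it with `sh download_cmds.sh`.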
Once you've downloaded all the partitioned tar files for Scenario1, concatenate them into a single .tar.gz file by executing
cat part1.tar.gz.parta* > Scenario1.tar.gz
Note that this step might take a while to complete.
Finally, you can untar the newly created file by running
tar -xzf Scenario1.tar.gz
Note that this step may also take a few minutes, depending on the overall file size.
You can repeat these four steps for all scenarios.
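The concatenation step works because the hosted parts are plain byte slices of a single archive (as produced by `split`), and the shell expands `parta*` in the alphabetical order the parts were cut in. If you want to sanity-check the workflow before committing to a 10GB download, here is a self-contained dry run on dummy data; all filenames are illustrative:

```shell
# Build a small archive, slice it like the hosted parts, then
# reassemble and extract it, mirroring the scenario instructions.
mkdir -p demo/data
echo "hello viena2" > demo/data/sample.txt
head -c 4096 /dev/urandom > demo/data/filler.bin    # incompressible padding
tar -czf demo/archive.tar.gz -C demo data
split -b 1024 demo/archive.tar.gz demo/archive.tar.gz.part   # -> partaa, partab, ...
cat demo/archive.tar.gz.parta* > demo/rebuilt.tar.gz         # same cat step as above
cmp demo/archive.tar.gz demo/rebuilt.tar.gz && echo "parts reassemble byte-for-byte"
mkdir -p demo/out
tar -xzf demo/rebuilt.tar.gz -C demo/out
cat demo/out/data/sample.txt
```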
Acknowledgement: Thanks to Ramashish for preparing these instructions.
Note: Due to the large file size of the optical flow frames, we do not host them; please compute the optical flow of each video sample using the third-party code provided by https://github.com/yjxiong:
git clone --recursive https://github.com/yjxiong/dense_flow
cd dense_flow
mkdir build && cd build
cmake .. && make -j
./extract_gpu -f test.avi -x tmp/flow_x -y tmp/flow_y -i tmp/image -b 20 -t 1 -d 0 -s 1 -o dir
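To process a whole scenario rather than a single test.avi, the extractor can be driven from a loop. Below is a dry-run sketch that prints one extract_gpu command per video; the Scenario1/*.avi layout and the two placeholder clips are assumptions, so adjust the paths to wherever your extracted videos actually live:

```shell
# Print (not execute) one dense_flow invocation per video so the
# commands can be inspected first. The two empty .avi files exist
# only so the loop has something to iterate over in this sketch.
mkdir -p Scenario1
touch Scenario1/clip_001.avi Scenario1/clip_002.avi
for video in Scenario1/*.avi; do
  name=$(basename "$video" .avi)
  mkdir -p "flow/$name"
  echo "./extract_gpu -f $video -x flow/$name/flow_x -y flow/$name/flow_y -i flow/$name/image -b 20 -t 1 -d 0 -s 1 -o dir"
done > flow_cmds.sh
cat flow_cmds.sh
```

After checking the generated commands, run them for real with `sh flow_cmds.sh` from the dense_flow build directory.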
Note: All the experiments in the paper are based on VIENA2-v1 (available for download above). The dataset will be extended with more videos of the same scenarios and classes, to be released as VIENA2-v2.
If you use our dataset, please cite
@inproceedings{aliakbarian2018viena2,
title={VIENA2: A Driving Anticipation Dataset},
author={Aliakbarian, Mohammad Sadegh and Saleh, Fatemeh Sadat and Salzmann, Mathieu and Fernando, Basura and Petersson, Lars and Andersson, Lars},
booktitle={Asian Conference on Computer Vision (ACCV)},
year={2018}
}
Contact
For any question regarding the dataset, please contact sadegh.aliakbarian@anu.edu.au