Synthetic 3D Lanes Dataset

Synthetic-3D-lanes is a dataset designed to train and validate methods for detecting lanes and center-lines in full 3D. It is a procedurally generated, randomized dataset of highway scenes with large variability in road geometry and elevation. It is fully described in our ICCV'19 paper: 3D-LaneNet: End-to-End 3D Multiple Lane Detection.

Downloading the dataset. Download the dataset using this link. To download the dataset you will need a Microsoft Live account; request access for your account's email address by mailing dan.levi@gm.com

Dataset Details. See also README.txt at the download link.

There are two distinct datasets:

  1. Folder "paper_db" contains the synthetic data used for the ablation study described in the paper: 300K training images, 770 validation images, and 5K test images.

  2. Folder "with_ego_car_pos_variance" contains data in which the ego car's position and orientation are varied (not necessarily centered on or aligned with the lane): 117K training images and 1K validation images.


Each image has a corresponding annotation file containing 3D landmark points (meters), 3D path points (meters), camera height (meters), camera pitch (degrees), and the camera intrinsic parameters.
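As a rough illustration of how such an annotation might be consumed, here is a minimal sketch. The on-disk format and field names are not specified here, so the JSON layout and key names below (`lane_points_3d`, `path_points_3d`, etc.) are purely hypothetical stand-ins for the fields listed above:

```python
import json

# Hypothetical annotation layout; the real files may use a different format
# and different field names. Units follow the dataset description: points,
# camera height in meters; camera pitch in degrees.
sample = json.loads("""
{
  "lane_points_3d": [[[-1.8, 5.0, 0.0], [-1.8, 50.0, 0.5]]],
  "path_points_3d": [[[0.0, 5.0, 0.0], [0.0, 50.0, 0.5]]],
  "camera_height": 1.5,
  "camera_pitch": 2.0,
  "intrinsics": [[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]]
}
""")

# Each lane is a polyline of 3D points in the ego frame
# (x-rightward, y-forward, z-upward).
for lane in sample["lane_points_3d"]:
    for x, y, z in lane:
        print(f"landmark at x={x:.1f} m, y={y:.1f} m, z={z:.1f} m")
```

See display_lanes_data.py in the dataset for the authoritative reading code.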


Coordinates are defined as x-rightward, y-forward, z-upward, with the camera intrinsic matrix defined accordingly.
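To make the convention concrete, the sketch below projects a 3D road point to pixel coordinates under a pinhole model with y as the depth axis. The focal lengths and principal point are made-up example values; the real intrinsics come from each image's annotation file:

```python
import numpy as np

# Hypothetical intrinsics for illustration only; read the actual values
# from the annotation files.
fx, fy = 1000.0, 1000.0   # focal lengths in pixels (assumed)
cx, cy = 640.0, 360.0     # principal point in pixels (assumed)

def project(point_xyz, camera_height, pitch_deg):
    """Project a 3D point (meters, ego frame: x-right, y-forward, z-up)
    to pixel coordinates. A minimal sketch: shift the origin up to the
    camera, rotate by the camera pitch about the x-axis, then apply the
    pinhole model with y as depth."""
    x, y, z = point_xyz
    z = z - camera_height                    # origin at the camera center
    t = np.deg2rad(pitch_deg)
    # Pitch rotation about the x (rightward) axis.
    y_c = np.cos(t) * y + np.sin(t) * z
    z_c = -np.sin(t) * y + np.cos(t) * z
    # y_c is the depth; image v grows downward, so z_c flips sign.
    u = fx * x / y_c + cx
    v = fy * (-z_c) / y_c + cy
    return u, v

# A point 20 m ahead at camera height projects to the principal point
# when the pitch is zero.
u, v = project((0.0, 20.0, 1.5), camera_height=1.5, pitch_deg=0.0)
print(u, v)
```

The exact handedness of the pitch rotation is an assumption here; verify it against display_lanes_data.py before relying on it.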


See display_lanes_data.py for an example of reading and displaying data.

If you find this dataset useful in your research, please cite our paper:

@InProceedings{Garnett_2019_ICCV,
  author    = {Garnett, Noa and Cohen, Rafi and Pe'er, Tomer and Lahav, Roee and Levi, Dan},
  title     = {3D-LaneNet: End-to-End 3D Multiple Lane Detection},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}