All 3D datasets and annotations share a common format, and all submissions must follow the annotation format described here.
The training datasets were released on May 15, 2017, and the test datasets on May 29, 2017.
The goal is to identify the major symmetries of each object. For the global symmetry dataset, the reflections hold for the whole scene, while for the local symmetries the symmetry support is a region of limited size. A secondary goal is to estimate the symmetry support region in the form of an object-aligned bounding box.
Note that the symmetries may not always be perfect, and we allow for small deviations; for example, the exhaust pipe and the steering wheel of a car sit on one side only, while all other components are perfectly symmetric.
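To make the notion of "small deviations" concrete, the sketch below scores a candidate reflective symmetry of a point cloud by mirroring the points across a plane and measuring the mean nearest-neighbor distance to the original cloud. This is an illustrative NumPy example only; the function names, the plane parameterization (point plus normal), and the deviation measure are our own assumptions, not the dataset's annotation format or the official evaluation metric.

```python
import numpy as np

def reflect_points(points, plane_point, plane_normal):
    # Reflect each 3D point across the plane given by a point on it
    # and a (not necessarily unit) normal vector.
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n

def symmetry_deviation(points, plane_point, plane_normal):
    # Mean nearest-neighbor distance between the cloud and its mirror
    # image; close to zero for a (nearly) perfect reflective symmetry.
    mirrored = reflect_points(points, plane_point, plane_normal)
    dists = np.linalg.norm(points[None, :, :] - mirrored[:, None, :], axis=2)
    return dists.min(axis=1).mean()

# Toy example: the corners of a cube centered at the origin are
# perfectly symmetric about the plane x = 0.
points = np.array([[x, y, z] for x in (-1.0, 1.0)
                             for y in (-1.0, 1.0)
                             for z in (-1.0, 1.0)])
score = symmetry_deviation(points, np.zeros(3), np.array([1.0, 0.0, 0.0]))
# score is 0.0 for this perfectly symmetric cube
```

A tolerance threshold on such a score is one simple way to accept symmetries that are only approximately satisfied, as with the car example above.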
For the local symmetry dataset, models are randomly selected and composed into scenes containing multiple objects.
Sungjoon Choi, Qian-Yi Zhou, Vladlen Koltun, "Robust Reconstruction of Indoor Scenes", CVPR 2015
Sungjoon Choi, Qian-Yi Zhou, Stephen Miller, Vladlen Koltun, "A Large Dataset of Object Scans", Technical Report, arXiv:1602.02481, 2016
Please cite the corresponding papers if you use these datasets.
If you use any of the labeled or generated local symmetry data, please also cite the corresponding workshop paper: