Benchmark
The data can be downloaded here: Dropbox
Our 6D object pose estimation benchmark is evaluated on 32 image sequences of the test set, whose annotations are held out. Information on the train-validation-test splits can be found in the split/ directory of the downloaded data after extracting the tar archives.
To obtain performance on the test set, please submit your results to the StereOBJ-1M challenge on EvalAI. For submission instructions, please refer to the Submission section.
Evaluation
The main evaluation metric we use is ADD(-S). When computing the ADD distance, we transform the model point set by the predicted and the ground truth poses respectively, and compute the mean 3D Euclidean distance between the two transformed point sets. For symmetric objects, ADD-S is used instead: the distance is computed as the average, over each point in one transformed point set, of its closest distance to the other transformed point set.
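The following sketch illustrates how the ADD and ADD-S distances can be computed with NumPy/SciPy, assuming the model points are an (N, 3) array and each pose is given as a rotation matrix R and translation t; the function names are illustrative and this is not the official evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def transform_points(points, R, t):
    """Apply a rigid transform (R, t) to an (N, 3) point set."""
    return points @ R.T + t

def add_distance(points, R_pred, t_pred, R_gt, t_gt):
    """ADD: mean distance between correspondingly transformed model points."""
    pred = transform_points(points, R_pred, t_pred)
    gt = transform_points(points, R_gt, t_gt)
    return np.linalg.norm(pred - gt, axis=1).mean()

def adds_distance(points, R_pred, t_pred, R_gt, t_gt):
    """ADD-S: mean closest-point distance, used for symmetric objects."""
    pred = transform_points(points, R_pred, t_pred)
    gt = transform_points(points, R_gt, t_gt)
    nn_dists, _ = cKDTree(gt).query(pred, k=1)
    return nn_dists.mean()
```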
We use the following two evaluation metrics. (1) ADD(-S) accuracy: the proportion of correct pose predictions, where a pose prediction is considered correct if its ADD(-S) distance is less than 10% of the model's diameter. (2) ADD(-S) AUC: the area under the ADD(-S) accuracy-threshold curve, where the maximum threshold is set to 10 cm.
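As a rough sketch (assuming distances and diameters in meters), the two metrics could be computed from per-prediction ADD(-S) distances as follows; the exact thresholding and integration details of the official script may differ, so please use the released evaluation code for reporting results.

```python
import numpy as np

def add_s_accuracy(distances, diameter):
    """Fraction of poses whose ADD(-S) distance is below 10% of the model diameter."""
    distances = np.asarray(distances)
    return float(np.mean(distances < 0.1 * diameter))

def add_s_auc(distances, max_threshold=0.10):
    """Area under the accuracy-vs-threshold curve for thresholds from 0 to 10 cm,
    normalized so that a perfect result scores 1.0 (distances in meters)."""
    distances = np.asarray(distances)
    thresholds = np.linspace(0.0, max_threshold, 1000)
    accuracies = (distances[None, :] < thresholds[:, None]).mean(axis=1)
    return float(np.trapz(accuracies, thresholds) / max_threshold)
```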
Please refer to our paper or our code for more details about the evaluation metrics.
Leaderboard
The leaderboard for our challenge is here.