Challenges

The workshop consists of three challenges. The winner and runner-up of each challenge will receive a monetary prize (the exact prizes will be announced soon). The challenges are:

  1. Outdoor visual localization

  2. Indoor visual localization

  3. Place recognition / image retrieval

The goal of the first two challenges is to develop full visual localization pipelines that take an image as input and estimate its camera pose with respect to the scene. The challenges differ in the scenes they consider and thus in the assumptions that the competing algorithms can make.

In case participants want to focus on individual parts of a localization pipeline, e.g., local features or global image-level descriptors, we provide a modular pipeline for structure-based visual localization that can easily be adapted. Example scripts (ready to use with a pre-built Docker container) for Aachen Day-Night v1.1, Gangnam Station, Hyundai Department Store, ETH-Microsoft, and RIO10 are available here.
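
The core geometric step behind such a structure-based pipeline is estimating the camera pose from 2D-3D matches between query keypoints and scene points. The sketch below is only a minimal illustration of that step, assuming the 2D-3D correspondences and camera intrinsics are already given (e.g., obtained via image retrieval and local feature matching); it uses OpenCV's PnP + RANSAC solver purely as an example, not as the prescribed implementation of the provided pipeline.

    import numpy as np
    import cv2

    def estimate_pose(points_2d, points_3d, K, reproj_err_px=8.0):
        """Estimate a camera pose from 2D-3D correspondences with PnP + RANSAC.

        points_2d: (N, 2) keypoint locations in the query image.
        points_3d: (N, 3) corresponding 3D scene points.
        K:         (3, 3) intrinsic matrix of the query camera.
        Returns (R, t) mapping world to camera coordinates plus the inlier indices, or None.
        """
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            points_3d.astype(np.float64),
            points_2d.astype(np.float64),
            K.astype(np.float64),
            distCoeffs=None,
            reprojectionError=reproj_err_px,
            iterationsCount=10000,
        )
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)  # axis-angle -> rotation matrix
        return R, tvec.reshape(3), inliers.ravel()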

In contrast to the first two challenges, which focus on precise camera pose estimation, the place recognition challenge benchmarks the effectiveness of the image retrieval / place recognition stage employed in many modern visual localization algorithms. The task for this challenge is to determine the place depicted in a given test image by retrieving an image taken from a nearby viewpoint from a large database of geo-tagged images.
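
To make the retrieval task concrete, the following toy sketch shows the basic mechanism shared by most place recognition approaches: every image is encoded by a global descriptor, and recognition reduces to returning the database images with the most similar descriptors. The random descriptors and dimensions used here are placeholders, not a prescribed method.

    import numpy as np

    def retrieve_top_k(query_desc, db_descs, k=10):
        """Return indices of the k database images whose global descriptors best match the query.

        query_desc: (D,) L2-normalized global descriptor of the query image.
        db_descs:   (N, D) L2-normalized global descriptors of the geo-tagged database images.
        """
        # With L2-normalized descriptors, ranking by dot-product similarity is equivalent
        # to ranking by Euclidean distance.
        similarities = db_descs @ query_desc
        return np.argsort(-similarities)[:k]

    # Toy example: 1000 database images with 256-D descriptors; the query is a noisy copy of image 42.
    rng = np.random.default_rng(0)
    db_descs = rng.normal(size=(1000, 256))
    db_descs /= np.linalg.norm(db_descs, axis=1, keepdims=True)
    query_desc = db_descs[42] + 0.05 * rng.normal(size=256)
    query_desc /= np.linalg.norm(query_desc)
    print(retrieve_top_k(query_desc, db_descs, k=5))  # image 42 should be ranked first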

Please note that there are significant changes compared to previous iterations of this workshop. Please read the following carefully.

Leaderboard (last update: Oct. 8th, 10:13 CEST)

Outdoor visual localization

Outdoor Localization Challenge

Indoor visual localization

For RIO10, the DCRE (0.05) and DCRE (0.15) metrics are used.

Indoor Localization Challenge

Place Recognition / Image Retrieval

Please see https://competitions.codalab.org/competitions/34623

Datasets

The following datasets will be used for the outdoor visual localization challenge:

The following datasets will be used for the indoor visual localization challenge:

The following dataset will be used for the place recognition / image retrieval challenge:


We plan to provide all datasets in the kapture format (https://github.com/naver/kapture). The datasets marked with (*) can be downloaded via the kapture dataset downloader. The other datasets are available through the websites linked above.
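
As a minimal sketch of reading kapture-formatted data (assuming the kapture Python package is installed via pip; the dataset path below is a placeholder, not an official download location), a dataset directory can be loaded and inspected roughly as follows:

    from kapture.io.csv import kapture_from_dir

    # Placeholder path to one split of a kapture-formatted dataset.
    kdata = kapture_from_dir("path/to/some_kapture_dataset/mapping")

    # Camera / sensor definitions.
    if kdata.sensors is not None:
        print("number of sensors:", len(kdata.sensors))

    # Image records, stored as a mapping from timestamp and sensor id to image file name.
    if kdata.records_camera is not None:
        print("number of timestamps with images:", len(kdata.records_camera))

    # Camera poses (e.g., for the mapping images), if available for this split.
    if kdata.trajectories is not None:
        print("number of posed timestamps:", len(kdata.trajectories))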

We will announce the availability of datasets to registered participants per email.

Evaluation

For the indoor and outdoor localization challenges, we evaluate the pose accuracy of a method. To this end, we follow [Sattler et al., Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions, CVPR 2018] and define a set of thresholds on the position and orientation errors of the estimated pose. For each (X meters, Y degrees) threshold, we report the percentage of query images localized within X meters and Y degrees of the ground truth pose. This evaluation metric will be used for the Aachen, RobotCar, SILDa, Gangnam Station, Department Store, and SimLocMatch datasets. For the RIO10 dataset, we use the evaluation metrics from [Wald et al., Beyond Controlled Environments: 3D Camera Re-Localization in Changing Indoor Scenes, ECCV 2020] based on densely measured reprojection errors.
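
For illustration, the following sketch shows how such pose-recall numbers can be computed (this is not the official evaluation code, and the threshold values and pose conventions below are assumptions): the position error is the Euclidean distance between the estimated and ground-truth camera centers, and the orientation error is the angle of the relative rotation.

    import numpy as np

    def rotation_error_deg(R_est, R_gt):
        """Angle (in degrees) of the relative rotation between two 3x3 rotation matrices."""
        cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    def pose_recall(poses_est, poses_gt, thresholds):
        """Percentage of queries localized within each (meters, degrees) threshold.

        poses_est / poses_gt: dict image name -> (R: 3x3 rotation, c: camera center in world coords).
        Queries without an estimate count as failures.
        """
        counts = {thr: 0 for thr in thresholds}
        for name, (R_gt, c_gt) in poses_gt.items():
            if name not in poses_est:
                continue
            R_est, c_est = poses_est[name]
            pos_err = np.linalg.norm(np.asarray(c_est) - np.asarray(c_gt))
            rot_err = rotation_error_deg(R_est, R_gt)
            for max_m, max_deg in thresholds:
                if pos_err <= max_m and rot_err <= max_deg:
                    counts[(max_m, max_deg)] += 1
        return {thr: 100.0 * n / len(poses_gt) for thr, n in counts.items()}

    # Example thresholds in the spirit of the benchmark (the actual values differ per dataset).
    example_thresholds = [(0.25, 2.0), (0.5, 5.0), (5.0, 10.0)]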

For ranking the methods over all datasets and metrics, we follow the Robust Vision Challenge from CVPR 2018: each metric for a dataset provides a partial ranking of the methods, e.g., a ranking based on the percentage of query images localized within 0.25m and 2 degrees for the Aachen nighttime queries. We then use the Schulze Proportional Ranking method from [Markus Schulze, A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method, Social Choice and Welfare 2011] to obtain a global ranking over all datasets and metrics that is as consistent as possible with the partial rankings. Note that this approach allows researchers to participate in a challenge without submitting results for each dataset: if the results of a method are not available for a dataset, the comparison will assume that it performs worse than any method for which results are available.
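
The following sketch illustrates the spirit of this aggregation (a simplified Schulze-style ordering, not the organizers' exact implementation; the method names and rankings are hypothetical): pairwise preference counts are accumulated from each per-metric ranking, strongest-path strengths are computed, and methods are ordered by their number of pairwise wins. A method missing from a partial ranking is treated as worse than every method present in it, matching the rule above.

    from itertools import permutations

    def schulze_ranking(partial_rankings, methods):
        """Aggregate partial rankings (each a list of methods, best first) into one global order."""
        # d[a][b]: number of partial rankings in which method a is preferred over method b.
        d = {a: {b: 0 for b in methods} for a in methods}
        for ranking in partial_rankings:
            pos = {m: i for i, m in enumerate(ranking)}
            for a, b in permutations(methods, 2):
                if a in pos and (b not in pos or pos[a] < pos[b]):
                    d[a][b] += 1

        # p[a][b]: strength of the strongest path from a to b (Floyd-Warshall style).
        p = {a: {b: d[a][b] if d[a][b] > d[b][a] else 0 for b in methods} for a in methods}
        for c in methods:
            for a in methods:
                for b in methods:
                    if len({a, b, c}) == 3:
                        p[a][b] = max(p[a][b], min(p[a][c], p[c][b]))

        # Order methods by the number of pairwise strongest-path wins.
        wins = {a: sum(p[a][b] > p[b][a] for b in methods if b != a) for a in methods}
        return sorted(methods, key=lambda m: -wins[m])

    # Hypothetical example: two metrics rank three methods; "C" has no result for the second metric.
    print(schulze_ranking([["A", "B", "C"], ["B", "A"]], ["A", "B", "C"]))
    # -> ['A', 'B', 'C'] (A and B tie; C, missing from the second ranking, comes last)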

For the place recognition challenge, we will evaluate image retrieval methods on the MSLS dataset. Methods will be ranked based on the Recall@k, i.e., the percentage of query images for which at least one relevant reference image is contained in the top-k retrieved images, for varying values of k. The Recall@k metric is a standard metric in the area of place recognition. Again, we will use the Schulze method to obtain a globally consistent ranking of all methods for all values of k.
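
For illustration, the snippet below computes Recall@k from retrieval results, assuming that each query comes with a set of reference images considered relevant (i.e., taken from nearby viewpoints); the exact definition of relevance and the values of k used in the official MSLS evaluation may differ.

    def recall_at_k(retrieved, relevant, ks=(1, 5, 10)):
        """Recall@k: percentage of queries with at least one relevant image among the top-k retrievals.

        retrieved: dict query id -> ranked list of database image ids (best first).
        relevant:  dict query id -> set of database image ids considered correct for that query.
        """
        recalls = {}
        for k in ks:
            hits = sum(
                1 for q, ranked in retrieved.items()
                if relevant.get(q, set()) & set(ranked[:k])
            )
            recalls[k] = 100.0 * hits / len(retrieved)
        return recalls

    # Hypothetical toy example with two queries.
    retrieved = {"q1": ["db3", "db7", "db1"], "q2": ["db2", "db5", "db9"]}
    relevant = {"q1": {"db1"}, "q2": {"db4"}}
    print(recall_at_k(retrieved, relevant, ks=(1, 3)))  # {1: 0.0, 3: 50.0}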

Participation

In previous iterations of the workshop, all challenge submissions were evaluated at https://visuallocalization.net/ . This year, different evaluation services will be used:

A leaderboard for approaches participating in the challenges will be made available on this website. The leaderboard will be updated asynchronously with respect to submissions to the individual benchmarks, but at least once a week.

In order to participate, you will need to:

  • Choose a name for your method for the challenge you are submitting to. The same name has to be used when submitting results for each dataset; we will use it to link results across datasets.

  • Register using this form: https://forms.gle/V7t51GrPxrMPMKEg8 . Once you are registered, we will send you updates when datasets become available or if anything changes.

  • Submit your results for the individual datasets. Note that only results that are publicly visible on the benchmark websites will be used for the leaderboard.

By submitting to the challenges, you agree to adhere to the following rules.

Rules

  • By submitting to a workshop challenge, you agree to give a talk about your method at the workshop if your method is a winner or runner-up in one of the challenges. The talk will be recorded and made publicly available.

  • If you are an author of a paper related to the challenges, we strongly encourage you to evaluate your method on the challenge datasets and to submit your results to one or more of the challenges. If you already have results on some of the datasets, we strongly encourage you to also submit your results to the challenges. Besides novel work on the topic, we also encourage the following types of submissions:

    • Combinations of existing methods, e.g., using SuperPoint features in a localization approach implemented in COLMAP, a state-of-the-art feature matching algorithm in combination with local features such as SuperPoint or D2-Net, or exchanging components of existing algorithms to boost performance.

    • Submissions showing that existing methods can outperform methods with results published on the benchmarks, e.g., by carefully tuning parameters or using a different training dataset.

    • Combining existing methods with pre- or post-processing approaches, e.g., using histogram equalization on the input images, building better 3D models (for example through model compression or the use of dense 3D models), or integrating an existing localization algorithm into a (visual) odometry / SLAM system.

    • Using matches obtained by an existing method for multi-image localization.

    • Showing that existing methods work well on our challenges, even though the community believes that they do not work.

  • We will not consider methods of the following type: reproducing results already published on one of the benchmark websites by running someone else's code out of the box (if you are not a co-author of the underlying method) or by using your own re-implementation. However, re-implementations that outperform the existing implementation are explicitly encouraged.

  • Using additional data, e.g., for training, is explicitly permitted. For example, one could use other nighttime images from the RobotCar dataset (not used in the RobotCar Seasons dataset) to train descriptors. Training on the test images is explicitly forbidden. You will need to explicitly specify which data was used for training.

  • One member (or representative) of the winner and runner-up teams of each challenge needs to attend the workshop and give a talk about their approach.

  • Each team can update its challenge results until the deadline.

  • We explicitly encourage participation from industry.

Deadlines

  • Challenge submission opens: July 30th

  • Challenge submission deadline: October 7th

  • Notification: October 11th