Challenge
OmniLabel benchmark: a novel dataset with complex, free-form text descriptions of objects. Check out https://www.omnilabel.org and https://arxiv.org/abs/2304.11463 for details.
Train on public datasets and evaluate on our benchmark! We define three tracks - A, B, and C - which differ in the allowed training data.
Both the validation and test sets are available (v0.1.3). Use our evaluation server to test your model on both sets and participate in the challenge, or use our toolkit to evaluate a model on the validation set yourself.
$10,000 in prize money will be distributed among the participants of the challenge!
Results
Please visit https://www.omnilabel.org/dataset/challenge-2023
Evaluation servers
Track A: https://codalab.lisn.upsaclay.fr/competitions/11868
Track B: https://codalab.lisn.upsaclay.fr/competitions/11870
Track C: https://codalab.lisn.upsaclay.fr/competitions/11871
Check out the definition of each track at https://www.omnilabel.org/task
Timeline
02/07/2023 - Public release of benchmark data (validation set and evaluation toolkit)
03/28/2023 - Evaluation server goes online with the validation set
05/03/2023 - Test set released; evaluation server accepts submissions on the test set
05/26/2023 - Challenge closes
05/31/2023 - Deadline for submitting report (all challenge participants are required to provide a brief report about their method)
06/02/2023 - Challenge winners will be informed
06/18/2023 - Workshop at CVPR 2023 (half-day, morning session)
06/30/2023 - Benchmark opens to the public (results can be submitted without participating in the challenge)