Call for Participants

Overview

Current state-of-the-art object segmentation models focus on solving the problem under the closed-world setting, which assumes that models are trained and evaluated on a predefined list of object categories. However, in many real-world applications, e.g., embodied AI or augmented reality assistants, models often encounter novel objects that they have never seen during training. Humans, on the other hand, can detect unfamiliar objects, such as novel musical instruments or unknown sports equipment, even with no previous knowledge of them. Despite such unfamiliarity, people have no problem perceiving them as distinct object instances.

Open-world segmentation aims to develop computer vision models that can detect and segment all objects appearing in images and videos regardless of their semantic concepts (known or unknown). We believe that open-world segmentation is a foundational task that can potentially enable long-video understanding and open-world visual reasoning.

We are presenting the first open-world video object segmentation challenge, which will be held at ICCV 2021. We warmly welcome your participation and hope that, together, we can make good research progress on this challenging yet important problem of open-world segmentation.

Leaderboard: Track One (Frame Track)

Leaderboard: Track Two (Video Track)


Data Sources

The competition at ICCV 2021 will be based on UVO v0.5, which contains a sparse subset (UVO-Sparse) and a dense subset (UVO-Dense).

Tracks and Sub-tracks

We provide two tracks, each with two sub-tracks.

  • Track 1: Image-based open-world segmentation

    • In this track, participants are expected to work on the UVO-Sparse set, which contains exhaustive annotations of objects in Kinetics frames at 1 fps. Evaluation is class-agnostic and based on per-frame (image) results. Participants may optionally use UVO-Dense for training if needed.

    • Sub-track 1.1: Participants may use only open-sourced, publicly accessible datasets and annotations during training, or models pre-trained on such datasets, subject to the External Data Policy below.

    • Sub-track 1.2: There are no restrictions on the data and annotations participants may use, subject to the External Data Policy below.

  • Track 2: Video-based open-world segmentation

    • In this track, participants are expected to work on the UVO-Dense set, which contains exhaustive annotations of objects in Kinetics videos. Unlike UVO-Sparse, objects in UVO-Dense are annotated at 30 fps and tracked over the entire clip. The evaluation metric requires correct tracking of object instances. Participants may optionally use UVO-Sparse for training if needed.

    • Sub-track 2.1: Participants may use only open-sourced, publicly accessible datasets and annotations during training, or models pre-trained on such datasets, subject to the External Data Policy below.

    • Sub-track 2.2: There are no restrictions on the data and annotations participants may use, subject to the External Data Policy below.
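Because both tracks are evaluated class-agnostically, predictions from a standard multi-class detector need their category labels collapsed into a single generic "object" class before submission. A minimal sketch in the standard COCO result format (a JSON list of dicts with `image_id`, `category_id`, `bbox` or `segmentation`, and `score`); the single category id of 1 used here is an assumption, so check the challenge's annotation files for the actual id:

```python
import json

def to_class_agnostic(results_path, out_path, object_id=1):
    """Collapse all predicted categories into one generic 'object' class.

    `results_path` points to a standard COCO-format detection result file.
    The target category id (default 1) is an assumption for illustration;
    verify it against the challenge's ground-truth annotation files.
    """
    with open(results_path) as f:
        results = json.load(f)
    for r in results:
        r["category_id"] = object_id
    with open(out_path, "w") as f:
        json.dump(results, f)
```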

Timeline

Evaluation Protocol

We adopt the COCO API for Track 1 and the YouTube-VOS variant of the COCO API for Track 2. We use Average Recall at 100 detections (AR@100) as the main evaluation criterion, and the leaderboard ranking will be based on this metric.
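The official numbers come from the COCO API with category labels ignored; for intuition, here is a simplified, self-contained sketch of class-agnostic AR@100 on boxes: take the top-100 detections by score, greedily match them to ground truth, and average recall over IoU thresholds 0.50:0.05:0.95. The function and field names are illustrative, not part of the official toolkit, and the real metric is computed on segmentation masks rather than boxes:

```python
def box_iou(a, b):
    """IoU between two boxes in COCO [x, y, w, h] format."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def average_recall_100(gts, dets):
    """Class-agnostic AR@100: recall of the top-100 detections
    (by score), averaged over IoU thresholds 0.50:0.05:0.95."""
    if not gts:
        return 0.0
    dets = sorted(dets, key=lambda d: -d["score"])[:100]
    iou_thrs = [0.5 + 0.05 * i for i in range(10)]
    recalls = []
    for t in iou_thrs:
        matched = set()
        for d in dets:  # greedy: highest-scoring detection matches first
            best_iou, best_j = t, None
            for j, g in enumerate(gts):
                iou = box_iou(d["bbox"], g["bbox"])
                if j not in matched and iou >= best_iou:
                    best_iou, best_j = iou, j
            if best_j is not None:
                matched.add(best_j)
        recalls.append(len(matched) / len(gts))
    return sum(recalls) / len(recalls)
```

Note that AR rewards finding every object at least once with no penalty for extra proposals beyond the top-100 cut-off, which is why it suits open-world evaluation where unknown objects must not be missed.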

Rules and Awards

To be eligible for awards, participants need to submit a technical report detailing their methodologies.

We will award the top-2 entries in each sub-track. Winners will be invited to present their work at our ICCV 2021 workshop. We will also award up to two Most Innovative methods, subject to review by the challenge organizers. Most Innovative award winners may or may not overlap with the top winners of each track. We reserve the right not to award any Most Innovative prize if no approach qualifies.

Top-1 winners for each track and Most Innovative winners will receive prizes in the form of cloud computing credits.

Evaluation Server

Evaluation Server is available on EvalAI.

Technical Report

Participants must submit a technical report documenting their approach and ablation studies (suggested length: 1-4 pages). The report will be made public. Only submissions with technical reports are eligible for awards.

The authors must follow the ICCV 2021 submission policy. Reports are limited to four pages, including figures and tables, in the ICCV camera-ready style (https://iccv2021.thecvf.com/node/4#submission-guidelines). Additional pages containing only cited references are allowed. Please refer to the ICCV 2021 website for more information. The submission deadline is October 13th, AoE. Please email the technical report to uvo.dataset@gmail.com.



External Data Policy

Any data, pre-trained models or software used by participants in the challenge must be publicly accessible and used pursuant to a permissive open source license, or another valid license that permits use of the data for purposes of participation in a prize competition and otherwise in accordance with the competition Official Rules. Participants may be required to certify in writing that they have permission for all external training materials used to develop their challenge submission.

NO PURCHASE NECESSARY TO ENTER/WIN. A PURCHASE WILL NOT INCREASE YOUR CHANCES OF WINNING. Ends October 3, 2021 at 23:59:59 AoE. Open to legal residents of the Territory, 18+ & age of majority. "Territory" means any country, state, or province where the laws of the US or local law do not prohibit participating or receiving a prize in the Challenge and excludes Cuba, Crimea, North Korea, Iran, Syria, Venezuela and any other jurisdiction or area designated by the United States Treasury's Office of Foreign Assets Control. Void outside the Territory and where prohibited by law. Participation subject to Official Rules. See Official Rules for entry requirements, judging criteria and full details. Winning entrants receive $1,000 USD cloud computing credit and invitation to attend & present at a virtual ICCV workshop, with specific date TBD. Sponsor: Facebook, Inc., 1 Hacker Way, Menlo Park, CA 94025 USA.