Track 1: Generic Event Boundary Detection Challenge

  • This track aims to encourage participants to advance the state of the art (SOTA) in GEBD systems.

  • The competition is based on our Kinetics-GEBD test set only.

  • The top 3 winners will be announced at the workshop and formally recognized.

For more details, please refer to our Challenge White Paper. For any questions about CodaLab, please post in its forum.

GEBD Datasets & Annotation Overview

We repeat these cognitive experiments on the following mainstream CV datasets, using our novel annotation guideline, which addresses the complexities of taxonomy-free event boundary annotation.

Kinetics-GEBD

    • Our Kinetics-GEBD Train Set contains 20K videos randomly selected from the Kinetics-400 Train Set. Our Kinetics-GEBD Test Set contains another 20K videos randomly selected from the Kinetics-400 Train Set. The Kinetics-GEBD Val Set contains all 20K videos in the Kinetics-400 Val Set.

    • The Kinetics-400 Dataset can be downloaded from here.

    • The Kinetics-GEBD annotations (Train Set/Val Set) can be downloaded from here.

    • Video list for Kinetics-GEBD Test Set can be found here.

    • Note that some of the videos in the Kinetics-GEBD Train Set and Val Set are no longer available, but all test videos are available as of March 2021.

Evaluation Protocol

  • We use Relative Distance (Rel.Dis) to determine the correctness of each prediction. Rel.Dis is the error between the detected and ground-truth timestamps, divided by the length of the whole video. Given a fixed Rel.Dis threshold, a detection is correct if its Rel.Dis is <= the threshold and incorrect otherwise; precision, recall, and F1 score are then computed on the whole dataset (a minimal sketch of this computation is given after this list).

  • Note that for each video, we have multiple raters who make annotations independently. We (1) compare the detection result with each rater’s annotation and (2) select, as the ground truth for this video, the rater’s annotation that leads to the best F1 score among all raters.

  • The official metric for this task is F1@5%, which is defined as the F1 score computed with a Rel.Dis threshold of 5%.

  • In this competition we evaluate performance on the Kinetics-GEBD Test Set; the video list can be found here.

  • We have provided starter baseline code for Track 1 on GitHub.
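
As an illustration only (not the official scorer), below is a minimal Python sketch of the per-video scoring described above. It assumes the prediction and each rater's annotation are lists of boundary timestamps in seconds, and that the video duration in seconds is known; function and variable names are illustrative. The starter code on GitHub remains the authoritative implementation for leaderboard scoring.

def f1_at_rel_dis(pred, gt, duration, threshold=0.05):
    # A prediction matches an unmatched ground-truth boundary if its
    # Relative Distance, |pred - gt| / duration, is <= the threshold.
    if not pred or not gt:
        return 0.0
    matched, tp = set(), 0
    for p in pred:
        for i, g in enumerate(gt):
            if i not in matched and abs(p - g) / duration <= threshold:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(pred)
    recall = tp / len(gt)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def best_f1_over_raters(pred, rater_annotations, duration, threshold=0.05):
    # Treat the rater whose annotation yields the best F1 as the ground
    # truth for this video, as described in the protocol above.
    return max(f1_at_rel_dis(pred, gt, duration, threshold)
               for gt in rater_annotations)

# Example (illustrative values): F1@5% for a 10-second video with two raters.
#   score = best_f1_over_raters([5.9, 9.4], [[6.0, 9.5], [5.5]], duration=10.0)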

Baselines

Table: Performance on Kinetics-GEBD for various supervised and unsupervised GEBD methods.

Submission Format

To submit your results to the leaderboard, you must construct a submission zip file containing two files, submit_val.pkl and submit_test.pkl, for the validation and test data, respectively. Use the following command to generate the submission file.

zip -r test_submit.zip submit_val.pkl submit_test.pkl

Each pickle file contains a dictionary whose keys are video names and whose values are lists of boundary timestamps. For example,

{'6Tz5xfnFl4c': [5.9, 9.4], 'zJki61RMxcg': [0.1, 0.4, 0.6, 1.5, 2.7]}.
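
As a rough guide, the two pickle files can be produced and packaged as follows; the video IDs and boundary values are copied from the example above and are illustrative only — replace them with your detected boundaries for every video in the val and test lists.

import pickle

# Map each video name to its list of detected boundary timestamps.
val_results = {'6Tz5xfnFl4c': [5.9, 9.4]}
test_results = {'zJki61RMxcg': [0.1, 0.4, 0.6, 1.5, 2.7]}

with open('submit_val.pkl', 'wb') as f:
    pickle.dump(val_results, f)
with open('submit_test.pkl', 'wb') as f:
    pickle.dump(test_results, f)

# Then package both files for upload:
#   zip -r test_submit.zip submit_val.pkl submit_test.pkl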

If you have a question about the submission format or are still having problems with your submission, please create a topic in the competition forum (rather than contacting the organizers directly by e-mail), and we will answer it as soon as possible.

Submission Policy

There are two phases for the challenge:

  • Development phase (corresponding to the validation split): launched on April 5th. It is used to validate your method and tune hyper-parameters. Submissions to this phase will not be taken into account for the challenge. Submission limits: 100 submissions in total, no more than 2 per day.

  • Final phase (corresponding to the test split): launched on May 1st. Scores from the development phase will not be automatically carried over, so you must re-submit your solution to the final phase if you want to appear on the final leaderboard. Submission limits: 10 submissions in total, no more than 1 per day.

Report Format


  • Use CVPR style (double column, 3-6 pages) or NeurIPS style (single column, 6-10 pages), inclusive of any references. Please explain clearly what data, supervision, and pre-trained models you have used so that we can make sure your results are comparable to others'.

  • Please include your GitHub link in the report. The top 2 winners are required to release their codebases and final models so that others can reproduce their results in the future. Please contact us if you have any questions.



Report Submission Portal


For report submission, please send an email to loveu.cvpr22@gmail.com.

  • Format of email subject: “YourName-Submission-LOVEU22-Track1”;

  • Attach your technical report and other relevant materials to the email;

  • Include your CodaLab account (registered email) and username for our challenge in the email, along with meta information such as team members and institution.

For more details, please refer to our Challenge White Paper.


Timeline

  • April, 2022 (11:59PM Pacific Time): evaluation server open for the val set.

  • May 05, 2022 (11:59PM Pacific Time): evaluation server open for the test set, with leaderboard available.

  • Jun 01, 2022 (11:59PM Pacific Time): evaluation server close.

  • Jun 08, 2022 (11:59PM Pacific Time): report submission due.

Communication & QA