Task 1
Can I use pre-trained model parameters?
Yes, as long as they are open-source and you fine-tune solely on the provided data (i.e. the 3 provided videos).
Note that, in case of victory, you will be required to submit your code together with the link to the open-source pre-trained model parameters. Furthermore, if, at code inspection, an entry is found to be using the annotations from the remaining videos (at any stage of the training pipeline, including in the open-source pre-trained model), the entry will be disqualified.
Are pre-trained weights available for Task 1?
Yes, from the 10th of August, weights of baseline models (i.e. models that did not use the logical requirements during training) are available for both tasks upon request here.
Where can I find the test set videos?
From the 15th of August, the test set videos can be downloaded from this link.
My submission file is larger than 300MB, can I still submit it?
To submit larger files, participants are recommended to use the EvalAI CLI. For example:
evalai challenge 2081 phase 4 submit --file submission.json --large
See this website for more information.
Can I use the remaining videos in the training dataset to evaluate on the validation set?
No, in this task you should work under the assumption that the annotations for the remaining videos do not exist. However, you can use those videos as unlabelled data to train your model in a semi-supervised fashion.
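As a purely illustrative, non-official sketch, one common semi-supervised approach is pseudo-labelling: train on the annotated videos while adding the model's own high-confidence predictions on the unlabelled videos as extra targets. Everything below (the model, the data loaders, the confidence threshold, and the loss weighting) is a placeholder assumption, not part of the challenge kit.

import torch
import torch.nn.functional as F

# Hypothetical pseudo-labelling step: `model`, the loaders, and all
# hyper-parameters are placeholders chosen for illustration only.
def train_step(model, labelled_loader, unlabelled_loader, optimizer,
               threshold=0.9, unlabelled_weight=0.5):
    model.train()
    for (x_l, y_l), x_u in zip(labelled_loader, unlabelled_loader):
        # Supervised loss on frames from the 3 annotated videos.
        loss = F.cross_entropy(model(x_l), y_l)

        # Derive pseudo-labels from the model's confident predictions
        # on the remaining videos; no real annotations are used here.
        with torch.no_grad():
            probs = F.softmax(model(x_u), dim=1)
            confidence, pseudo_labels = probs.max(dim=1)
        mask = confidence > threshold
        if mask.any():
            # Add an unsupervised term on the confident frames only.
            loss = loss + unlabelled_weight * F.cross_entropy(
                model(x_u[mask]), pseudo_labels[mask])

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Other semi-supervised techniques (e.g. consistency regularisation) are equally admissible; the only constraint is that no annotations from the remaining videos enter the pipeline.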
Task 2
Are pre-trained weights available for Task 2?
Yes, from the 10th of August, weights of baseline models (i.e. models that did not use the logical requirements during training) are available for both tasks upon request here.
Where can I find the test set videos?
From the 15th of August, the test set videos can be downloaded from this link.
My submission file is larger than 300MB, can I still submit it?
To submit larger files, participants are recommended to use the EvalAI CLI. For example:
evalai challenge 2081 phase 4 submit --file submission.json --large
See this website for more information.