Submission Instructions:
Each team should share a public GitHub repository link that contains the following:
All runnable code for training and testing.
Augmented dataset (if used).
Trained model file.
An inference Python file (or function) that takes two inputs: the dataset path and the model path. You may use this inference code to test your model on your end with the shared validation data (see the next item). We will use the same inference function to validate your model on new test data, so make sure the file follows this two-input format and is in a runnable state. A minimal sketch of the expected interface appears after the note below.
Inference results on the validation dataset. Run the inference code above to generate the mAP score on the validation set. The code should also produce a CSV file listing the class label and bounding-box coordinates for each validation image, in a format identical to the ground-truth annotation file.
A working-note document of at most four pages, covering all details of your data preprocessing, model architecture, training, validation, and post-processing. Please prepare this document carefully: we will validate your work largely on how well your code and this document align. No specific format is mandatory for the note; however, you may use the conference format for your submission.
Note: Any missing item from the above list may affect your position on the leaderboard.
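For clarity, here is a minimal sketch of the expected inference interface. The file name (inference.py), the output name predictions.csv, and the helpers load_model, list_images, predict, and compute_map are placeholders for illustration only; substitute your own implementations.

    # inference.py -- minimal sketch of the required interface (not a
    # reference implementation). load_model, list_images, predict, and
    # compute_map are hypothetical helpers you must supply yourself.
    import csv
    import sys

    def run_inference(dataset_path, model_path):
        """Detect objects in every image under dataset_path with the
        model stored at model_path, print the mAP score, and write
        predictions.csv in the ground-truth annotation format."""
        model = load_model(model_path)                  # your model loader
        rows = []
        for image_path in list_images(dataset_path):    # your image lister
            for label, (x1, y1, x2, y2) in predict(model, image_path):
                rows.append([image_path, label, x1, y1, x2, y2])
        with open("predictions.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)
        print("mAP:", compute_map(rows, dataset_path))  # your mAP routine

    if __name__ == "__main__":
        run_inference(sys.argv[1], sys.argv[2])

Keeping the two-argument signature exactly as above lets the organizers point the same function at the private test data without editing your code.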
Please submit your GitHub repository using the link below.
Evaluation Criteria:
The leaderboard for this challenge will be prepared after all submissions are received, following a multi-level approach:
At the initial level of filtering, we will check the mAP score on the released validation dataset. Make sure you generate this score using only the inference Python file you share.
We will then test your model on a held-out private test dataset, generating the mAP score with your inference Python file and our own evaluation code; the higher your mAP score, the higher your leaderboard position. (A sketch of a common mAP computation appears after this list.)
Finally, we will review your training parameters, model parameters and complexity, code flow, and working-note document to make our final call.
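The guidelines above do not specify the exact mAP definition, so treat the following as an assumption: a common choice is PASCAL VOC-style average precision at an IoU threshold of 0.5, averaged over classes. The NumPy sketch below shows one way to compute per-class AP; it is illustrative only, not the organizers' evaluation code.

    import numpy as np

    def iou(box_a, box_b):
        """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def average_precision(preds, gts, iou_thresh=0.5):
        """AP for a single class.
        preds: list of (image_id, confidence, box) tuples.
        gts:   dict mapping image_id -> list of ground-truth boxes."""
        preds = sorted(preds, key=lambda p: -p[1])       # high confidence first
        matched = {img: [False] * len(b) for img, b in gts.items()}
        n_gt = sum(len(b) for b in gts.values())
        tp = np.zeros(len(preds))
        fp = np.zeros(len(preds))
        for i, (img, _, box) in enumerate(preds):
            best_iou, best_j = 0.0, -1
            for j, gt_box in enumerate(gts.get(img, [])):
                overlap = iou(box, gt_box)
                if overlap > best_iou:
                    best_iou, best_j = overlap, j
            if best_iou >= iou_thresh and not matched[img][best_j]:
                tp[i] = 1.0
                matched[img][best_j] = True              # each GT matched once
            else:
                fp[i] = 1.0
        tp, fp = np.cumsum(tp), np.cumsum(fp)
        recall = tp / max(n_gt, 1)
        precision = tp / np.maximum(tp + fp, 1e-9)
        # make precision monotonically non-increasing, then integrate
        precision = np.maximum.accumulate(precision[::-1])[::-1]
        return float(np.trapz(precision, recall))

Under this assumed definition, mAP is simply the mean of average_precision over all classes.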
Note: Please check the guidelines before making your submission; any missing item will affect your position.