Final Submission Notice (updated on July 18th):
We are now entering the final model submission phase of our challenge. Participants are required to submit their inference models in the form of a Docker container. The submission window will open on July 18th and close on August 1st. If a participant submits multiple times, only the last submission will be evaluated.
After August 1st, we will evaluate the submitted Docker containers on our withheld test set. Please make sure the segmentation mask files are saved as the outputs of your Docker container (for the format and file structure, please follow our example validation Docker). We will report the final results based on a manual reassessment of those mask files rather than the values reported automatically inside the Docker container.
For all teams that have submitted by August 1st, we will provide the performance results to each individual team via email (each team will receive only its own performance). These results will be confidential and will not be shared with other teams until September 15th.
Please form your team as you wish, with at least one member; however, each person can only join one team, and violations may lead to an invalid submission. All team members should have registered accounts in our "KPIs24 Participants" group, and all members should be listed in the submission form, separated by commas.
The final results and the leaderboard of the challenge will be announced on September 15th.
Please use the Submission Portals for the two tasks. You can submit your results to Task 1, Task 2, or both.
Submission Guidelines:
Algorithms will be accepted as Docker containers according to the technical requirements of https://grand-challenge.org. Since the test dataset cannot be released during the testing phase, participants are required to provide their algorithms in the form of a Docker container, submitted to our designated Docker repository; access to this repository will be granted following approval. This approach ensures a more comprehensive evaluation of each algorithm's reproducibility in different environments, thereby confirming its robustness and reliability in practical applications.
Each team is limited to 1 TOTAL submission during the Submission Phase. Invalid submissions will not count towards your quota. We will take your latest submission as the final one to run against the unseen testing data. The requirements for the Docker containers are as follows:
The submitted Docker container shall provide bind mounts for the input test image folder and the output mask folder (a minimal entry-point sketch in Python follows these requirements).
Example command:
docker run --rm -v /path/to/your/input_dir:/input -v /path/to/your/output_dir:/output --gpus all -it your_docker_image
The input data structure shall follow the same structure as the example validation Docker's input for both Task 1 and Task 2 (see instructions below).
Task 1 segmentation masks must be JPG or PNG images using the .jpg or .png file extension; Task 2 segmentation masks must be TIFF images using the .tiff file extension. The output masks should be saved in binary format, such as {0,1} or {0,255}.
The segmentation masks for Task 2 should be at 40X digital magnification: masks for WSIs scanned at 80X should have their resolution reduced by a factor of 2, while masks for WSIs scanned at 40X should be kept at the original resolution. (Edited on July 11th.)
Segmentation filenames must keep the original filename with an underscore and 'mask' appended, e.g. 'filename_mask.jpg' or 'filename_mask.tiff'.
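To illustrate how these requirements fit together, here is a minimal entry-point sketch in Python, assuming the bind mounts from the example command above. Everything here is illustrative rather than prescribed by the challenge: run_inference is a placeholder for your model, and the magnification heuristic and file handling should be adapted to your own pipeline.

import numpy as np
from pathlib import Path
from PIL import Image

INPUT_DIR = Path("/input")    # bind-mounted test image folder
OUTPUT_DIR = Path("/output")  # bind-mounted mask folder

def run_inference(image: np.ndarray) -> np.ndarray:
    # Placeholder for your model; must return a boolean mask of the same size.
    return np.zeros(image.shape[:2], dtype=bool)

def save_mask(mask: np.ndarray, out_path: Path, downscale: int = 1) -> None:
    # Save a binary mask in {0, 255}, optionally reducing the resolution.
    img = Image.fromarray(mask.astype(np.uint8) * 255)
    if downscale > 1:
        img = img.resize((img.width // downscale, img.height // downscale), Image.NEAREST)
    img.save(out_path)

def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for image_path in sorted(INPUT_DIR.rglob("*")):
        if image_path.suffix.lower() not in {".jpg", ".png", ".tiff"}:
            continue
        image = np.asarray(Image.open(image_path))
        mask = run_inference(image)
        # Task 2 masks must be at 40X, so masks for 80X WSIs are downscaled
        # by 2; how you detect the scan magnification is up to your pipeline
        # (the filename check below is only an illustrative stand-in).
        is_80x = "80x" in image_path.stem.lower()
        downscale = 2 if image_path.suffix.lower() == ".tiff" and is_80x else 1
        out_name = f"{image_path.stem}_mask{image_path.suffix.lower()}"
        save_mask(mask, OUTPUT_DIR / out_name, downscale)

if __name__ == "__main__":
    main()

Note that plain PIL may not handle very large WSI TIFFs; a dedicated WSI library may be needed for Task 2 in practice.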
In the submission Google Form, please use your Synapse email as proof of registration. All participants must form teams (even if a team is composed of a single participant), and each participant can only be a member of a single team.
The form also requires an alternative contact email address for further communication.
Evaluation platform:
The submitted Docker containers will be evaluated on an Ubuntu 20.04 desktop. Detailed information is listed as follows:
CPU: Intel(R) Xeon(R) Gold 6230R @ 2.10 GHz, 52 threads
GPU: NVIDIA RTX A6000 (64 GB available memory)
RAM: 64 GB
Driver version: 535.183.01
CUDA version: 12.2
Docker version: 27.0.1
Evaluation metrics:
Task 1 (patch-level glomeruli segmentation): Dice Similarity Coefficient (DSC).
Task 2 (WSI-level glomeruli segmentation): Dice Similarity Coefficient (DSC) and F1 score.
Task 2 (WSI-level glomeruli detection): F1 score.
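For reference, here is a minimal sketch of the two metrics in Python, assuming binary numpy masks and object-level detection counts. The organizers' exact implementation (in particular, the matching rule that produces the detection counts) is not specified here, so treat this as a definition rather than the official evaluation code.

import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    # Dice Similarity Coefficient between two binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

def detection_f1(tp: int, fp: int, fn: int) -> float:
    # F1 from object-level counts, e.g. predicted glomeruli matched to
    # ground-truth glomeruli under some overlap threshold.
    denom = 2 * tp + fp + fn
    return 1.0 if denom == 0 else 2.0 * tp / denom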
We will rank all teams according to the ranking methodology described in https://www.nature.com/articles/s41598-021-82017-6.