The 'Colorectal Cancer Tumor Grade Segmentation in Digital Histopathology Images: From Giga to Mini Challenge' has officially concluded. The joint paper by the challenge organizers and competitors has been accepted for publication and presentation at ICIP 2025. You can find the paper here. If you are able to attend the conference in Anchorage, we would be delighted to have you in the audience during our presentation. Details about our session are provided below.
Date & Time: Monday, September 15, 2025, 17:15 – 18:00
Location: Room 5 (GC1)
Colorectal cancer (CRC) is the third most prevalent cancer globally, with over 1.8 million new cases diagnosed annually. It is also the second leading cause of cancer-related deaths worldwide. By 2043, the number of CRC cases is projected to reach 3.2 million globally. Due to its complex pathophysiology, CRC has several subtypes that influence prognosis and treatment response. Colon cancer biopsies, obtained through colonoscopy or surgical excision, are routinely analyzed as part of histopathological evaluations. Distinguishing between benign and malignant tumors, as well as determining the tumor grade, are critical tasks for pathologists in their daily practice. Identifying the tumor grade is crucial: it correlates strongly with patient prognosis, with poor differentiation linked to worse outcomes, and plays a key role in determining appropriate treatment options.
In this challenge, a dataset of 103 digital histopathology whole slide images (WSIs) collected from 103 patients at varying magnification levels will be used. The WSIs were pixelwise annotated by expert pathologists into 5 classes: tumor grades 1 through 3, normal mucosa, and others. The dataset contains the large original SVS files as well as their downscaled versions; both are offered to competitors as training data.
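For illustration only, here is a minimal sketch of how a slide and its annotation mask could be inspected, assuming the openslide-python and Pillow packages are installed; the file names are placeholders, and the exact file layout and class encoding are defined in the released dataset.

```python
# Minimal sketch (assumptions: openslide-python and Pillow are installed;
# "example_slide.svs" and "example_mask.png" are hypothetical file names
# standing in for one challenge WSI and its pixelwise annotation mask).
import numpy as np
import openslide
from PIL import Image

slide = openslide.OpenSlide("example_slide.svs")
print("Number of pyramid levels:", slide.level_count)
print("Dimensions per level:", slide.level_dimensions)

# Read the whole slide at the lowest-resolution level as an RGB image.
lowest = slide.level_count - 1
region = slide.read_region((0, 0), lowest, slide.level_dimensions[lowest])
region = region.convert("RGB")  # read_region returns RGBA

# Annotation mask, assumed here to be a single-channel image with integer
# labels for the five classes (grades 1-3, normal mucosa, others).
mask = np.array(Image.open("example_mask.png"))
print("Classes present in this mask:", np.unique(mask))
```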
Launch of the Challenge, 14 April 2025
Result Submission Deadline, 1 June 2025 (extended from 25 May 2025)
Paper Submission Deadline (optional), 11 June 2025 (extended from 28 May 2025)
The dataset is available here.
We will judge submissions based on their macro F-score computed over the five classes. We also report other metrics such as precision, recall, and mIoU. Even though these will not be used in the leaderboard ranking, we believe participants can gain useful insights from the detailed reports.
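As a rough sketch of how the ranking metric can be reproduced locally (assuming integer-labelled masks over the five classes; scikit-learn is used here purely for illustration and is not a prescribed evaluation script):

```python
# Sketch of the leaderboard metric and the additional reported metrics.
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score, jaccard_score

# Placeholder ground-truth and predicted masks, flattened to 1-D label vectors.
y_true = np.random.randint(0, 5, size=(512, 512)).ravel()
y_pred = np.random.randint(0, 5, size=(512, 512)).ravel()

labels = [0, 1, 2, 3, 4]  # the five classes
macro_f1 = f1_score(y_true, y_pred, labels=labels, average="macro")
precision = precision_score(y_true, y_pred, labels=labels, average="macro")
recall = recall_score(y_true, y_pred, labels=labels, average="macro")
miou = jaccard_score(y_true, y_pred, labels=labels, average="macro")  # mean IoU

print(f"macro F1: {macro_f1:.4f}  precision: {precision:.4f}  "
      f"recall: {recall:.4f}  mIoU: {miou:.4f}")
```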
Participants will be asked to submit a Dockerfile to reproduce their results. For a submission to be valid, we will check whether the reported result is reproducible using the Dockerfile. We will also review the code to make sure no "inappropriate" machine learning practice was used. An example of an "inappropriate" ML practice is using test images during training.
Contact points: Alper Bahçekapılı, Duygu Arslan
This challenge is open to all.
Competition data will be shared after filling out a form and accepting the license agreement. Access to the downscaled version will be granted right away, whereas the full SVS versions will be released after signing a license agreement.
Labels of the test images are hidden. To obtain test results, participants will upload their segmentation predictions on the test images to the Codalab platform. We expect participants to register on the platform.
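The sketch below shows one plausible way to package segmentation predictions for upload; the file naming, mask format, and archive layout shown here are assumptions made for illustration, so please follow the format specified on the Codalab competition page.

```python
# Sketch of packaging predictions (assumptions: one single-channel PNG per
# test image, named after a hypothetical test image ID, with integer class
# labels, zipped together; verify the required layout on Codalab).
import zipfile
import numpy as np
from PIL import Image

test_ids = ["test_001", "test_002"]  # hypothetical test image IDs
with zipfile.ZipFile("submission.zip", "w") as zf:
    for image_id in test_ids:
        pred = np.zeros((1024, 1024), dtype=np.uint8)  # placeholder prediction mask
        out_name = f"{image_id}.png"
        Image.fromarray(pred).save(out_name)
        zf.write(out_name)
```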
Participants can form a team (maximum of 4 members) or make submissions individually. However, we limit the maximum number of submissions per day to prevent brute-forcing high scores.