Updates:

- October 2016: the updated BRATS 2016 Challenge proceedings are available.

- May 2016: The 2016 challenge website is online.

- New task for BRATS 2016: quantifying longitudinal changes.


Background and Previous Events

Because brain tumors are so variable in appearance and shape, segmenting them from multi-modal imaging data is one of the most challenging tasks in medical image analysis. Although many segmentation strategies have been proposed in the literature, existing methods are hard to compare because the validation datasets they use differ widely in terms of input data (structural MR contrasts; perfusion or diffusion data; ...), the type of lesion (primary or secondary tumors; solid or infiltratively growing), and the state of the disease (pre- or post-treatment).

In order to gauge the current state-of-the-art in automated brain tumor segmentation and compare between different methods, we are organizing a Multimodal Brain Tumor Image Segmentation (BRATS) challenge in conjunction with the MICCAI 2016 conference and the MICCAI 2016 BrainLes Workshop. For this purpose, we are making available a large dataset of brain tumor MR scans in which the relevant tumor structures have been delineated. 

This challenge continues the series of BRATS 2012 (Nice), BRATS 2013 (Nagoya), BRATS 2014 (Boston), and BRATS 2015 (Munich).



FIGURE: Multimodal data. Examples from the BRATS training data, with tumor regions as inferred from the annotations of individual experts (blue lines) and consensus segmentation (magenta lines). Each row shows two cases of high-grade tumor (rows 1-4), low-grade tumor (rows 5-6), or synthetic cases (last row). Images vary between axial, sagittal, and transversal views, showing for each case: FLAIR with outlines of the whole tumor region (left) ; T2 with outlines of the core region (center); T1c with outlines of the active tumor region if present (right). (Figure from the BRATS TMI reference paper.)


Data, Tasks, and Challenge Format

Data and task. The training and testing data set comprises data from the BRATS 2012 and BRATS 2013 challenges, data from the NIH Cancer Imaging Archive (TCIA) that were prepared as part of BRATS 2014 and BRATS 2015, and a fresh test set. All data sets have been aligned to the same anatomical template and interpolated to 1 mm^3 voxel resolution. The data set contains about 300 high- and low-grade glioma cases. Each case comprises T1 MRI, T1 contrast-enhanced MRI, T2 MRI, and T2 FLAIR MRI volumes. Annotations comprise the whole tumor, the tumor core (including cystic areas), and the Gd-enhanced tumor core, and are described in the BRATS reference paper recently published in IEEE Transactions on Medical Imaging (also see figure below).

All test data sets have been segmented manually (by one to four raters). Training data sets originating from the BRATS 2012 and BRATS 2013 challenges have been segmented manually (by four raters). Training data from TCIA have been annotated by fusing the results of segmentation algorithms that ranked high in the BRATS 2012 and BRATS 2013 challenges. Annotations were inspected visually and approved by experienced raters.



FIGURE: Manual annotation through expert raters. Shown are image patches with the tumor structures that are annotated in the different modalities (top left) and the final labels for the whole dataset (right). The image patches show from left to right: the whole tumor visible in FLAIR (Fig. A), the tumor core visible in T2 (Fig. B), the enhancing tumor structures visible in T1c (blue), surrounding the cystic/necrotic components of the core (green) (Fig. C). The segmentations are combined to generate the final labels of the tumor structures (Fig. D): edema (yellow), non-enhancing solid core (red), necrotic/cystic core (green), enhancing core (blue). (Figure from the BRATS TMI reference paper.)



Evaluation. 
Both Dice scores and Hausdorff distances will be evaluated for "whole tumor", "tumor core" and "active tumor" using the VSD online evaluation system.  As the BRATS 2012 and BRATS 2013 test data is a subset of the BRATS 2015 test data, we will also calculate performance on the 2012/2013 set to allow a comparison against the performances reported in the BRATS reference paper.
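To make the Dice overlap measure concrete, the following is a minimal sketch of how it can be computed for one binary tumor-region mask (e.g. "whole tumor") against a reference mask, assuming both are NumPy arrays of the same shape. This is an illustration of the metric only, not the VSD evaluation system's implementation.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap of two binary masks: 2|A∩B| / (|A|+|B|), 1 = perfect."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

The same function applies unchanged to each of the three evaluated regions, since each is reduced to a binary label map before scoring.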

FIGURE: State of the art methods from the previous BRATS benchmarks. Hausdorff scores for two tasks from the BRATS TMI paper. Boxplots show quartile ranges of the scores on the test datasets; whiskers and dots indicate outliers. Black squares indicate the mean score (for Dice also shown in the table of Fig. 7), which were used here to rank the methods. The Hausdorff distances are reported on a logarithmic scale. (Figure from the BRATS TMI reference paper.)


New task in 2016: Quantifying longitudinal changes. As part of the 2016 challenge we created a new test set comprising two or more observations of the same patients. It will be disclosed to the participants which data sets originate from the same patient. Segmentations submitted by the participants will be evaluated against manual volumetric annotations as in previous years (using Hausdorff and Dice scores), and against the experts' ratings of 'progress', 'stable disease', or 'shrinkage'. As a new score that is close to the clinical diagnostic task, we will evaluate whether the volumetric segmentations provided by the participants are accurate enough to detect the changes indicated by the neuroradiologists. (Participants will not be required to provide these ratings themselves.) We will also evaluate the accuracy of the volumetric changes between any two time points.
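The longitudinal comparison reduces to computing the relative tumor-volume change between two time points and mapping it to a change category. The sketch below illustrates this; note that the 25% threshold is purely an illustrative assumption and is not the challenge's criterion, which is based on the experts' ratings.

```python
def volume_change(vol_t0_mm3, vol_t1_mm3):
    """Relative volume change between baseline and follow-up scans."""
    return (vol_t1_mm3 - vol_t0_mm3) / vol_t0_mm3

def classify_change(rel_change, threshold=0.25):
    # The threshold is illustrative only; the challenge compares
    # against neuroradiologists' ratings, not a fixed cutoff.
    if rel_change > threshold:
        return "progress"
    if rel_change < -threshold:
        return "shrinkage"
    return "stable disease"
```

A segmentation method whose per-time-point volumes are individually biased can still classify change correctly if the bias is consistent, which is one motivation for scoring volumetric change separately from per-scan overlap.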


Participation and Important Dates

Data availability. The co-registered, skull-stripped, and annotated training data set is available via the Virtual Skeleton Database (VSD). This new BRATS data set supersedes all previous BRATS data sets. The training data set for BRATS 2016 is identical to the one from BRATS 2015.

Short papers. Participants will have to evaluate their segmentation performance on the training data and submit, by the end of July, a short paper describing their segmentation method and preliminary results (2-4 LNCS pages; reviewed lightly by the organizers). Please send abstracts to Bjoern Menze via email with the subject line "MICCAI-BRATS submission". Participants who wish to submit a significantly longer version to the MICCAI 2016 BrainLes Workshop - which will be part of the same workshop event at MICCAI in Athens - can submit that longer manuscript instead. Short papers will be part of the workshop proceedings distributed by the MICCAI organizers.

Evaluation. The independent set of test scans will be made available to each participating team in early to mid September. The teams will analyze the images using their local computing infrastructure and will have to submit their segmentation results to the VSD submission system within 48 hours.

Brainlesion workshop. Results of the challenge will be reported as part of a joint event with the MICCAI 2016 BrainLes Workshop and the MICCAI 2016 Ischemic Stroke Segmentation Challenge.

Post-conference LNCS paper. Top ranking methods will be invited to submit papers to the LNCS proceedings of the BrainLes Workshop.   

Joint post-conference journal paper. In the weeks following the challenge the participating teams will be invited to contribute to a joint paper describing and summarizing the challenge outcome, which we will then submit to a high-impact journal in the field. This paper will summarize results of BRATS 2014, BRATS 2015, and BRATS 2016. We will aim at a more clinically oriented audience.


Organization

Bjoern Menze, TU München (main contact)
Mauricio Reyes, University of Bern