Evaluation

We will ask the participants to provide the following:

(1) A link to their GitHub repository containing all the code needed to reproduce the proposed algorithm(s).

(2) Clear instructions for reproducing the computing environment (preferably a conda environment file) and running the model in it. This should take the form of a script that we can execute after changing only the path to the test data.

(3) A minimal example script that implements the proposed algorithm. It must accept as input a sinogram of size 360 x 256 x 256 (the same array shape as the training data) and output a volume of size 256 x 256 x 256; see the sketch after this list.
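For concreteness, a minimal sketch of what we mean by such a script is given below. It is illustrative only: the function name `reconstruct`, the file paths, and the use of NumPy `.npy` files are our placeholders, not requirements; participants may structure their code however they like as long as the input and output shapes match.

```python
# minimal_example.py -- illustrative sketch only; all names and paths are placeholders.
import numpy as np


def reconstruct(sinogram: np.ndarray) -> np.ndarray:
    """Reconstruct a 256 x 256 x 256 volume from a 360 x 256 x 256 sinogram.

    Replace this body with the proposed algorithm; the placeholder
    returns zeros purely to illustrate the expected interface.
    """
    assert sinogram.shape == (360, 256, 256), "unexpected sinogram shape"
    return np.zeros((256, 256, 256), dtype=sinogram.dtype)


if __name__ == "__main__":
    # Path to the test data -- the only line the evaluators should need to change.
    sinogram = np.load("path/to/test_sinogram.npy")
    volume = reconstruct(sinogram)
    assert volume.shape == (256, 256, 256), "unexpected output shape"
    np.save("reconstruction.npy", volume)
```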


We will expect the full cooperation of the participants should we face any difficulties in running their code.

The evaluation machines have an upper limit of 4 NVIDIA A100 GPUs; if a model requires more hardware, we will ask the participants to provide access to it.



Each group can either submit one model for both low and normal doses or submit two different models, one for each dose. The criterion for deciding the winner will be the mean-squared error (MSE) against the ground truth, averaged over a set of selected test scans. We will invite the top three groups in the low-dose category and the top two groups in the normal-dose category to submit a two-page paper and present it at ICASSP-2024 (the paper will appear in the ICASSP proceedings following peer review).
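For reference, the metric we have in mind is the standard voxel-wise MSE, averaged over the selected test scans. A minimal sketch follows, assuming the reconstructions and ground-truth volumes are NumPy arrays of identical shape; the exact evaluation harness may differ in implementation detail.

```python
import numpy as np


def mse(reconstruction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Voxel-wise mean-squared error between two volumes of identical shape."""
    assert reconstruction.shape == ground_truth.shape, "shape mismatch"
    return float(np.mean((reconstruction - ground_truth) ** 2))


def mean_mse(pairs) -> float:
    """Average MSE over an iterable of (reconstruction, ground_truth) pairs."""
    scores = [mse(rec, gt) for rec, gt in pairs]
    return sum(scores) / len(scores)
```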


We will try to avoid repeated conference presentation invitations for the same method within the same group. Thus, if a single method (or very similar methods) simultaneously ranks among the top three in the low-dose category and the top two in the normal-dose category, we will invite the next-best submission in the low-dose category, although the best-performing method will still be declared the winner.