1 Structural module
When this part is selected and F9 or Ctrl+Enter is pressed, the T1 module will run.
1) Rigid-body registration of T1 to MNI center 001_coreg_T12MNI.status
This part performs a simple rigid-body registration using the SPM12 co-register function to align T1w images with the Anterior Commissure-Posterior Commissure (ACPC) line, which is the center of the MNI space, using an MNI prior. This rigid-body transformation is applied to the NIfTI orientation matrix only, without having to resample the image. The same transformation is applied to all other images of the same subject (e.g. FLAIR, WMH_SEGM, as well as ASL4D and M0 for all sessions). This first registration step puts all T1 images in a good rough starting point for the CAT12 segmentation and the SPM12 longitudinal registration.
For registration, it is generally best to iterate from rough to more precise alignments. When registering a T1w image to a template, the initial differences are too large for a smaller sampling distance (i.e. higher resolution) to benefit a rigid-body transformation. Hence, a registration with a 4 mm sampling distance performs as well as repeating the registration with smaller sampling distances, at a much lower computational burden. What does have a positive effect is to resample the T1w image to MNI after the initial registration to MNI (i.e. alignment with the ACPC line), which removes the neck. Again, a 4 mm sampling distance suffices for this second alignment to the ACPC line.
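The key point that the rigid-body transformation touches only the orientation matrix can be sketched as follows. This is an illustrative Python sketch, not ExploreASL's MATLAB code: the rotation order and the toy affine are assumptions (SPM's spm_matrix uses a similar but not necessarily identical parameterization), and the estimated transform values are made up.

```python
import math

def rigid_matrix(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 rigid-body transform (translations in mm, rotations in rad).
    Rotation order Rx*Ry*Rz is one common convention (an assumption here)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    R = [[cy * cz,             -cy * sz,            sy      ],
         [cx * sz + sx * sy * cz, cx * cz - sx * sy * sz, -sx * cy],
         [sx * sz - cx * sy * cz, sx * cz + cx * sy * sz,  cx * cy]]
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0, 0, 0, 1]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Updating the header affine re-orients the image without touching voxel data:
# the new affine maps voxel indices to ACPC-aligned world (mm) coordinates.
voxel_to_world = [[1.0, 0, 0,  -90.0],   # toy 1 mm isotropic affine (assumed)
                  [0, 1.0, 0, -126.0],
                  [0, 0, 1.0,  -72.0],
                  [0, 0, 0, 1]]
T = rigid_matrix(2.0, -1.5, 0.5, 0.02, 0.0, -0.01)  # as estimated by coregistration
new_affine = matmul4(T, voxel_to_world)
```

Because only the affine changes, no interpolation happens at this stage; the image is resampled once later, in the single-interpolation step described below.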
2) Register FLAIR->T1w 002_coreg_FLAIR2T1w
This part will rigid-body register the FLAIR to the T1w, in a similar fashion as described above. In //dartel/FLAIR_REGDIR/rFLAIR_subject_reg.jpg, the WM segmentation from the T1 is projected in red on the FLAIR scan, to check the registration between FLAIR and T1.
3) Resample FLAIR 003_resample_FLAIR2T1w
This resamples the FLAIR to the same space as the T1w image. If the WMH_SEGM.nii exists, it should be aligned with the FLAIR.nii.
4) Segment WM lesions 004_segment_FLAIR
This step segments the WM lesions using the Lesion Segmentation Toolbox (LST) [ref], with either the lesion prediction algorithm (LPA) or the lesion growing algorithm (LGA). Their relative performance depends on the image: in 2D, LGA seemed better at large lesions than at small lesions, whereas in 3D this was the opposite. If no choice has been made, LGA is performed by default. LGA takes longer to process, but also includes the creation of a FLAIR bias field. If LPA is selected, a bias field is created from the T1w image using a fast SPM12 segmentation, and this bias field is applied to both the T1w and the FLAIR image. The WMH segmentation from this part will be used to correct the T1w image in the lesion filling part below, and is stored as WMH_SEGM.nii. If this NIfTI already existed (e.g. from a previous machine learning segmentation), the existing WMH_SEGM.nii will be used. The segmentations can be checked in //dartel/FLAIR_CheckDir/rFLAIR_SubjectName.jpg. In this population, LGA seemed more stable, but LPA was better able to segment large confluent lesions:
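Both LST algorithms produce a lesion probability map, which is then binarized into a mask such as WMH_SEGM.nii. A minimal Python sketch of that last step (illustrative only; the 0.5 threshold is an assumed example, not necessarily the LST default):

```python
def binarize_lesion_map(prob_map, threshold=0.5):
    """Binarize a 3D lesion probability map (nested lists, values in [0, 1])
    into a 0/1 WMH mask. Threshold is illustrative, not an LST default."""
    return [[[1 if p >= threshold else 0 for p in row]
             for row in sl] for sl in prob_map]

def lesion_volume_ml(mask, voxel_vol_mm3):
    """Total lesion volume: voxel count times voxel volume, in mL."""
    n = sum(v for sl in mask for row in sl for v in row)
    return n * voxel_vol_mm3 / 1000.0
```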
5) Lesion filling 005_LesionFilling
On a T1w image, the intensity of WMH is similar to that of GM. Therefore, WMH within the WM can be misclassified as GM. In this step, the lesions in the WM are removed and corrected by estimating the intensities from the neighboring WM voxels. Lesion filling can be checked in //dartel/segm_corr_SubjectName.jpg. The left image in the example above shows the original T1w image: note how it contains many WM lesions, visible as gray patches within the WM. The middle image has the WMH segmentations overlaid in red, and the right image shows the same T1w after lesion filling. It can be clearly seen that the lesions (gray patches) are gone, without visible artifacts.
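The core idea of filling lesion voxels from neighboring WM intensities can be sketched in a few lines. This is a deliberately minimal Python illustration (simple local averaging over a 3x3x3 neighborhood); the SPM-based lesion filling used in practice is more elaborate.

```python
def fill_lesions(img, lesion, wm):
    """Replace lesion voxels by the mean intensity of neighbouring WM voxels.
    img, lesion, wm: 3D nested lists of equal shape (lesion/wm are 0/1 masks).
    Minimal sketch of the principle, not the actual toolbox algorithm."""
    nx, ny, nz = len(img), len(img[0]), len(img[0][0])
    out = [[[img[x][y][z] for z in range(nz)] for y in range(ny)] for x in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if not lesion[x][y][z]:
                    continue
                # Collect healthy WM intensities in the 3x3x3 neighbourhood
                vals = [img[i][j][k]
                        for i in range(max(0, x - 1), min(nx, x + 2))
                        for j in range(max(0, y - 1), min(ny, y + 2))
                        for k in range(max(0, z - 1), min(nz, z + 2))
                        if wm[i][j][k] and not lesion[i][j][k]]
                if vals:
                    out[x][y][z] = sum(vals) / len(vals)
    return out
```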
6) Get WMH volumes 006_Get_WMH_vol
This part obtains the WMH volumes from the segmented lesion files and stores them in /analysis/dartel/Tissue_Volume/WMH_LST_(LPA|LGA)_SubjectName.csv, where LPA or LGA is the algorithm used for the WMH segmentation (LPA is used by default). The example below shows the path of the subject and the original filename (which is moved to /analysis/SubjectName/WMH_SEGM.nii if this file did not already exist, or deleted if it did, e.g. when another algorithm was used to segment the WMH lesions). It then shows whether LPA or LGA was run (0 or 1), the total lesion volume (TLV) and the number of lesions.
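The "number of lesions" is a count of connected lesion clusters in the binary mask. A pure-Python sketch of that count (illustrative; the actual toolbox code and its connectivity rule may differ — 6-connectivity is assumed here):

```python
from collections import deque

def count_lesions(mask):
    """Count 6-connected lesion clusters in a 3D 0/1 mask via BFS flood fill."""
    nx, ny, nz = len(mask), len(mask[0]), len(mask[0][0])
    seen = set()
    n_lesions = 0
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if not mask[x][y][z] or (x, y, z) in seen:
                    continue
                n_lesions += 1                     # new cluster found
                q = deque([(x, y, z)])
                seen.add((x, y, z))
                while q:                           # flood-fill the cluster
                    cx, cy, cz = q.popleft()
                    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        p = (cx + dx, cy + dy, cz + dz)
                        if (0 <= p[0] < nx and 0 <= p[1] < ny and 0 <= p[2] < nz
                                and mask[p[0]][p[1]][p[2]] and p not in seen):
                            seen.add(p)
                            q.append(p)
    return n_lesions
```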
7) Segment T1w 007_segment_T1w
This performs a segmentation of the T1 image into pGM (c1T1), pWM (c2T1), pCSF (c3T1), and skull, soft tissue and air (c4T1-c6T1; these are not saved). pGM stands for gray matter probability map. The pGM, pWM and pCSF are used to estimate tissue volumes later in this module. pGM is used later for registration of the ASL PWI to pGM, and the pGM and pWM are used for partial volume correction and visual quality control (QC).
SPM12 3D T1 segmentation (which also works on a T2 or FLAIR; even a 2D T2 with 5 mm slices has been tested) uses a combination of intensity-based and prior-based segmentation. The first segments based on the T1 MR intensity differences between the GM, WM and CSF tissue types, whereas the second performs a uniform non-linear registration of standard space tissue priors to the native (i.e. subject) space, to regularize, or steer, the intensity-based segmentation. In other words, the intensity-based segmentation is helped by the registration of the tissue priors into native space, which represent the likelihood of finding a certain tissue type at a certain place. This segmentation works very well, and is only wrong in about 1 out of 300 segmentations.
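The way the registered tissue priors "steer" the intensity-based segmentation can be summarized as a per-voxel Bayesian combination: posterior ∝ intensity likelihood × spatial prior. A toy Python sketch (the Gaussian intensity model per class follows the unified-segmentation idea, but all numbers below are made up for illustration):

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def tissue_posterior(intensity, priors, means, sds):
    """Combine a per-class Gaussian intensity likelihood with a registered
    spatial prior (sketch of the unified-segmentation principle).
    priors/means/sds: dicts keyed by tissue class ('GM', 'WM', 'CSF')."""
    unnorm = {t: priors[t] * gauss(intensity, means[t], sds[t]) for t in priors}
    total = sum(unnorm.values())
    return {t: v / total for t, v in unnorm.items()}

# Toy numbers (assumed): a voxel with an intensity between GM and WM, lying
# where the registered GM prior is high, ends up classified as GM.
post = tissue_posterior(
    intensity=95,
    priors={'GM': 0.7, 'WM': 0.2, 'CSF': 0.1},
    means={'GM': 80, 'WM': 110, 'CSF': 30},
    sds={'GM': 15, 'WM': 10, 'CSF': 10})
```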
The uniform non-linear registration uses a 12 degrees of freedom transformation (translations, rotations, shearing and scaling, all in XYZ) that is non-linear but, unlike DARTEL, not spatially variant: it is the same across the brain (uniform) and can be described with a few parameters only (parametric). This transformation is comparable to the one obtained with the old_normalize SPM function, and is saved as /analysis/SubjectName/y_T1.nii. For each voxel, this deformation image gives the new coordinates. Like the segmentation, this registration nearly never fails. Therefore, if this transformation (which was used to get standard space tissue priors into native subject space) is inverted, it can be used to get native space images into standard space. This works very well, already provides a good between-subject registration, and is a good starting point for DARTEL, which can improve the between-subject registration further. Furthermore, this removes the requirement of calculating the "common space -> MNI space" warp later, which used to be done after DARTEL.
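The idea of a deformation image holding "the new coordinates" per voxel can be sketched as pull-resampling: each output voxel looks up where to sample the source image. Illustrative Python only — coordinates are in voxel units and nearest-neighbour sampling is used for brevity, whereas the real y_T1.nii stores mm coordinates and SPM interpolates properly.

```python
def apply_deformation(src, deform):
    """Pull-resample src through a deformation field (nearest neighbour).
    deform[x][y][z] holds the (voxel-unit) coordinates to sample src at,
    i.e. the per-voxel 'new coordinates' a y_*.nii field represents (sketch)."""
    nx, ny, nz = len(src), len(src[0]), len(src[0][0])
    out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                sx, sy, sz = (int(round(c)) for c in deform[x][y][z])
                if 0 <= sx < nx and 0 <= sy < ny and 0 <= sz < nz:
                    out[x][y][z] = src[sx][sy][sz]
    return out
```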
When iterating DARTEL and affine registrations of the template to MNI, the author noticed that repeating DARTEL only improved the between-subject registration when DARTEL was interleaved with affine registrations of the mean template to MNI. This causes the mean pGM (template) to become larger, as most population segmentations are smaller than the MNI template. The author's hypothesis is that making the pGM larger by the affine registration to the MNI template also enlarges the spaces between sulci, providing DARTEL more "space to work with". In addition, this makes atrophied brains larger, making them more similar to non-atrophied brains.
CAT12 is a new version of VBM12, a well-known software package for voxel-based morphometry (VBM). This segmentation is built upon the SPM12 segmentation, and in addition: 1) accounts for spatial variability in the GM-WM intensities, 2) performs partial volume correction, 3) applies three edge-preserving noise filters, 4) adds a DARTEL registration based on existing MNI templates from n=555 healthy controls, and 5) performs skull stripping to deal with blood vessels and meninges, sulci modeling, and an improved clean-up routine.
SPM12 (left) vs. CAT12 (right) segmentation. Note the line around the ventricles due to partial volume error, which is solved in CAT12.
Also in cases of atrophy, CAT12 (left) visibly outperforms SPM12 (right).
Comparison in common space, average of n=15, before DARTEL. CAT upper row, SPM lower row. Note the clear difference in definition. After DARTEL, the difference is smaller, but still there.
The thalamus and globus pallidus contain a mixture of GM and WM tissue, which gives these regions a T1 intensity between those of GM and WM. For this reason, they are not correctly segmented by SPM: the central half of the thalamus is segmented as GM and the peripheral half as WM, and the globus pallidus is segmented entirely as WM. To address this, "enhanced priors" or templates have recently been introduced that contain the correct shapes for these subcortical structures; the old templates had the same flaws as the individual T1 images. This works well, as shown below. It should be noted, however, that this still does not improve the intensities on the T1 images themselves. The segmentation in these regions therefore shifts from intensity-based to template-based. Since this is the same for all subjects, the segmented images will look better in the subcortical regions and the volumetric results will be closer to the actual values, but the between-subject variance in the volumes of these regions still cannot be correctly captured. This is reflected by the fact that these regions appear very similar, nearly identical, in all segmentations, which indicates that template-based segmentation took over from intensity-based segmentation in these subcortical regions. The effect of these new enhanced templates on registrations such as DARTEL is not known, but is expected to be beneficial. DARTEL will effectively omit these regions, since with the "enhanced templates" the thalamus and globus pallidus look nearly identical between subjects, which is essentially the same as omitting these regions from the registration. Because the CBF in these regions has an intensity between GM and WM CBF, the PWI-pGM registration will work better this way.
As with the longitudinal registration (see above), by default the bias-field regularization is turned off (resulting in more extensive bias-field modeling) for GE T1 images, since these scanners have a larger bore and hence a larger bias-field.
These figures show examples of the old (left) and new (right) priors/templates.
New segmentation by CAT12 (formerly VBM12) toolbox within SPM
CAT12 is a recent addition to ExploreASL. This toolbox is based on the SPM12 segmentation, but extends it with more elaborate bias-field correction, denoising and local intensity-based segmentation. The latter allows the GM-WM intensity differences to vary locally, whereas in SPM12 the GM-WM intensity differences are fixed across the whole brain. In addition, it performs partial volume correction and WMH correction. Note the differences in the GM lining around the ventricles, which is a partial volume artifact (the T1 intensity of GM lies between those of CSF and WM). In a recent challenge of segmentation software packages, CAT12 reached the highest position among freely available segmentation algorithms that do not use training (supervised machine learning algorithms perform better): CAT12 position 14, SPM position 26, FSL position 27, FreeSurfer position 31.
Segmentation comparison between SPM12 (left) and CAT12 (right)
Segmentation comparison between SPM12 (left) and CAT12 (right) in atrophy/motion
CAT12 also runs a quick version of DARTEL, using templates previously created from an n~500 dataset. This works very well, but makes the subcortical segmentation look more like the original SPM templates, so the use of the enhanced templates is not really useful for CAT12. As shown below, this works well, but an additional DARTEL is still helpful to provide a bit more detail. The affine and DARTEL deformations are combined in the SUBJECTDIR/y_T1.nii file, for backwards compatibility with the SPM12 segmentation. The other files also have the same names. There is a new file SUBJECTDIR/catreport_T1.pdf containing the CAT12 processing results. Furthermore, there are now ROOT/dartel/Tissue_volume/cat_T1_SubjectName.mat files containing the CAT12 processing results, including the volumes, which are used in 003_tissue_volume instead of running the SPM tissue volume tool.
ROI results from several atlases, including Hammers and NeuroMorphoMetrics, are stored in ROOT/dartel/Tissue_volume/catROI_T1_SubjectName.mat.
Average population (PreDiva, n=330) pGM images before (left) and after (right) additional DARTEL. All T1w’s were segmented using CAT12.
In some cases, the segmentation fails and CAT12 reports too poor tissue contrast. This mostly occurs with anatomical deviations, such as large ventricles, that do not allow for good bias-field modeling. Empirically, the best solution in this case was to catch the error and automatically repeat the CAT12 segmentation, but only after contrast enhancement of the T1w image (T1w^0.5) and allowing a less regularized bias-field.
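The retry-with-contrast-enhancement idea can be sketched as follows. Illustrative Python only: `fake`/`segment` stand in for the real CAT12 call, the scaling to [0, 1] before applying the power is an assumption, and the real retry also relaxes the bias-field regularization.

```python
def contrast_enhance(img, gamma=0.5):
    """Scale a 3D image (nested lists) to [0, 1] and apply gamma < 1, which
    boosts low/mid intensities (the T1w^0.5 enhancement described above)."""
    lo = min(min(min(r) for r in sl) for sl in img)
    hi = max(max(max(r) for r in sl) for sl in img)
    scale = (hi - lo) or 1.0
    return [[[((v - lo) / scale) ** gamma for v in row] for row in sl] for sl in img]

def segment_with_retry(img, segment):
    """Run a segmentation callable; on failure, retry on the enhanced image.
    'segment' is a hypothetical stand-in for the CAT12 call."""
    try:
        return segment(img)
    except RuntimeError:
        return segment(contrast_enhance(img))
```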
8) Get tissue volumes 008_TissueVolume
This part uses the segmentation files to calculate the volumes of GM, WM and CSF tissue and stores them in /analysis/dartel/Tissue_Volume. These will be summarized later in the QC module.
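Tissue volume from a probability map is simply the sum of the voxel probabilities times the voxel volume. A minimal sketch (illustrative Python, not the SPM tissue volume tool):

```python
def tissue_volume_ml(prob_map, voxel_dims_mm):
    """Tissue volume = sum of tissue probabilities * voxel volume (mm^3 -> mL).
    prob_map: 3D nested list of probabilities; voxel_dims_mm: (dx, dy, dz)."""
    voxel_vol = voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2]
    total_p = sum(p for sl in prob_map for row in sl for p in row)
    return total_p * voxel_vol / 1000.0
```

Summing probabilities rather than thresholding the map keeps the partial-volume information in the estimate.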
9) Reslice structural images -> common space 009_reslice2DARTEL
This part transforms all T1 images to common space, which is a starting point for DARTEL in case this will run. It uses the function NativeDeformations.m, which mathematically combines and concatenates all transformations using the SPM12 deformations tool; this is used for all resampling/reslicing parts in ExploreASL. This ensures that for all ExploreASL images, whether intermediate images used for further processing (such as DARTEL registration) or final images used for quantitative or qualitative analysis, only a single interpolation is performed to transform and resample them from their native space to the destination space, which is usually the common/standard/MNI space.
For this single interpolation, we compared several interpolation options available in SPM12. Trilinear interpolation resulted in slightly more blurring, whereas B-splines resulted in sharper, less blurred images; higher-order B-splines resulted in overshoot edge effects. As an optimal compromise, we decided to use 2nd (the lowest) order B-splines for all interpolations of important data. For interpolations of data where a small degree of blurring is allowed, such as the slice number gradient or the M0 image, ExploreASL uses trilinear interpolation since this is faster. Files will be saved with an r (resliced) prefix as //analysis/dartel/rT1_SubjectName.nii, //analysis/dartel/rc1T1_SubjectName.nii and //analysis/dartel/rc2T1_SubjectName.nii. The resliced T1s can be used e.g. as a background for statistical parametric maps.
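For reference, the trilinear interpolation used for the less critical images weights the eight surrounding voxels by their distance to the sampling point. A self-contained Python sketch (illustrative; SPM's implementation differs in boundary handling):

```python
def trilinear(img, x, y, z):
    """Trilinear interpolation of a 3D nested-list image at fractional voxel
    coordinates (x, y, z). Edge voxels are clamped (an assumption; SPM's
    boundary handling differs). Sketch of the principle only."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    out = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Weight of each corner of the surrounding voxel cube
                w = (1 - dx, dx)[i] * (1 - dy, dy)[j] * (1 - dz, dz)[k]
                xi = min(x0 + i, len(img) - 1)
                yj = min(y0 + j, len(img[0]) - 1)
                zk = min(z0 + k, len(img[0][0]) - 1)
                out += w * img[xi][yj][zk]
    return out
```

Because the weights are non-negative and sum to 1, trilinear interpolation can never overshoot, which is why it blurs slightly; higher-order B-splines can overshoot at edges, as noted above.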
For ExploreASL, all image outputs are stored in 1.5x1.5x1.5 mm resolution MNI space, which gives 121x145x121 voxels, according to the MNI field of view (FoV) of 181.5x217.5x181.5 mm.
This combination of all transformations into a single interpolation reduces the effect of interpolation errors. The 1.5 mm resolution is much higher than the original ASL resolution; therefore, any residual interpolation errors occur at a resolution higher than the original one. Furthermore, all subjects are in reality scanned in a different orientation, and all control-label pairs may also have been scanned in a different orientation because of head motion. Therefore, even though the original acquisition resolution may have been much lower, using a high common space resolution benefits the analysis, since different orientations fit better in a higher resolution (less "smoothing"). When we do statistics after all image post-processing, we need to smooth or average our results; optimally, we should smooth our results back to the original acquisition resolution, or use partial volume correction to do this for us, accounting for differences in GM and WM perfusion. The benefit of this practice is that all post-processing is performed at a high resolution, and in the end we can smooth back to a lower resolution to reduce the influence of registration or interpolation artifacts.
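When smoothing back to the acquisition resolution, the required kernel follows from the rule that FWHMs of sequential Gaussian blurs add in quadrature. A short worked sketch (the resolutions in the example are illustrative, not ExploreASL defaults):

```python
import math

def smoothing_fwhm_needed(current_fwhm, target_fwhm):
    """Per-axis Gaussian kernel (FWHM, mm) needed to take an image from an
    effective resolution current_fwhm to target_fwhm, using the quadrature
    rule FWHM_applied = sqrt(target^2 - current^2). Sketch only."""
    return tuple(math.sqrt(t ** 2 - c ** 2) for c, t in zip(current_fwhm, target_fwhm))

# e.g. from the 1.5 mm analysis grid back toward a typical ASL effective
# resolution (values assumed for illustration):
kernel = smoothing_fwhm_needed((1.5, 1.5, 1.5), (3.0, 3.0, 7.0))
```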
10) Visual & automatic QC 010_visualize
Here *.jpeg images are created in /analysis/dartel/T1_CHECKDIR so that the results can be quickly scrolled through and checked without having to open all individual NIfTI files. This should ideally be done before proceeding to the next module; it can also be done at a later stage, but then there is a chance of having to re-run part of ExploreASL. Any simple viewer will do; the author usually uses the default Windows 7 image viewer.
These images show 12 slices of the resliced T1 file (rT1_subject.jpg) and of the segmentation (rT1_subject_reg.jpg). In the latter, the WM segmentation is shown in red, projected on the T1 scan (grayscale). The T1 scan should be centered and should not show anatomical abnormalities; this can easily be checked by comparing different T1 scans. The WM segmentation in red should "fit" the T1 scan nicely. Example images are shown below:
999_ready.status denotes that this module has been completed for the respective subject.
As an automated registration QC, we compare individual images to a mean population image, which can be the mean of the currently processed population or an existing template. We calculate the squared difference between the two images, without masking, and take the mean of this squared-difference image as a QC parameter for each image: the Mean of Squares (MoS). For the population, the median MoS is a stable parameter; together with 2 * the mean absolute difference (MAD), it gives a threshold to identify registration outliers.
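The MoS metric and the outlier threshold described above can be sketched directly. Illustrative Python on flat intensity lists; the exact MAD definition (here: mean absolute deviation from the median) is an assumption based on the description.

```python
def mean_of_squares(img, template):
    """MoS: mean squared difference between an image and the population mean
    (or template), computed without masking. Flat lists for simplicity."""
    return sum((a - b) ** 2 for a, b in zip(img, template)) / len(img)

def outlier_threshold(mos_values):
    """Outlier threshold = median MoS + 2 * mean absolute difference, with
    MAD taken relative to the median (assumed interpretation)."""
    s = sorted(mos_values)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    mad = sum(abs(v - median) for v in mos_values) / n
    return median + 2 * mad
```

An image whose MoS exceeds the threshold would be flagged for visual inspection of its registration.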