5 Population module

After all image processing of the individual images, the population module processes images on a group basis, where group statistics can inform the processing of individual images (e.g. site-specific scaling, creation of a VBA mask, ROI statistics, etc.).

CreatePopulationTemplates.m This function creates several visual templates/atlas images for the studied population: mean, between-subject SD, CoV (SD/mean), maximal intensity projection (MIP) and between-subject SNR (mean/SD). These images can be useful to identify scanner- or population-specific image features and/or data quality issues that are too noisy/variable to assess at the individual level.
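As an illustration only (not the actual CreatePopulationTemplates.m code), the templates above could be computed from a hypothetical 4D stack CBF4D [X Y Z nSubjects] of spatially normalized CBF images as follows:

    % CBF4D is an assumed 4D stack of spatially normalized CBF images
    MeanIM = mean(CBF4D, 4);             % population mean
    SDIM   = std(CBF4D, [], 4);          % between-subject SD
    CoVIM  = SDIM ./ max(MeanIM, eps);   % coefficient of variation (SD/mean)
    SNRIM  = MeanIM ./ max(SDIM, eps);   % between-subject SNR (mean/SD)
    MIPIM  = max(CBF4D, [], 4);          % maximal intensity projection (here taken across subjects)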

CreateBiasfield.m

This function provides a pragmatic solution to the unavoidable quantification differences between sites, scanners and/or sequences (hereafter referred to as sites). Note that even a software update that slightly changes the echo time (TE) may introduce quantification differences. It is therefore good practice to check the equality of ASL parameters such as TE and TR in \\analysis\dartel\PARAMETERS\ (as described elsewhere). This biasfield method removes all variability that is similar within a site but differs between sites. Hence it assumes that there are no physiological differences between sites, so it should be checked that the distribution of physiologically interesting parameters is equal across sites. Because we use mean images, the quality of this correction depends on the number of subjects per site; as a rule of thumb, a minimum of 10 images per site is needed to perform this correction without influence of physiology.

When you provide ExploreASL with a Site.mat in the ROOT analysis directory, it will use these sites to create mean site CBF maps and smooth them with a 6.4 mm FWHM kernel constrained within an MNI brainmask; these are referred to as "bias-fields". This is a compromise between a very smooth biasfield (e.g. T1 segmentation biasfields are usually much smoother) and a voxel-wise correction: we assume a certain spatial autocorrelation of the quantification differences between sites, but also acknowledge that there may be susceptibility artifacts that differ between sites. This function then creates an average bias-field for the total study, which is extrapolated to avoid division effects at the edges of the images.
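A minimal sketch of this bias-field principle is shown below. It is not the ExploreASL implementation; it assumes a 4D stack CBF4D [X Y Z nSubjects], a logical MNI brainmask BrainMask, a per-subject site vector SiteID, and imgaussfilt3 from the Image Processing Toolbox for the masked smoothing:

    VoxelSize = 1.5;                        % mm, xASL MNI space
    Sigma     = (6.4/2.355) / VoxelSize;    % 6.4 mm FWHM -> Gaussian sigma in voxels
    Sites     = unique(SiteID);
    SiteMean  = zeros([size(BrainMask) numel(Sites)]);

    for iSite = 1:numel(Sites)
        MeanIM = mean(CBF4D(:,:,:,SiteID==Sites(iSite)), 4);   % mean site CBF map
        % smooth within the brainmask only (mask-weighted smoothing)
        Num = imgaussfilt3(MeanIM .* BrainMask, Sigma);
        Den = imgaussfilt3(double(BrainMask),   Sigma);
        SiteMean(:,:,:,iSite) = Num ./ max(Den, eps);
    end

    StudyMean = mean(SiteMean, 4);                 % average "bias-field" for the total study
    BiasField = SiteMean ./ max(StudyMean, eps);   % per-site bias-fields; each subject's CBF
                                                   % would then be divided by its site's field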

It is important to check that this biasfield correction does not interfere with the partial volume correction (PVC). This can be avoided by making sure the CBF images from all scanners have a similar effective spatial resolution (~similar PSF), which is referred to as smoothness equalization [ref].

This function also normalizes the mean GM CBF to 60 mL/100g/min. After this step, the smooth mean CBF differences between scanners have been removed.

Additionally, it can be important to remove differences in spatial CoV between scanners. This was added recently: the scanner mean CBF is subtracted, the average spatial SD per scanner is obtained, and the ratio between this spatial SD and the average spatial SD across scanners is divided out, after which the scanner mean CBF is added back.
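For a single subject, this spatial-CoV equalization amounts to the following sketch (illustrative variable names, not ExploreASL's; CBFim is the subject's CBF image, ScannerMeanCBF and ScannerSpatialSD are scalars for that subject's scanner, and RefSpatialSD is the average spatial SD over all scanners):

    Centered   = CBFim - ScannerMeanCBF;                         % remove the scanner mean
    Rescaled   = Centered ./ (ScannerSpatialSD / RefSpatialSD);  % divide out the SD ratio
    CBFcorrect = Rescaled + ScannerMeanCBF;                      % add the scanner mean back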

Average scanner images for Siemens 3D GRaSE (left) & Philips 2D EPI (right)

Intensity bias fields for these scanners to equalize the mean CBF

LoadStatsData.m This function loads \\analysis\*.mat files to use them as covariates if they are in the correct data format. This is the same function that is called to initialize covariates for ExploreASL, as discussed above; it is simply repeated here to load any newly generated covariates that need to be used as covariates or iterated along in the statistical part.

CreateFileReport.m generates a report of all available and missing images. This overview can be useful to check, first, whether all imported and NIfTI-converted data are complete and, second, whether all processing has been completed.

CompareImagesWithMean.m compares individual images with study templates, either created for the processed study or existing templates. This atlas-based comparison uses a root-mean-square (RMS) metric and is experimental; it will be evaluated soon.

check_values.m This function stores ASL and M0 quantification parameters in \\analysis\dartel\PARAMETERS\Modality_quantification_parameters.csv, as well as an outlier overview in \\analysis\dartel\PARAMETERS\Modality_outliers_parameter.csv. Outliers are defined as parameter values that lie outside mean ± 3 SD. Differences in scale slopes or other parameters could indicate a scanner switch or software update. TE differences are important because they result in quantification differences that ExploreASL does not account for automatically, as well as differences in geometric distortion. It is best to use the CreateBiasfield function discussed above to account for these quantification differences.

SummarizeAcquisitionTime.m This function summarizes the times at which the ASL scans were acquired - if at least 95% of the population has this value - and saves them in \\analysis\AcquisitionTime.mat to serve as a covariate; a histogram overview is printed in \\analysis\dartel\STATS\AcquisitionTime.jpg. It can be important to check whether this timing correlates with the parameter under research; e.g. if group differences are studied, we would not want the patients to be scanned in the evenings and the controls in the mornings, because of the potential diurnal CBF rhythm [25].

Example of scan time histogram from a 22q11 ASL study (n=27). Most participants were scanned in the early morning, followed by late morning and a small subset in the late afternoon. The technicians apparently had a coffee break between 14h00 and 15h00.

ANALYZE_volume_statistics.m This function summarizes the tissue volumes as stored in \\analysis\TISSUE_VOLUME\Tissue_volume_SubjectName.csv into \\analysis\dartel\STATS\volume\volume.csv, which contains the segmentation volumes of GM, WM and CSF, all in liters. It also stores \\analysis\GM_ICVRatio.mat, which contains the ratio GM/ICV (intracranial volume), as well as \\analysis\GMWM_ICVRatio.mat, which contains the ratio (GM+WM)/ICV. ICV is calculated as GM+WM+CSF. Hence the first parameter indicates GM atrophy only, whereas the second parameter reflects whole-brain parenchymal atrophy.
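In short, assuming scalar tissue volumes (in liters) from the segmentation, the two ratios amount to (illustrative variable names):

    ICV           = GMvol + WMvol + CSFvol;   % intracranial volume
    GM_ICVRatio   = GMvol / ICV;              % sensitive to GM atrophy only
    GMWM_ICVRatio = (GMvol + WMvol) / ICV;    % sensitive to whole-brain parenchymal atrophy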

ANALYZE_motion_statistics.m This function summarizes motion statistics, which it acquires from \\analysis\dartel\MOTION_ASL\motion_correction_NDV_SubjectName_SessionName.mat, where all motion parameters for the individual ASL scans are stored. It generates a summary figure in \\analysis\dartel\MOTION_ASL\Overview_motion_pair-exclusion.jpg, which shows the distribution of mean motion and the percentage of excluded control-label pairs across the population. It then summarizes (and, if appropriate, compares mean motion between covariates/sets) the median, mean and MAD position and motion, the percentage of exclusion and the maximum tValue (for motion-based frame exclusion), which are stored in \\analysis\dartel\STATS\motion\motion_Meas_.csv

Overlap_T1_ASL.m creates an image with the mean population ASL image (yellow) and the mean population pGM image (red) projected over it, stored as \\analysis\dartel\check_overlap_T1_ASL\check_overlap_T1_ASL_POST.jpg. The example below shows that a misalignment still exists, in this case due to 2D EPI geometric distortion.

Summarize_estimated_resolution.m This function summarizes the estimated spatial resolutions from all ASL images.

PowerCalculationStats.m This function checks for ASL time-series and splits them into two halves to compute the intra-scan within-subject coefficient of variation (wsCV); from this reproducibility parameter it computes the minimal statistically detectable effect size, given the study sample size. It also computes this based on the between-subject coefficient of variation (bsCV).
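The sketch below shows a generic minimal-detectable-effect calculation based on such coefficients of variation; it illustrates the principle only and is not necessarily the exact formula implemented in PowerCalculationStats.m. All values are example assumptions:

    n      = 27;     % study sample size (example)
    wsCV   = 0.05;   % within-subject CoV from the split-half time-series (example)
    bsCV   = 0.15;   % between-subject CoV (example)
    zAlpha = 1.96;   % two-sided alpha = 0.05
    zBeta  = 0.84;   % power = 80%

    % paired (within-subject) design: minimal detectable change, as fraction of mean CBF
    MinDetectChange = (zAlpha + zBeta) * sqrt(2) * wsCV / sqrt(n);

    % two-group cross-sectional design with n subjects per group, based on bsCV
    MinDetectGroupDiff = (zAlpha + zBeta) * bsCV * sqrt(2/n);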

The memory mapping in Load4DMemMapping.m and CreateMasks.m has now been removed, and data and atlas loading happens in Get_ROI_Statistics.m. Several standard atlases (MNI structural [ref], Harvard-Oxford [ref], Hammers [ref]) are stored in //ExploreASL/Maps/Atlases; each requires a .nii or .nii.gz atlas image and a csv-file.

Here, atlases are first loaded and handled by xASL_AtlasForStats.m, which converts them to the xASL MNI space (1.5x1.5x1.5 mm3). An atlas can be either a 3D image with each ROI stored as an integer (e.g. 1, 2, 3, 4), or a 4D stack of images in which each ROI is a binary or probabilistic map. A probabilistic map is robustly converted to integers by thresholding at the 50th percentile of the ROI (RobustMap2Mask.m). Atlases can be provided manually; by default, the total GM and deep WM atlases are loaded.
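Conceptually, converting a 4D probabilistic atlas into a single integer-label image can look like the following sketch (not the actual RobustMap2Mask.m code; ProbAtlas4D is an assumed [X Y Z nROI] array, and overlapping ROIs are simply assigned on a first-come basis here):

    [nX,nY,nZ,nROI] = size(ProbAtlas4D);
    LabelIM = zeros(nX,nY,nZ);
    for iROI = 1:nROI
        ProbIM = ProbAtlas4D(:,:,:,iROI);
        Thresh = median(ProbIM(ProbIM>0));            % 50th percentile within the ROI
        LabelIM(ProbIM>=Thresh & LabelIM==0) = iROI;  % assign integer label
    end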

The csv files should contain the names of the regions; you can check the existing atlases for examples. The csv-file should have the same name as the .nii file of the atlas. If this file doesn't exist, regions will be named ROI_1, ROI_2, etc. All ROIs are calculated bilaterally (left and right averaged, "B") as well as unilaterally (left "L" and right "R" separately), with these B, L and R suffixes appended to the ROI names.

CreateROIvalues.m then converts all atlases and data it loads into single columns, defined by a whole-brain mask (symbols.WBmask, xASL_IM2Column.m and xASL_Column2IM.m). It automatically checks the number of sessions to load for the data type that was requested (by the string S.InputDataStr).
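The image-to-column conversion concept amounts to the following sketch (illustrative only, not the actual xASL_IM2Column.m / xASL_Column2IM.m code; CBFim and BrainMask are assumed inputs):

    WBmask = BrainMask > 0;          % logical whole-brain mask, same size as the image

    % image -> column: keep only within-mask voxels as a single column
    Column = CBFim(WBmask);

    % column -> image: restore the 3D image from the column
    RestoredIM         = zeros(size(WBmask));
    RestoredIM(WBmask) = Column;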

CreatePVEcROI.m expands the ROIs in the atlas centrally (towards the WM) as far as required to reach an included pWM volume of at least 0.4 * the included pGM volume. It uses pGM & pWM MNI templates that have been smoothed to 4x4x4 mm3, a common effective spatial resolution of the average ASL sequence.
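A rough sketch of this expansion idea, under the assumption of simple iterative dilation into WM-dominant voxels (imdilate/strel require the Image Processing Toolbox; the actual CreatePVEcROI.m implementation may differ):

    ROI    = AtlasIM == iROI;               % logical ROI from the atlas
    WMside = pWM > pGM;                     % voxels where WM dominates (the "central" direction)
    se     = strel('sphere', 1);            % 1-voxel dilation kernel

    while sum(pWM(ROI)) < 0.4 * sum(pGM(ROI))
        Grown = imdilate(ROI, se) & (WMside | ROI);   % grow only towards the WM
        if isequal(Grown, ROI), break; end            % stop if no further growth is possible
        ROI = Grown;
    end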

All data are then analyzed with PVC=0 (no partial volume correction), PVC=1 (single-compartment PVC, using pGM information only) and PVC=2 (full, dual-compartment PVC, including pGM & pWM information). For PVC=0, the ROIs are multiplied with the individual thresholded tissue segmentations (GM or WM), which is effectively the simplest form of partial volume correction. The individual pGM segmentation is masked at pGM>50%, which still contains a lot of partial volume, but the threshold needs to be this low because at the effective spatial resolution of ASL very few voxels would remain otherwise. The individual pWM segmentation is masked at pWM>80% and threefold eroded to avoid signal contamination from the GM [ref Mutsaerts]. The median value per ROI is used rather than the mean, because empirically CBF is often non-normally distributed spatially within a ROI within a single participant. For the spatial CoV, the calculation remains parametric (SD/mean).
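For PVC=0, the masking and ROI statistics described above amount to the following sketch (assumed variable names CBFim, pGM, pWM and ROI; imerode requires the Image Processing Toolbox):

    GMmask = pGM > 0.5;                      % individual GM mask (pGM>50%)
    WMmask = pWM > 0.8;                      % individual WM mask (pWM>80%)
    for k = 1:3
        WMmask = imerode(WMmask, strel('sphere',1));  % threefold erosion against GM contamination
    end

    GMmedianCBF = median(CBFim(ROI & GMmask));                           % median (not mean) CBF per ROI
    sCoV        = std(CBFim(ROI & GMmask)) / mean(CBFim(ROI & GMmask));  % spatial CoV stays parametric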

For PVC=2 a matrix inversion is used that takes all CBF, pGM & pWM values within the ROI and solves the partial volume equation CBF = pGM*CBFgm + pWM*CBFwm + pCSF*CBFcsf, where CBFcsf is assumed to be 0. For PVC of the spatial CoV, a pseudo-spatial CoV is calculated on a pseudo-CBF image (pGM+0.3*pWM), and the spatial CoV is divided by this pseudo-spatial CoV. The pseudo-spatial CoV lies around 0.24 and will differ with atrophy.
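A minimal sketch of this dual-compartment regression within one ROI, solved as a least-squares problem (not the exact ExploreASL implementation; CBFim, pGM, pWM and ROI are assumed inputs):

    idx   = find(ROI & isfinite(CBFim));     % all voxels within the ROI
    A     = [pGM(idx) pWM(idx)];             % partial volume design matrix (CSF term assumed 0)
    beta  = A \ CBFim(idx);                  % least-squares solution, i.e. the "matrix inversion"
    CBFgm = beta(1);                         % PV-corrected GM CBF
    CBFwm = beta(2);                         % PV-corrected WM CBF

    sCoV      = std(CBFim(idx)) / mean(CBFim(idx));   % uncorrected spatial CoV within the ROI
    PseudoCBF = pGM(idx) + 0.3*pWM(idx);              % pseudo-CBF values within the ROI
    PseudoCoV = std(PseudoCBF) / mean(PseudoCBF);     % pseudo-spatial CoV (~0.24)
    sCoV_PVC  = sCoV / PseudoCoV;                     % PV-corrected spatial CoV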

If S.SubjectWiseVisualization==1, *.jpg images will be created with all ROIs overlaid on the input data (e.g. CBF) images. Note that this takes a lot of time.

When ROIs or lesions are provided with the native-space subject images (registered with either the FLAIR or the T1), a collection of subject-wise masks will automatically be created in //analysis/dartel/AtlasesStudySpecific/*.dat. These can be selected when running Get_ROI_statistics.m to create statistics for these regions. ROIs can be anything, e.g. a significant region from an fMRI analysis for which perfusion should be analyzed. Lesions are treated the same as ROIs, except that they will also be used for cost-function masking in the segmentation and registration (see structural module). This is important in the case of significantly structurally deforming lesions such as tumors, infarcts, etc. For each of the provided ROIs or lesion masks, the following regions will be created (in this example, a manually annotated cortical microinfarct):

From left to right: 1) intralesional, 2) perilesional, 3) 1 & 2 combined, 4) contralateral of 3, 5) extralesional ipsilateral hemisphere, 6) contralateral hemisphere.

The recent xASL_CreateOutput_PDF.m investigates which data were tracked in xASL.mat (a copy of the symbols struct) to print in this PDF, as single-subject QC. QC parameter values and QC images that are output in the modules/image-processing steps are stored in the symbols table, which is copied into xASL.mat (to become xASL.json with BIDS). The parameter values are stored as symbols.Output.[Category](nSub).key = value, e.g. symbols.Output.Structural(1).ID = 'Sub-001' in the example above. Likewise, the QC images are stored as symbols.Output_im(nSub).[Category]{nIM}, e.g. symbols.Output_im(1).Structural{1} = [121,145,121];

xASL_Collect_QC_info.m collects software information and all other previously stored information. This is a temporary quick-and-dirty solution to bundle all stored parameters, which are saved as population overviews (e.g. CSV files).

This also runs Create_QC_parameters.m, which creates the parameters mentioned above under "QC_diff_template", also by comparing the individual ASL images with a CBF template.

To run this part for older pipeline runs, remove the visualization*.status and 999*.status files, after which a rerun will capture the images required for the PDF creation.