# FAQ/miscellaneous

## Parametric vs. non-parametric tests

A parametric test can be more sensitive than a non-parametric test, but only when its assumptions are valid. One of these assumptions is that the data are normally distributed, which you can verify with the Shapiro-Wilk test for normality; it is better, however, to first inspect the data visually, which shows both whether the data are Gaussian distributed and whether there are large outliers.

A non-parametric test is always valid if you want to be on the safe side, but if your data are normally distributed you can gain sensitivity with a parametric test. For comparability, avoid mixing the two approaches within one study, as some studies do.

Also, note that the Shapiro-Wilk test tries to falsify the H0 that the data are normally distributed. With small sample sizes it can fail to do so simply because it lacks the power to disprove H0. Therefore, with small sample sizes (e.g. n<16) it can be regarded as good practice to use non-parametric statistics in any case.
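As an illustration of this workflow, here is a minimal Python sketch using scipy (the n<16 cutoff and the `choose_test` helper are only the rule of thumb above, not part of ExploreASL):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_data = rng.normal(loc=50, scale=10, size=40)        # roughly Gaussian values
skewed_data = rng.lognormal(mean=3.0, sigma=0.8, size=40)  # clearly non-Gaussian

# Shapiro-Wilk: H0 = "the data are normally distributed".
# A small p-value rejects normality; a large p-value only means we
# failed to reject H0, which proves little when n is small.
_, p_normal = stats.shapiro(normal_data)
_, p_skewed = stats.shapiro(skewed_data)

def choose_test(data, n_min=16, alpha=0.05):
    """Rule of thumb from the text: non-parametric for small n or non-normal data."""
    if len(data) < n_min:
        return "non-parametric"
    _, p = stats.shapiro(data)
    return "parametric" if p >= alpha else "non-parametric"
```

Remember that the visual inspection recommended above should come first; the test only supplements it.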

Another solution for non-normally distributed data is a log-transformation. Some Rician distributions (e.g. WM lesion or infarct volumes in a population) are better log-transformed and then tested parametrically.
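A quick Python sketch of what such a transformation does (the numbers are illustrative only; the point is that the skewness moves toward zero after the log):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Skewed, strictly positive data, e.g. hypothetical WM lesion volumes in ml
lesion_vol = rng.lognormal(mean=1.0, sigma=1.0, size=100)

# Log-transform; if zeros can occur, add a small offset first
log_vol = np.log(lesion_vol)

skew_before = stats.skew(lesion_vol)  # strongly right-skewed
skew_after = stats.skew(log_vol)      # much closer to symmetric
```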

## FLAIR matrix scanned too wide and ill defined

This case had a FLAIR matrix that was too wide, approximately twice as wide as the T1w matrix. What worked was to reduce the matrix size in that dimension by 50%, by removing every odd voxel.
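In code, such nearest-neighbor downsampling is a simple stride over the affected dimension (a hypothetical sketch with made-up matrix sizes; note that the voxel size in the NIfTI header must then be doubled accordingly):

```python
import numpy as np

# Hypothetical FLAIR volume whose first dimension is twice as wide as intended
flair = np.zeros((512, 256, 30))

# Keep every other voxel in that dimension (i.e. drop the odd-indexed ones),
# halving the matrix size from 512 to 256
flair_fixed = flair[::2, :, :]
```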

## Reorientation and manual AC-PC alignment

Some artifacts or bias fields can get images caught in strange "local minima": a wrong position from which any further movement of the images worsens the cost function. If this happens, it can help to restore the original image orientation and manually align the image to AC-PC before re-running ExploreASL. The registration is then still performed automatically, but the initial alignment is much better, which prevents the algorithm from getting caught in local minima.

The function RestoreOrientation.m restores the original orientation. Then run spm_jobman and select SPM->Util->Display Image. Select the image you want to realign manually and click "origin"; this shows the current position where the NIfTI orientation matrix believes the anterior commissure (AC) should be. Then move the crosshair to the correct position of the AC and click "set origin", followed by "Reorient", where you should provide all images to which this manual AC realignment should be applied: the current image, but also all images that should remain in registration with it. The example below shows the ICBM T1w template, where the center of the image (real-world position [0 0 0] mm, voxel position [61 85 49] at 1.5x1.5x1.5 mm resolution) is the AC.

## Unix code for viewing QC images side-by-side

To view QC images, use the qiv function (try `qiv --help`). E.g. for jpg images: `qiv *jpg`; for either jpg or png images: `qiv *g`.

Alternatively, you can use eog (Eye of GNOME): `eog *g`. This also shows an overview of images and allows more browsing/thumbnail functionality, but it can give errors when a file is not an image.

## Something has gone wrong in one of the modules; do I have to repeat the whole pipeline?

No, the separate modules are self-contained; they track their progress with '*.status' files in the //analysis/lock folder.

Note that in Unix/Linux the default Matlab keyboard shortcuts may not work. In that case, choose the "Windows Default Set" in the keyboard shortcut preferences.

## Invisible DICOM structure

The DICOM structure of your files/folders can be invisible, i.e. the scan names etc. are stored in the DICOM header fields rather than in the file or folder names. Your DICOMs may be anonymized as e.g. IM_0001, IM_0002, etc. Try running ConvertDicomFolderStructure_FewLayers.m or ConvertDicomFolderStructure.m, which reads the names from the DICOM fields and renames the directories accordingly.

## Philips software won't export ASL source images

Try the following export settings:

* ASL source images ON
* automatic subtraction OFF

## How should I use the Philips scale slopes?

The scale slope issue can be problematic, especially because of the different software versions and solutions (recently Philips even offered the possibility to apply one of the scale slopes to the image itself, making it difficult to disentangle whether this has already happened...). Traditionally, the idea was to have a scale slope that allows storing the data in 12 bits, and a second slope appeared to make these units visible on the scanner platform. While Rorden's dcm2nii almost always reads the scale slope (the smaller of the two values, around 10^-4), the rescale slope (the larger value, ranging from 2 to 10) needs to be taken from the DICOM header separately. Because both are stored in private fields, they can easily be removed by DICOM anonymization programs.

For the correct conversion, it is important to have the Philips DICOM dictionary ("catalog"), which tells which private field contains the scale slope. It is good practice to check these two scale slopes across the study population, since they can change from scanner to scanner or even with software updates; any consistent change in their values could hint at a scanner or software switch (which could also have slightly changed the ASL acquisition settings).
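A minimal sketch of the commonly described two-slope scaling chain (an assumption to verify against your own Philips documentation and DICOM dictionary; the argument names are illustrative, not actual DICOM tags):

```python
def philips_to_float(stored_value, rescale_slope, rescale_intercept, scale_slope):
    """Convert a stored pixel value to Philips floating-point units.

    displayed = stored * RescaleSlope + RescaleIntercept
    float     = displayed / (RescaleSlope * ScaleSlope)

    This is the commonly described two-slope chain; verify it for your
    software version before relying on it.
    """
    displayed = stored_value * rescale_slope + rescale_intercept
    return displayed / (rescale_slope * scale_slope)
```

With a zero intercept this reduces to stored / scale_slope, which is why a missing (private-field) scale slope changes the image intensities by orders of magnitude.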

ExploreASL comes with Philips scale-slope tools in //ExploreASL/dicomtools, and the function ExploreASL_import_data.m takes care of the conversion. It uses multiple dcm2nii versions (some work better for enhanced DICOMs, others for earlier formats such as Philips PAR/REC), and it separately obtains the DICOM-header scale slopes and stores them in ASL4D_parms.mat or M0_parms.mat. This will all improve soon with the conversion to BIDS!

## Are the "covariate data" (e.g. age.mat, sex.mat, Site.mat, cohort.mat in the derivative data root directory) obligatory?

No, these are not obligatory. Age and sex can be important for blood-T1 modeling. "Age" can also be important to estimate the longitudinal deformations if the time intervals between visits differ (e.g. second visit 1 year after the first, third visit 10 years after the first: the SPM longitudinal registration then needs more "freedom", and the first two visits should remain more similar). "Site" is important to model site effects.

Other than that, the covariates are useful to make sure that the results.csv export contains the correct population data, e.g. if a clinical colleague wants to check these results against his/her own to make sure nothing went wrong. Please note also that the recent European privacy regulations strongly advise against sharing privacy-sensitive data; it is best to study these regulations, and e.g. use "age" rather than "birth date", and deface the T1w. Also note that these files will be converted to the BIDS standard (i.e. a participants.tsv file) in the near future.

## How do I set up a "consensus/white paper" M0 sequence?

Copy the ASL sequence, then:

* disable labeling;
* disable background suppression;
* turn "dynamic series" off (we don't need multiple averages);
* set TR to 2 s (a true M0 would need a TR of 12 s, but we can model what this would have been by correcting the TR=2 s M0 image for incomplete T1 recovery);
* set NSA to 5 (for a bit more SNR; not strictly necessary, but it doesn't take much time).

The total scan duration should now be 10 s (perhaps +2 s dummy time, = 12 s).

NB: check that all other parameters (e.g. TE, FOV, resolution) are identical to the ASL sequence!
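The TR correction mentioned above can be sketched as follows (the saturation-recovery model and the T1 value are illustrative assumptions, not the exact ExploreASL implementation):

```python
import math

def correct_m0_for_short_tr(m0_measured, tr_s, t1_s=1.24):
    """Estimate the fully relaxed M0 from a short-TR M0 scan.

    Assumes a simple saturation-recovery model:
        M0_measured = M0_true * (1 - exp(-TR / T1))
    The default T1 of 1.24 s is an illustrative GM value at 3T.
    """
    return m0_measured / (1.0 - math.exp(-tr_s / t1_s))

# At TR = 2 s the correction factor is about 1.25; at TR = 12 s the
# tissue is essentially fully relaxed and the factor approaches 1.
```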

## Why are there median and mean ROI CBF values reported as output?

Within regions of interest (ROIs), CBF is often not normally distributed. For this reason, the median CBF of a region can be a better summary than the mean. NB: this concerns the median CBF per ROI, not the median over the total population. The partial volume correction (PVC) by default operates parametrically, i.e. it estimates the mean CBF per region. Therefore, for historical reasons, the output is the median CBF without PVC and the mean CBF with PVC.

In the future, it may be better to use a more robust PVC (which would be non-parametric, and thus similar to the median CBF without PVC), or to use the mean CBF without PVC. Indeed, when reporting both values (with and without PVC), it is best to use the same approach (parametric or non-parametric) for both.
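A tiny illustration of why the median can be a better ROI summary (made-up CBF values with one vascular outlier):

```python
import numpy as np

# Hypothetical ROI voxel values in ml/100g/min, with one vascular outlier
roi_cbf = np.array([45., 48., 50., 52., 55., 58., 60., 180.])

mean_cbf = roi_cbf.mean()        # pulled upward by the outlier
median_cbf = np.median(roi_cbf)  # robust to the outlier
```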

## What does "untreated" mean?

After quantification, the CBF maps are corrected for vascular outliers; historically this was called "vascular treatment". The CBF maps with the suffix "_untreated" were not corrected; the other CBF maps were corrected by identifying significantly negative CBF values and significantly extreme positive CBF outliers. This procedure is described in the manual of the ASL module.

In the future, we aim to improve this using principal component analysis methods; this is still under development.

The origins of these vascular effects are probably both instrumental (i.e. due to the sequence) and physiological; we are still investigating these as well, to enable proper corrections.

## What is the difference between different partial volume correction (PVC) methods?

Traditionally, PVC is performed as a linear regression of GM and WM components, aka the "Asllani method". In reality, masking the CBF values for sufficiently high GM content (from the T1w GM segmentation, i.e. pGM), e.g. pGM>70%, is already a form of PVC; this is referred to as PVC==0 in ExploreASL. Another simple PVC is a single-component regression, which simply normalizes by GM content, i.e. within a certain ROI calculating sum(CBF)/sum(pGM); this is referred to as PVC==1 (the "1" stands for the single component). The "Asllani method" is a two-component regression and is hence referred to as PVC==2. Note that in this explanation, we assume that you are interested in the GM CBF.
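The three options can be contrasted on simulated voxels (a hedged sketch: the ROI-wide least-squares fit below is a simplification of the Asllani method, which uses a local regression kernel, and the ground-truth GM/WM CBF values are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
pGM = rng.uniform(0.2, 1.0, n)   # GM partial volume per voxel
pWM = 1.0 - pGM                  # simplification: only GM and WM in each voxel
true_gm, true_wm = 60.0, 20.0    # simulated ground truth (ml/100g/min)
cbf = true_gm * pGM + true_wm * pWM + rng.normal(0.0, 2.0, n)

# PVC==0: mask voxels with high GM content, then average
gm_pvc0 = cbf[pGM > 0.7].mean()

# PVC==1: single-component normalization within the ROI
gm_pvc1 = cbf.sum() / pGM.sum()

# PVC==2: two-component linear regression (here ROI-wide, not the local kernel)
A = np.column_stack([pGM, pWM])
gm_pvc2, wm_pvc2 = np.linalg.lstsq(A, cbf, rcond=None)[0]
```

On these simulated data, only the two-component regression recovers the GM CBF without bias: masking still includes some WM signal, and the single-component normalization attributes all WM signal to GM.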

## Would the halo artifact (the rim) potentially be addressed by masking non-brain tissue/skull stripping, since most of these seem to originate at the border between GM and CSF/skull?

Motion artifacts can cause blurring of CBF between GM and WM. More importantly, when motion occurs between the control and label images, the control-label subtraction is performed with a shift between these two images. Although this is most clearly visible as a rim around the brain, the artifact affects the whole brain, so removing the rim wouldn't really help. The rim is especially visible in ASL sequences without background suppression, as background suppression diminishes the harmful effect of motion between the control and label images.

## How can I perform a basic visual QC?

The easiest visual QC is to classify ASL images into 1) CBF contrast, 2) vascular contrast, and 3) noise/artifact contrast. This should be decided visually, but a rough indication can be given by the spatial CoV (around 0-0.5 for CBF contrast, around 0.5-1 for vascular contrast, above 1-1.5 for artifacts). A CBF analysis (looking for regional neurovascular-coupling effects) should then be done with all images from (1), and a spatial CoV analysis (looking for changes in vascular sufficiency) with all images from (1) and (2).
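This triage can be sketched as follows (the thresholds are only the rough indications given above, and the final call should remain visual):

```python
import numpy as np

def spatial_cov(cbf_voxels):
    """Spatial coefficient of variation within a (GM) mask: std / mean."""
    v = np.asarray(cbf_voxels, dtype=float)
    return v.std() / v.mean()

def classify_contrast(scov):
    """Rough triage using the indicative spatial-CoV ranges from the text."""
    if scov < 0.5:
        return "CBF contrast"
    if scov < 1.0:
        return "vascular contrast"
    return "noise/artifact contrast"
```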

## Where can I find basic Matlab programming tutorials?

There are many good and freely available Matlab tutorials, e.g. you can try THIS.

Q 1) While checking the rCBF values in some regions, I noticed that occasionally there are values for bilateral ROIs, e.g. the inferior temporal gyrus, anterior division, from the HO atlas, for which there is no value for the left and right hemispheres (please see the attached file). Any idea why this is the case?

A 1) Yes, this is correct. The algorithm was made flexible by simply taking any atlas with any ROIs, splitting each ROI into a left and a right part, and keeping the original ROI (== bilateral). So if the original ROI was on the right only, it will still be split into left and right, but the left will be meaningless and the right should be the same as the bilateral (the unsplit) ROI. Does that make sense?

Also, if a region is too small to expect reasonable SNR, the ROI-average CBF is not calculated and is reported as "NaN".

Q 2) There seem to be some issues with parallelization. I followed your instructions (thanks for sharing that fantastic video), but there are some errors in the log files of each module (please see the attached file). These errors are from random subjects in a pool of 40 controls analyzed in parallel.

A 2) The population module cannot be parallelized, so the errors there make sense. A single thread should still continue to process them.

Note that ExploreASL by default outputs verbose messages, to keep the investigator informed about which processing is performed. Only the messages in red are real warnings; the others hint at what processing is being done.

* It says it skips the FLAIR bias-field correction -> this correction is used only with LST LGA, not with LPA or with a predefined WMH_SEGM.

* No lesion masks were found/used -> this concerns cost-function masking, which we didn't have here (e.g. no tumor or stroke masks).

* At some point x.mat cannot be loaded -- I don't know what that is? -> This is the structure used to save the pipeline parameters and internal memory, useful for provenance. If it cannot be loaded, it is recreated (but this could mean that not all provenance of all scan types is combined in this subject-specific file). Hence, this warning can be ignored at the beginning of processing of a subject (i.e. the structural module), but it should not be thrown after the structural module.

## How should I correct the bias field of the FLAIR?

FLAIR images can have huge bias fields, but these can easily be modeled and removed before running an automatic WMH segmentation. However, this is problematic when a patient has a large WMH volume: the WMH then affect the modeled bias field, which makes the WMH less hyperintense on the FLAIR. This could be resolved by a multi-spectral segmentation, using the T1w hypointensities together with the FLAIR hyperintensities. Although this is available in LST, it has not been properly tested yet. Also, the T1w hypointensities are not exactly the same as the FLAIR hyperintensities, and their difference in appearance can even be informative in certain pathologies. Nevertheless, if you have a study in which participants or patients have a low WMH load, activating the additional FLAIR bias-field option in ExploreASL can help.

The figure below shows the LST automatic WMH segmentation without (left) and with (right) the additional bias-field correction, activated as an option in ExploreASL. Note that because of the strong bias field, the FLAIR images on the left are very bright at the frontal cortex, which the algorithm incorrectly segments as hyperintense lesions. When this bias field is removed, the image on the right shows that the WMH are properly segmented.