4 ASL module

The ASL module performs the following post-processing steps. In contrast with the T1 and FLAIR modules, there can be multiple ASL sessions per subject (referred to as ASL_1, ASL_2, ASL_3, etc.).

1) Motion correction (MoCo) 002_realign_ASL.status

This post-processing step performs motion estimation for sequences that have more than one frame. For example, the GE 3D spiral readout provides only a single perfusion-weighted image (PWI), whereas some Siemens 3D GRASE readouts provide two frames (one control, one label); in the latter case, this "motion correction" is the same as a single registration. With more control-label frames (such as with 2D sequences, or multiple 3D measurements), multiple images are rigid-body registered, which is referred to as motion correction. Since a NIFTI file can only store a single orientation matrix, the orientation matrices of all frames are stored in ASL4D.mat; the slight differences in orientation matrices between frames represent the motion correction. All motion parameters are stored in a file motion_correction_NDV_subject_session.mat within \\analysis\dartel\MOTION_ASL for reference. MoCo is still open for improvement.

Zig zag regressor

The blood intensity difference induced by labeling creates a "zig-zag" pattern in the motion time course. The solution is to regress this control-label intensity difference (i.e. [1 0 1 0]) out of the motion estimates, according to a previously published method 8.
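To illustrate, here is a minimal plain-MATLAB sketch of such a regression (simulated data and variable names are hypothetical, not ExploreASL code): the control-label regressor is removed from each motion parameter time course by ordinary least squares, leaving the remaining motion estimates intact.

```matlab
% Minimal sketch of the control-label zig-zag regression (simulated data).
nFrames      = 40;
motionParams = cumsum(0.05 * randn(nFrames, 6));               % slow drift (3 translations, 3 rotations)
zigzag       = repmat([1; 0], nFrames/2, 1);                   % control-label regressor [1 0 1 0 ...]
motionParams = motionParams + 0.1 * zigzag * ones(1, 6);       % add a zig-zag offset for illustration

X            = [zigzag - mean(zigzag), ones(nFrames, 1)];      % demeaned regressor + intercept
beta         = X \ motionParams;                               % least-squares fit, one column per parameter
motionClean  = motionParams - X(:, 1) * beta(1, :);            % remove only the zig-zag component
```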

This figure shows an example of the separate head position courses for the different dimensions (note: this is not comparable to the QC images saved by ExploreASL, but serves illustration purposes only), for the same subject without (left) and with (right) the control-label zig-zag regressor. In the red line it is clear that there is a zig-zag pattern, which has been regressed out in the right image.

Motion spike removal

The idea of motion spike removal is that, however small the mean motion, any motion spikes (e.g. motion > mean+2SD) may still negatively affect the quality of the mean image 9, 10. Motion spikes are detected by a method introduced by Shirzadi et al. 11. After SPM12 motion correction is performed, all control-label pairs are sorted by motion. Then pairs are iteratively added to the time series, and a one-sample t-test is conducted to test for perfusion values significantly different from zero. Whereas Shirzadi et al. use the number of voxels with p<0.05 (which can be argued to be an arbitrary threshold that does not account for multiple comparisons, etc.), ExploreASL instead takes the mean t-stat as metric. In subjects with motion artifacts, there will be a point where the mean t-stat no longer increases with sample size but decreases because of the variance induced by motion. This is called "threshold-free" or "data-driven" motion spike exclusion. The mean t-stat of perfusion significantly different from zero is similar to the calculation of voxel-wise temporal SNR.

Regularization of motion spike removal

It could still be that, for whatever reason, the mean t-stat decreases at the end while the actual image quality is not significantly affected. Therefore, we introduced a "regularization parameter", which states by how much the mean t-stat should be reduced from the peak t-stat before control-label pairs are removed. E.g. if the sorted control-label pairs with the most motion decrease the mean t-stat by only a very small amount, it may not be necessary to remove them. This parameter (symbols.SpikeRemovalThreshold) is set to 0.01 by default. This essentially makes the motion spike removal more conservative, i.e. it will only remove the "bad apples".
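The following is a hedged sketch of this data-driven spike exclusion, including the regularization step (simulated data; the exact ExploreASL implementation may differ in details such as the t-test masking and the interpretation of the threshold).

```matlab
% Hedged sketch of threshold-free motion spike exclusion with regularization.
nVoxels = 5000;  nPairs = 30;
dPWI    = 1 + randn(nVoxels, nPairs);                 % simulated control-label difference pairs
pairNDV = [0.1 * rand(1, 25), 1 + rand(1, 5)];        % last 5 pairs have large motion
dPWI(:, 26:30) = dPWI(:, 26:30) + 5 * randn(nVoxels, 5);   % motion adds variance

thr        = 0.01;                                    % symbols.SpikeRemovalThreshold (default)
[~, order] = sort(pairNDV, 'ascend');                 % least motion first
meanTstat  = nan(1, nPairs);
for iN = 2:nPairs                                     % at least 2 pairs needed for a t-stat
    sel           = dPWI(:, order(1:iN));
    tVoxel        = mean(sel, 2) ./ (std(sel, 0, 2) / sqrt(iN));   % one-sample t per voxel
    meanTstat(iN) = mean(tVoxel);
end
[peakT, nKeep] = max(meanTstat);
% regularization: keep adding pairs as long as the mean t-stat stays within
% thr*peakT of the peak (one possible interpretation of the parameter)
for iN = nKeep+1:nPairs
    if meanTstat(iN) >= (1 - thr) * peakT
        nKeep = iN;
    else
        break
    end
end
keepPairs = sort(order(1:nKeep));                     % retained control-label pairs
```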

When motion spikes are detected, the original ASL file, ASL4D.nii, is copied to despiked_ASL4D.nii. ASL4D.nii is left intact, whereas the control-label pairs with motion spikes are removed from despiked_ASL4D.nii. In contrast with BOLD-contrast fMRI, interruption of the ASL time series by frame exclusion does not lead to artifacts related to incomplete T1 recovery. If no motion spikes are detected, ASL4D.nii will be used and no despiked_ASL4D.nii is created. MoCo results are saved for quality control in \\analysis\dartel\MOTION_ASL; in these plots, the x-axis shows the control-label frames. A net displacement vector (NDV) is computed from the root mean square (RMS) of the three translations and three rotations. This facilitates the assessment of head position or head motion with a single value rather than six values 12, 13.
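As a worked illustration of such a single summary value, the sketch below combines the six rigid-body parameters into one displacement number; the conversion of rotations to millimeters via an assumed effective head radius of 50 mm and the exact normalization are assumptions, not necessarily the formula used by ExploreASL.

```matlab
% Hedged sketch of a net displacement vector (NDV) for one frame.
radius = 50;                                       % assumed effective head radius [mm]
p      = [0.3 -0.1 0.2 0.002 -0.001 0.004];        % [tx ty tz rx ry rz] in mm and rad (example)
transl = p(1:3);                                   % translations [mm]
rot    = p(4:6) * radius;                          % rotations as arc length on the sphere [mm]
NDV    = sqrt(sum([transl, rot].^2));              % single summary value for this frame
```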

The first graph shows the exclusion based on mean t-stat optimization (y-axis) over the inclusion of control-label (perfusion-weighted) pairs sorted by motion (x-axis). The blue line shows the mean t-stat, whereas the red line indicates the excluded control-label pairs. The second graph shows the root mean square (RMS) of position (absolute position, first row), motion (relative position, i.e. the position difference between two consecutive frames, second row) and the exclusion matrix (third row). The third image shows the averaged pair-wise subtraction for an increasing number of control-label pairs, sorted by motion. The upper right is the optimal image (with exclusion of pairs that decrease the mean t-stat) and the lower right is the image with all control-label pairs included. The difference between the images with and without exclusions is not always clearly visible; in most cases the exclusion of motion spikes is expected to help group analyses rather than individual cases.

In the following example, there are some motion peaks visible.

This example shows that this method can also be used to remove artifacts that appear as "large motion". In this case, in the lower right of the third image, an Ingenia fat suppression artifact is visible, which is removed by the motion spike exclusion (top right of the third image).

The image \\analysis\dartel\MOTION_ASL\Overview_motion_pair-exclusion.jpg shows an overview of the motion exclusion for the total population, with the mean motion of each subject on the x-axis and the percentage of excluded pairs on the y-axis. In this example, most subjects had a mean motion of around 0.1 mm, and as the mean motion increases, it becomes more likely that there are motion spikes that should be excluded.

0025_register_ASL 0025_register_ASL.status

In this step, the ASL image is registered to the T1 image.

Optimal registration choices

We compared different registration choices with ExploreASL, as described elsewhere 14. The main challenges with ASL registration are, for 3D readouts, the low effective resolution (because of the large point spread function and the sensitivity to motion, which adds blurring/smoothing), which can leave too little structure/high spatial frequency content in the image for registration; and, for 2D readouts, the geometric distortion. In short, the current best method is registering the perfusion-weighted image to the pGM image (PWI-pGM) using a rigid-body transformation. A non-linear transformation can perform better, but this needs to be validated. The alternative for 2D readout images with many vascular artifacts is to register the mean control or M0 image to the T1w image (M0-T1w), in which case BET masking helps. Currently, ExploreASL uses rigid-body PWI-pGM registration without masking.

PM: a promising new registration algorithm is boundary-based registration (BBR), which may perform even better but is currently not implemented in SPM. Also, a better high-resolution CBF reference image could be created by including pWM and creating a gradient, to simulate WM CBF in the high-resolution reference image.

A temporary average perfusion-weighted map is created from the ASL4D.nii (or despiked_ASL4D.nii) time series and background noise is removed; the resulting image is saved as \\analysis\SubjectName\ASL_1\temp_mean_PWI.nii. This temporary PWI is then registered to the pGM, if this concerns the first session. The transformation is applied to the orientation matrix in the NIFTI header or in .mat files, and is applied as well to the other sessions and to the M0 scans. The other sessions are not registered to the pGM, but only to the first session. This should make the registration between the different sessions and the T1 more consistent: any ASL-T1w misalignment will be similar for consecutive sessions, which is important for session comparisons.
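A hedged sketch of such a rigid-body registration using standard SPM12 calls is shown below (filenames are hypothetical; this is not necessarily the exact code path ExploreASL uses).

```matlab
% Hedged sketch: rigid-body registration of the temporary mean PWI to the
% pGM map using standard SPM12 functions (filenames are hypothetical).
refIm = spm_vol('c1T1.nii');                  % pGM, the reference image
srcIm = spm_vol('temp_mean_PWI.nii');         % temporary mean PWI, the moving image
x     = spm_coreg(refIm, srcIm);              % estimate 6 rigid-body parameters
M     = spm_matrix(x);                        % 4x4 transformation from the estimate
% apply the transformation to the NIfTI header only (no reslicing yet)
spm_get_space(srcIm.fname, M \ srcIm.mat);
```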

003_reslice_ASL 003_reslice_ASL.status

This part reslices the ASL images, including any ancillary images (other than M0). All transformations are combined into a single interpolation. Reslicing into common/MNI/standard space is performed before quantification.

Slice gradient

A slice gradient is created as reference for quantification (for 2D readouts only) as \\analysis\dartel\slice_gradient_SubjectName_ASL_1.nii. For 2D readouts, the slices are acquired consecutively and the PLD increases for each slice. To account for this in the quantification in the upsampled MNI space, we need to know the slice positions in that same space. Therefore, this slice gradient image is transformed identically to the ASL images, which provides a standard space reference of slice positions. In the initial ASL space, these images contain integers (i.e. slice numbers 1-17). However, after motion correction and registration the subtle position differences between frames will smooth the integer slice numbers into a slice number gradient. This process is visualized below (ignore the intensity changes); the left image shows the slice gradient, the right image a CBF map for reference.

original space (e.g. slicenumbers 1-22)

the same after motion correction, rigid-body registration to MNI and resampling
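As a minimal sketch of what the native-space slice gradient contains before these transformations (dimensions hypothetical):

```matlab
% Minimal sketch: the native-space slice gradient stores the slice index at
% every voxel of that slice (dimensions are hypothetical).
dims          = [64 64 17];                        % in-plane matrix x number of slices
sliceGradient = zeros(dims);
for iSlice = 1:dims(3)
    sliceGradient(:, :, iSlice) = iSlice;          % integer slice numbers 1..17
end
% After the same motion correction and registration as the ASL frames,
% interpolation turns these integers into a smooth slice number gradient.
```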

Later (in the visualize part), a QC image is created for this slice gradient, which also allows a visual check of the relation of the ASL image to its field of view (FoV), and a comparison of the FoV between sessions and/or participants. This is an example of the slice gradient saved in \\analysis\dartel\SLICE_GRADIENT_CHECK\slice_gradient_SubjectName_ASL_1_reg.jpg, containing in standard space the coronal and sagittal PWI slices (first row), the slice gradients (hotter colors for increasing slice numbers, second row) and the ASL image projected over the slice gradients. The sagittal angulation mostly changes between participants because of differences in the positioning of the head in the coil, which can be seen when scrolling through these images. This could affect the SNR distribution of the ASL image because of coil inhomogeneities.

Mean_control

Subsequently, the ASL time series themselves are upsampled to the common space. Then a mean_control image is created as reference. If no background suppression was employed, this image can be used as M0. This is beneficial since it has perfect registration with the ASL image by definition. The processing of the mean_control image as M0 image is the same as the processing of a true M0 image, which is described below.

ASL filter

If requested (via the symbols.BILAT_FILTER parameter in DATA_PAR.m), the upsampled ASL time series is filtered, which is useful to reduce the influence of artifacts such as the Ingenia fat suppression artifact. In short, an intermediate time series is created by filtering the original time series with a spatial low-pass and temporal high-pass filter. From this intermediate time series, outliers are defined and removed from the original time series.
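A simplified sketch of this idea is shown below (simulated data; the actual bilateral filter in ExploreASL is more sophisticated, so treat this only as an illustration of the spatial low-pass/temporal high-pass outlier detection).

```matlab
% Simplified sketch of spatial low-pass / temporal high-pass outlier detection.
nX = 32;  nY = 32;  nZ = 10;  nT = 40;
asl4D  = randn(nX, nY, nZ, nT);                            % simulated ASL time series
spatLP = asl4D;
for iT = 1:nT                                              % spatial low-pass per time point
    spatLP(:, :, :, iT) = smooth3(asl4D(:, :, :, iT), 'box', 5);
end
tempHP  = spatLP - mean(spatLP, 4);                        % temporal high-pass (remove slow component)
sdMap   = std(tempHP, 0, 4);                               % temporal SD of the filtered signal
outlier = abs(tempHP) > 3 * repmat(sdMap, [1 1 1 nT]);     % flag strong deviations
aslClean = asl4D;
aslClean(outlier) = NaN;                                   % remove flagged samples from the original series
```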

While this filter strategy may also be beneficial to remove undesired vascular signal and improve the quantification of ASL, this has not been formally validated yet. Therefore, as good practice, we recommend visually inspecting the temporal SD images (as stored in \\analysis\dartel\SD_SNR_images) after a first run of ExploreASL (without the filter enabled), to select the ASL scans that should be re-processed with this filter enabled. This will only repair the ASL scans that need repairing, without touching the ASL scans without artifacts.

There is a global version of this filter (symbols.BILAT_FILTER==1) and an extracortical version (symbols.BILAT_FILTER==2). In the latter case, the temporally high-frequency but spatially low-frequency signal changes are estimated outside the GM (in WM and CSF) and applied to the GM, to preserve the original GM signal while accounting for the image artifacts. The filter is applied to the native space intermediate ASL time series that are used for ASL-T1w registration and ASL-ASL (between-session) registration, as well as to the time series in standard space used for the final CBF image.

This example shows a successful image repair. The mean CBF (left) and temporal SD (middle) maps of the time series show the artifact before the bilateral filter (lower row); the artifact is almost completely removed after filtering (upper row). The filtered-out signal (upper right) shows the artifact. Note the smoothness of this image, since this filter looks for spatially smooth but temporally high-frequency artifacts. The artifact shown here is atypically large; the artifact shown below is more typical.

Pair-wise subtraction

After the upsampled time series are used to create the mean_control image (multiple frames only) and slice gradient image (2D readout only), they are pair-wise subtracted and averaged. The mean image is saved as \\analysis\dartel\PWI_SubjectName_ASL_1.nii, the temporal SD and temporal SNR images as \\analysis\dartel\SD_SubjectName_ASL_1.nii and \\analysis\dartel\SNR_SubjectName_ASL_1.nii, with images (created in the visualize part below) saved as \\analysis\dartel\SD_SNR_images\SD_map_SubjectName_ASL_1.jpg and \\analysis\dartel\SD_SNR_images\SNR_map_SubjectName_ASL_1.jpg
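A minimal sketch of the pair-wise subtraction and the derived temporal SD and SNR maps (simulated data; the control-label order is assumed here, see the sign detection below):

```matlab
% Minimal sketch of pair-wise subtraction and temporal SD/SNR maps.
nX = 32;  nY = 32;  nZ = 10;  nPairs = 20;
asl4D   = 10 + randn(nX, nY, nZ, 2 * nPairs);          % simulated [control label control label ...]
control = asl4D(:, :, :, 1:2:end);
label   = asl4D(:, :, :, 2:2:end);
dPWI    = control - label;                             % pair-wise perfusion-weighted differences
PWI     = mean(dPWI, 4);                               % mean perfusion-weighted image
tSD     = std(dPWI, 0, 4);                             % temporal SD
tSNR    = PWI ./ tSD;                                  % temporal SNR
```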

Typical examples of a temporal SD image in standard space, without (above) and with (below) a typical fat suppression artifact. Note that the noise around the brain is only shown within the original FoV; outside the original FoV only Not-a-Number (NaN) values exist, shown here in black. The most variable regions, as shown here, are the eyes, the circle of Willis, the cerebral arteries (ACA, MCA and PCA) and the sagittal sinus. Slight misalignment from motion is visible as an edge of higher variance around the brain, especially at the front. Some spots around the head are variance from extracranial arteries on the skull. Minimal differences in physiological perfusion fluctuations between GM and WM are visible; they are masked by the noise.

Example of a temporal SD map from a child with sickle cell disease, a patient population with unusually high ASL SNR because of higher CBF and shorter ATT (due to anemia) and longer blood T1. The vasculature is much more complete and the difference in physiological perfusion variance between GM and WM is clearly visible. However, this GM-WM difference could simply represent the GM-WM CBF ratio (a factor 2-5).

Automatic sign detection (Check_control_label_order.m)

Before averaging, the sign (control-label order) is automatically detected. This is important as the storage order of these frames is sometimes switched, depending on scanner and software version (i.e. some are stored as control-label-control-label, others as label-control-label-control). The idea is simple: a perfusion-weighted image is created and checked for whether it is positive or negative. The control image should have positive signal (blood and tissue have the same positive magnetization) whereas the label image should have negative signal for blood (the blood magnetization has been inverted). Therefore, control minus label should give a mean positive perfusion image. If it gives a mean negative perfusion image, the order was actually [label control label control] rather than [control label control label]. This image is masked using the mean control image thresholded at 50% of its maximal intensity, after removal of extreme values. Then the median perfusion within this mask is obtained rather than the mean, since the median is more stable against artifacts. This has been tested in >1000 ASL images and never failed.
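Continuing the hypothetical variables from the sketch above (control, dPWI, PWI), a hedged sketch of this check could look as follows; the exact masking and extreme-value removal in ExploreASL may differ.

```matlab
% Hedged sketch of the automatic control-label order check.
meanControl = mean(control, 4);                             % mean control image
ctrlSorted  = sort(meanControl(:));
robustMax   = ctrlSorted(round(0.99 * numel(ctrlSorted)));  % "maximum" after removing extreme values
mask        = meanControl > 0.5 * robustMax;                % mask at 50% of the maximal intensity
if median(PWI(mask)) < 0
    % the order was [label control ...]: flip the sign of the subtraction
    dPWI = -dPWI;
    PWI  = -PWI;
end
```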

0035_PreparePV 0035_PreparePV.status

This function smooths the tissue posteriors (pGM & pWM) in the ASL native space, but at the same high resolution (1.5x1.5x1.5 mm), to be used as partial volume maps. This ensures that the smoothness of the original ASL image is represented in standard space, which is needed for a valid partial volume error correction (PVEc). Here are results from two example 3D spiral subjects, which show that the GM-WM CBF ratio approaches the expected ratio of 2-5:

CBF results [GM, WM, GM/WM ratio] for the two example subjects (one row per subject):

Without PVEc:
61.6  31.8  1.94
55.0  33.3  1.65

ROI PVEc without smoothing the posteriors in native space:
65.8  32.4  2.0
58.5  34.0  1.7

ROI PVEc with smoothing the posteriors in native space (and afterwards transforming them to standard space, where the ROI PVEc is performed):
84.4  27.3  3.1
76.8  28.8  2.7

First, this function upsamples the ASL image to the same space but at 1.5x1.5x1.5 mm resolution (UpsampleNII.m). This also adds 50 lines in each dimension to the field of view (FoV), to avoid low values near the edges being smoothed into the pGM.

Secondly, it reslices the native pGM & pWM images to this orientation, which rotates them into the ASL space. This is necessary because the ASL images can be angulated by default for some acquisitions. This is especially the case for 2D readouts, where the main focus is to get the cerebrum inside the FoV (e.g. for optimal background suppression). This can lead to an angulation as shown in the example below.

This shows an ASL image (left, rotated by MRIcron), a pGM resampled into the ASL-but-high-resolution space (middle), which is smoothed afterwards (right). Note that here some slices of the pGM have been cut off because of the limited FoV of this 2D EPI ASL scan; this is now avoided by the extension of the FoV by 50 lines in each dimension.

To smooth the pGM & pWM to the same smoothness as the ASL image, the ASL sequence smoothness is estimated, based on the native resolution of the ASL scan and effective point spread function (PSF) estimations, which include not only the acquisition PSF, but also motion sensitivity and smoothing in the reconstruction phase. After smoothing, these PV pGM & pWM maps are transformed into standard space, where they are stored as \\analysis\dartel\PV_pGM_SubjectName.nii & \\analysis\dartel\PV_pWM_SubjectName.nii
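A hedged sketch of this smoothing step is shown below: the additional kernel follows from the quadrature difference between the estimated effective ASL resolution and the resolution of the upsampled pGM (example numbers from the 3D spiral case below; the SPM call and filenames are illustrative assumptions).

```matlab
% Hedged sketch: smooth the upsampled pGM to the estimated effective ASL resolution.
fwhmASL  = [5 5 9.5];                              % estimated effective ASL resolution [mm FWHM]
fwhmPGM  = [1.5 1.5 1.5];                          % resolution of the upsampled pGM [mm FWHM]
fwhmKern = sqrt(max(fwhmASL.^2 - fwhmPGM.^2, 0));  % additional smoothing kernel [mm FWHM]
% apply with SPM (filenames are hypothetical):
spm_smooth('PV_pGM_native.nii', 'PV_pGM_native_smoothed.nii', fwhmKern);
```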

For the same example as above, this shows in standard space the original pGM (left), the pGM that has been smoothed in native ASL space (middle) and the CBF image. It can be appreciated that the pGM has been smoothed "diagonally", because of the angulation of the ASL scan with respect to standard space. Note also the vascular artifacts on the ASL image, because of the flipped sign of regions with negative vascular signal; this is discussed in the quantification part. Although these may look worse, they are helpful since they will introduce noise into a VBA or ROI analysis rather than an extreme negative value.

Examples of the PV pGM maps in standard space for several sequences:

3D spiral, native voxel-size 1.9x1.9x3.8 mm, estimated effective resolution ~5x5x9.5 mm FWHM

3D GRASE, native space voxel-size 1.9x1.9x4 mm, estimated effective resolution ~3.6x3.6x6.9 mm FWHM

2D EPI example readout 1, native space voxel-size 3x3x7 mm, estimated effective resolution ~3.7x3.7x7.6 mm FWHM

2D EPI example readout 2, native space voxel-size 3.8x3.8x4 mm, estimated effective resolution ~4.6x4.6x4.3 mm FWHM

On later consideration, these estimations are open to debate; the current estimations assume that the 2D EPI smoothness equals the acquisition resolution, whereas the PSF estimations for 3D GRASE and 3D spiral were taken from a previous paper 15.

004_realign_reslice_M0 004_realign_reslice_M0.status

This step is performed if there is a separate M0 scan; otherwise it is skipped. This step effectively treats the M0 the same as the PWI (alignment, rigid-body registration to the ASL PWI), except that additionally all high spatial frequencies are smoothed away from the M0 image.

M0 quantification (QuantifyM0.m)

The M0 image is corrected for incomplete T1 relaxation by dividing it by (1-exp(-TR/T1_GM)) 1. T1_GM is 1240 ms at 3T 16, so for a 4-second TR (as with a mean control image without background suppression) this would be a 4% increase in M0 signal. The M0 image is corrected for scaleslope and rescaleslope (if available, e.g. for Philips scans).
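A worked example of this correction factor, using the values from the text:

```matlab
% Worked example of the incomplete T1 relaxation correction for M0.
T1_GM   = 1240;                              % [ms] at 3T
TR      = 4000;                              % [ms], e.g. a mean control image without background suppression
corrFac = 1 / (1 - exp(-TR / T1_GM));        % ~1.04, i.e. a ~4% increase in M0 signal
M0      = 3.4e6;                             % example M0 value (see the parameter files below)
M0_corr = M0 * corrFac;                      % corrected M0
```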

M0 smoothing (ImageProcessM0.m)

The division by M0 can decrease SNR (add noise), which can be substantial 17. Therefore, it is important to smooth the M0 substantially (e.g. 12 mm FWHM Gaussian). In addition, this removes the necessity of a good registration of M0 to ASL, which can be difficult, especially for 3D scans with background suppression. Both the 3D acquisition (large point spread function, PSF) and the background suppression remove the between-tissue contrast required for good registration. This may affect the GM-WM differentiation on the CBF map. As Helen Beaumont proposes, a single M0 number would be a good alternative. Smoothing the M0 map aggressively essentially leaves a single M0 number plus a bias field that corrects any coil inhomogeneity. The partial volume effect that this has is inverse to the PWI partial volume effect, so it partly cancels it out.

Before smoothing, it is important to mask the M0 to remove the influence of the high fluid signal in the ventricles and the low extracranial signal. After several comparisons, the best way to achieve this appeared to be to create a brainmask from the pGM and pWM and erode it. Four erosions were optimal for an M0 image and six erosions for a mean control image, since the latter is perfectly registered by definition. Then the M0 image is iteratively smoothed with small kernels, which is faster than a single smoothing with a large kernel. This smoothing also extrapolates outside the mask, i.e. into the regions of the ventricles and outside the brain. In this way, there are no extremely high extracranial values, which are otherwise very difficult to separate from true perfusion and remove.
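The following is a hedged sketch of this masking and iterative NaN-aware smoothing (simulated data; the kernel sizes, iteration count and erosion implementation are assumptions, and this is only a simplified stand-in for the ndnan smoothing described below).

```matlab
% Hedged sketch of M0 masking and iterative NaN-aware smoothing with extrapolation.
nX = 32;  nY = 32;  nZ = 16;
M0   = 100 + 10 * randn(nX, nY, nZ);                     % simulated M0 image
pGM  = rand(nX, nY, nZ);  pWM = rand(nX, nY, nZ);        % simulated tissue posteriors
mask = (pGM + pWM) > 0.5;                                % brainmask from pGM + pWM
for iEro = 1:4                                           % 4 erosions for a separate M0 scan
    mask = convn(double(mask), ones(3, 3, 3), 'same') >= 27;
end
M0(~mask) = NaN;                                         % exclude ventricular/extracranial signal

kernel = ones(3, 3, 3) / 27;                             % small smoothing kernel
M0s    = M0;
for iIter = 1:20                                         % iterative small-kernel smoothing
    vals = M0s;  vals(isnan(vals)) = 0;
    wgt  = double(~isnan(M0s));
    M0s  = convn(vals, kernel, 'same') ./ convn(wgt, kernel, 'same');
    % 0/0 = NaN: voxels still out of reach stay NaN; the others are smoothed
    % within the mask and extrapolated just outside it, layer by layer
end
```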

This figure shows how the ndnan smoothing function operates on a square of normally distributed noise surrounded by NaNs (A). B = A smoothed while ignoring the NaNs; the square is smoothed towards a single value, and the NaNs are untouched and do not influence the square. C = extrapolation of B; the script takes the outer values of the square and extrapolates/smooths these into the NaNs adjacent to the square; however, this does not change the square itself. D = the combined operation of B and C, i.e. smoothing the square while ignoring NaNs and at the same time extrapolating/smoothing the outer values of the square into the NaNs. E = D - C. Note some high intensities at the edge in (A) that are preserved when combining smoothing with extrapolation (D) but not when smoothing first and then extrapolating (B and C).

These example Figures show from left to right sagittal slices of the original M0 image, the eroded brainmask, the masked M0 and the processed (smoothed and extrapolated) M0. Note that, although we lose the GM/WM differentiation, we have a robust smooth biasfield that is SNR optimal, has no registration errors, has no high ventricular or low extracranial signal and will not lead to extremely high values outside the brain.

This Figure shows the same PWI image after division by a smooth M0 image without extrapolation (left) and a smooth M0 image with extrapolation (right). Note the high intensity rim on the back of the brain that is removed using the extrapolated smooth M0.

005_quantification 005_quantification.status

This step performs the quantification of the ASL scan according to the 2014 consensus paper 1. This single-compartment model assumes that the signal has decayed with the blood T1 only. Although the two-compartment model agrees better with the gold standard H2O-PET 18, the single-compartment model is a good approximation.

1) Prepare M0 image: this part loads the M0 and divides it by lambda;

2) Prepare PWI image: this part loads the PWI and corrects it for scale slopes if necessary. It checks whether the echo time (TE) of the M0 and PWI scans is the same (it should be, otherwise the geometric distortion & quantification may differ)

3) Load slice gradient if 2D: this part loads the slice gradient if appropriate and extrapolates this image, converting it into a PLD gradient using the initial PLD and the PLDslicereadout parameter (the time it takes to read out a single slice). For a 3D readout (which reads out the whole 3D volume at a single time point, instead of multiple slices consecutively), the PLD is the same across the brain.

4) CBF quantification: first this takes the assumed labeling efficiency values from simulations in the literature for CASL 4 and PASL 3 (PCASL falls under CASL). The labeling efficiency is further reduced by the background suppression 2.

5) Division by M0 and scale slopes: this divides the PWI by the M0 image and the PWI scale slopes

6) Vendor-specific quantification correction: this applies the quantification corrections that some vendors apply. This is correct in most cases, but if it is off, it is mostly a factor of 10 or 100 off, depending on the ASL version.

The average_quantify_ASL.m script takes Blood_T1art.mat (e.g. 1700 ms), Hematocrit.mat (e.g. 0.43), M0.mat (e.g. 3.4*10^6), LabelingEfficiency.mat (e.g. 0.85), Init_PLD.mat (e.g. 1800 ms), SliceReadoutTime.mat (e.g. 40 ms).

These values override the parameters specified in data_par.m. You can decide whether to use patient-specific values or averages per session or cohort, to make these ancillary quantification parameters more robust. The effect of the background suppression on the labeling efficiency is applied in addition.
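As an illustration, the sketch below applies the single-compartment consensus quantification with example parameter values taken from the text where available; the label duration tau and the dM/M0 ratio are assumptions made for this example only.

```matlab
% Sketch of the single-compartment (P)CASL quantification from the consensus paper.
lambda  = 0.9;                          % blood-brain partition coefficient [mL/g]
T1blood = 1.700;                        % blood T1 [s] (Blood_T1art, e.g. 1700 ms)
alpha   = 0.85;                         % labeling efficiency (background suppression correction applied separately)
tau     = 1.800;                        % label duration [s], assumed
PLDinit = 1.800;                        % initial post-labeling delay [s]
SRT     = 0.040;                        % SliceReadoutTime [s], 2D readout only
iSlice  = 10;                           % example slice of a 2D readout
PLD     = PLDinit + (iSlice - 1) * SRT; % slice-dependent PLD [s]
M0      = 3.4e6;                        % processed M0 signal
dM      = 0.007 * M0;                   % perfusion-weighted signal (assumed ~0.7% of M0)
CBF     = 6000 * lambda * dM * exp(PLD / T1blood) / ...
          (2 * alpha * T1blood * M0 * (1 - exp(-tau / T1blood)));
% CBF is in mL/100g/min (~71 for these example numbers)
```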

Space

Note that all the images that ExploreASL uses for the quantification have already been transformed into standard space by the previous reslice parts; hence ExploreASL performs the quantification in standard space. There are several reasons for this choice (assuming that we want to perform group statistics in standard space): 1) having all scans in standard space greatly facilitates quality control; 2) it saves one additional interpolation, since otherwise motion correction would be performed before quantification, followed by a transformation to standard space, whereas now all transformations are combined into a single interpolation; 3) any interpolation artifacts can be averaged out by the pair-wise subtraction (in 4D time-series ASL data), which would not be the case when a single quantified image is interpolated to standard space. We have compared quantification in native and standard space, and this was very reproducible; the single interpolation only provides less blurry images in standard space than the double interpolation.

7) Flipping of negative signal: this part flips the sign of negative signal. While we will still detect negative signal, flipping may not be the best way to deal with it; exclusion or filtering it out may be better.

Single-subject example of many negative and positive vascular artifacts

8) Compression of high macro-vascular signal

Extremely high values from macrovascular signal can greatly affect voxel-based analysis (VBA) statistics, although this nuisance may be reduced by smoothing. The optimal solution would be to model which part of this signal is macrovascular, and to distribute this signal over the distal region where it was destined to end up. However, this takes effort and optimization. For now, we simply attempt to find a robust way to deal with extremely high values.

This repair part is based on the distribution of voxel values. If there is a nicely homogeneous distribution across the GM (e.g. in a smooth 3D image), then we do not need to repair high intensities, and removal of the highest sorted intensities (e.g. > 99.9%) would unnecessarily remove signal. Basing the threshold on the SD is more robust. Since ASL PWI distributions in the GM are often skewed, ExploreASL uses the median and the median absolute deviation (MAD) as non-parametric alternatives to the mean and SD.

Instead of "hard clipping", which would set all values above a certain threshold to the same value and completely removes this variance, here we "compress" values (as happens in audio processing), which rescales values above a certain threshold rather than hard clipping them.

First we mask the PWI at the >80% sorted intensities. Then we select all voxels that are higher than median + 3 MAD, on the PWI image and on the PWI/M0 (CBF) image. Large bias fields (e.g. from coil inhomogeneity) will lead to large values on the PWI, and M0 division artifacts will lead to large values on the CBF image; vascular artifacts will lead to large values on both. Hence, the joint voxels (those common to both masks) are used. The mean CBF value within this joint mask is used as the CBF threshold.

Three 2D EPI examples (upper row) with low, moderate and relatively large vascular artifacts (images that have the appearance of an angiography should be excluded from analysis), with the mask where values > median + 3 MAD (lower row).

Instead of linear compression (such as dividing all values above the vascular threshold), non-linear compression performs better, since it works towards an asymptote. We can assume that vascular artifacts range up to 360 mL/100g/min for a mean CBF of 60 mL/100g/min 20. However, other image artifacts, such as those resulting from the division by M0, can be much higher. These should be limited by a non-linear compression, which entails taking an exponent of, rather than dividing, the values larger than the vascular threshold. Assuming a mean CBF of 60 mL/100g/min, a vascular threshold of 110 mL/100g/min, and that we want to bring the maximal vascular intensities of 360 back to 150 mL/100g/min, an exponent of ^0.675 empirically seems to work nicely, as simulated in the figures below.

These figures show how the compression works for different vascular thresholds, with compression exponent ^0.675. The horizontal and vertical lines are the maximal vascular values that we wish to bring back to 150. This seems a nice slope. Finding the ideal slope would involve a lot of trial and error, or modeling, but for now this appears to be a good compromise between keeping some variance and reducing the effect of vascular hyperintensities. Mind that we still smooth or average before statistics, so high values do not need to be completely suppressed. This treatment occurs after we have switched the sign of subzero CBF clusters, since these clusters may have contained very large negative values that, after the sign switch, need to be compressed as well.
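The sketch below applies one possible interpretation of this compression, in which values above the vascular threshold thr are re-mapped as thr + (value - thr)^0.675; this mapping is an assumption, chosen because it reproduces the example numbers in the text (360 is brought back to roughly 150 for a threshold of 110).

```matlab
% Hedged sketch of the non-linear compression of high macro-vascular signal.
thr     = 110;                                   % vascular CBF threshold [mL/100g/min]
expo    = 0.675;                                 % compression exponent
CBF     = [45 70 110 200 360 800];               % example voxel values [mL/100g/min]
high    = CBF > thr;
CBFcomp = CBF;
CBFcomp(high) = thr + (CBF(high) - thr).^expo;   % e.g. 360 -> ~152, 800 -> ~192
```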

9) Save files

Here, the treated CBF image is saved as \\analysis\dartel\qCBF_SubjectName_ASL_1.nii

and the CBF image before the vascular artifact reduction (both negative and high positive values) is stored as

\\analysis\dartel\qCBF_untreated_SubjectName_ASL_1.nii

Note that the treated image should be used for CBF calculations, but if the analysis concerns the original variance (e.g. the spatial CoV), then the untreated image should be used.

10) Computation of transit times

Here the FEAST calculation is performed if there are two sessions named 'non-crushed' (ASL_1) and 'crushed' (ASL_2). After these images are smoothed, they are divided, and the TT maps are calculated from the crushed/non-crushed ratio and saved as:

\\analysis\dartel\TT_SubjectName.nii.

006_visualize 006_visualize.status

In this final step, several images are printed for visual quality control.

\\analysis\dartel\ASL_CHECK contains the PWI images in grayscale and false color.

\\analysis\dartel\M0_CHECK contains M0 images in grayscale and false color.

\\analysis\dartel\M0_REG_ASL contains images to check the registration between M0 and ASL. These have an upper row with M0, a second row with ASL and a third row with the M0 scan in grayscale with the WM and CSF signal from the ASL scan overlaid in red. The red of the WM should fit centrally in the brain, whereas the CSF should fittingly surround the brain.

\\analysis\dartel\RAW_CONTROL_CHECK contains mean control images in grayscale and false color.

\\analysis\dartel\RAW_EPI_CHECK contains overviews of the complete time series, to quickly browse for any obvious artifacts.

\\analysis\dartel\SD_SNR_images contains SD and SNR images in grayscale and false color.

\\analysis\dartel\SLICE_GRADIENT_CHECK contains PWI images, slice gradients and both combined in the lower row.

\\analysis\dartel\T1_ASLREG contains ASL images in grayscale with the WM probability map (segmented from the T1 in the T1 module) overlaid, to check the registration between ASL and T1.