We use NIMH's AFNI package to analyse brain images. Tutorials can be found on NIMH's AFNI site.
AFNI tools generate two types of files:
.HEAD: header files that give information on how to read the BRIK files, the orientation of the image, etc., and
.BRIK: BRIK files, which contain the actual data
AFNI calls each 3D volume of values stored within a data set (one value per voxel) a sub-brick. To view information on the sub-bricks and to see the history of a file (how it was generated), run the AFNI tool 3dinfo on either the .HEAD or .BRIK file.
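For example, assuming a data set named 3109.run1+orig already exists (the name here is only illustrative):
3dinfo -verb 3109.run1+orig
The -verb flag asks 3dinfo for extra detail, such as slice time offsets, in addition to the sub-brick listing and the command history.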
By convention in AFNI, a .1d file is a one-dimensional file (that can contain many columns) in which values vary only along the time dimension (not space). In these files, each line is a time point in TR units. If our TR (time of repetition, or the time between acquiring images/slices) is 2 seconds, then each line contains information from 2 seconds later than the previous line.
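A made-up sketch of the first few lines of a one-column .1d file (each line is one TR, so with a TR of 2 seconds the four lines below cover the first 8 seconds of a run):
0
1
0
0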
A whole bunch of csh scripts (x.*) that pipe AFNI tools to analyze fMRI data.
x.doitall calls all the other scripts that will be used in preprocessing and analysing the data:
#!/bin/csh
./scripts/x.unbund $1
./scripts/x.to3d $1
./scripts/x.timeshift $1
./scripts/x.3dreg $1
./scripts/x.concat $1
./scripts/x.edge $1
./scripts/x.deconvolve $1
## at this point coregister anatomical images to functional images
## and normalise anatomies to standard space (Talairach)
./scripts/x.nlfit $1
./scripts/x.%AUC $1
./scripts/x.refit
./scripts/x.warp+blur $1
./scripts/x.t-test $1
Basically, x.doitall does two things:
PREPROCESS the data in these steps [read about some issues in data preprocessing]:
x.unbund
calls AFNI's tool TLv_static to convert raw data to images that AFNI can read. Note, not all scanner raw data need to be converted.
files created: unbund.out
x.to3d
takes all the individual slices and combines them into a 4-dimensional (space and time) data set using AFNI's tool to3d. It basically creates a low resolution image with a time series for each voxel.
files created: <subjNum>.run<runNum>
x.timeshift
uses AFNI's tool 3dTshift to make it seem like all the images are acquired at the same time even though in reality they are acquired one after another. In other words, this step aligns slices to the same temporal origin.
files created: <subjNum>.tsrun<runNum>
x.3dreg
uses AFNI's tool 3dvolreg to try and align all slices to each other (within and across runs), thus correcting for slight head movements
files created: <subjNum>.regtsrun<runNum>
x.concat
uses AFNI's tool 3dTcat to concatenate all the runs of the same condition together (piling up all the aligned stacks on top of each other)
files created: <subjNum>.<condLabel>runs
x.edge
uses an AFNI tool (???) and sometimes an in-house tool from our New York collaborators called 3dedge to detect the edges of the brain (so we're not looking at "significant" activations outside of the brain!)
files created: <subjNum>.ed.<condLabel>runs, <subjNum>.edmask.<condLabel>
coregistration and normalisation
Manually coregister in AFNI's GUI: "Define Datamode > Plugins > Nudge Dataset". We then normalize the anatomies to a standard space, again using AFNI interactively: first define the markers ("Define markers") that align the anterior and posterior commissures, then define the markers at the extremes of the brain, and finally transform into Talairach space.
Automated:
Strip skull from brain (AFNI's 3dIntracranial or 3dSkullStrip); see the example command after this list
Automatic coregistration tools (AFNI's 3dAnatNudge)
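As a rough sketch of the automated route (the data set names here are only placeholders), a skull strip could be run from the command line like this:
3dSkullStrip -input 3109.anat+orig -prefix 3109.anat.ns
The stripped anatomy can then be nudged into register with the functionals (e.g. with 3dAnatNudge) before the Talairach transformation.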
ANALYZE the preprocessed data:
x.deconvolve
basically does a multiple linear regression, using AFNI's tool 3dDeconvolve. It tries to fit a haemodynamic-shaped response at each time point where we expect a response because an event occurred. In theory, our analyses could just stop here, but we do a few more steps to take into account that each subject may have a slightly different activation level or baseline. It is a good habit to run 3dDeconvolve on an event-related design that you are thinking of running just to see whether your design is analyzable.
files created: <subjNum>.irf.<condLabel>, <subjNum>.<condLabel>.dec
x.nlfit
does a nonlinear regression using AFNI's tool 3dNLfim to fit a gamma-variate function to each voxel's estimated response.
files created: <subjNum>.gm.<condLabel>
x.%AUC
uses AFNI's general purpose tool 3dcalc to calculate an area under the haemodynamic curve that is adjusted to a baseline, sort of like an activation measure/estimate that we can associate with each voxel. Basically, percentage of area under the curve = (area between the baseline and the estimated response curve / area under the baseline) * 100.
The percentage area under the curve is analogous to the percentage of change score for block design experiments.
files created: <subjNum>.%AUC.<condLabel>
x.refit
uses AFNI's tool 3drefit to adjust the origin of the blood flow (%AUC) data sets so that they line up with each subject's anatomical data (see the -yorigin adjustments in the x.refit script below).
x.warp+blur
uses AFNI's tools adwarp and 3dmerge to stretch our subjects' brains into a standard Talairach brain and to smooth across voxels to take into account differences from brain to brain (spatial smoothing).
files created: <subjNum>.%AUC.<condLabel>.blur
x.t-test
uses AFNI's tool 3dttest
files created: ttest.<condLabel>
#!/bin/csh -f
cd $1/rfiles
rm -rf exp*
# Obtain the number of Pfiles in the current dir
set nump = `ls -1 r* | wc -l`
#loop over the Pfiles
foreach a (`count -d 1 1 $nump`)
# Get the name of the Pfile
set pfi = `ls -1 r* | head -n ${a} | tail -1`
echo Making directory exp$a
mkdir exp$a
# So we know which Pfiles created this data
touch exp${a}/made_from_$pfi
echo Unbundling Pfile $pfi with TLv
TLv_static -scale 1.25 -asis -noX11 $pfi exp$a/exp$a >> unbund.out
end
#!/bin/csh -f
cd $1/rfiles
echo to3d-ing $1
foreach exp (2 3 4 5 6)
cd exp$exp
to3d -2swap -skip_outliers -epan -session ../.. -geomparent ../../${1}.run1+orig.HEAD -prefix ${1}.run$exp -time:tz 164 19 2000 alt+z "exp*.*"
cd ..
end
Line 4: echo prints whatever follows on the same line to the terminal
Lines 5-9: We want to create 4-dimensional data sets for the images acquired from all our runs. Thus, the script uses the foreach block (ending at end) to run the to3d command on all images in all runs (in this example we have 6 runs; run 1 has presumably already been converted, since it serves as the geometry parent in the to3d call).
Line 5: exp is a variable that takes on a value of the next run number each time it goes through the foreach loop.
Line 6: If we are processing run number 4 (so that the variable exp has the value 4; the $ says to grab the value of the variable whose name follows), then this line changes into a directory called "exp4"
Line 7: This is the real meat of the script. AFNI's to3d takes all the individual slice images in the current exp directory and combines them into a single 4-dimensional (space and time) data set.
-2swap swaps the order of the bytes (some machines save bytes in a different order). With functional data, you'll see if you need to do this swap.
-prefix specifies the convention with which to3d should name the file(s) it creates. So if we are processing subject 3109's run 3, then the output files will begin with "3109.run3"
-time:tz 164 19 2000 says to read the time points first and then the slices (tz), that there are 164 time points, 19 slices, and that the TR (time of repetition, or the time between successive acquisitions) is 2000 msec or 2 seconds.
alt+z indicates that the slices were acquired in alternating (interleaved) order (slice 1, then 3, then 5, etc., then 2, then 4, then 6, ...) in the z-direction.
#!/bin/csh -f
cd $1
echo Timeshifting $1
if ( $1 == 3100 || $1 == 3109 || $1 == 3200 ) then
3dTshift -prefix ${1}.tsrun1 -tzero 0 ${1}.run1+orig.HEAD
3dTshift -prefix ${1}.tsrun2 -tzero 0 ${1}.run2+orig.HEAD
3dTshift -prefix ${1}.tsrun3 -tzero 0 ${1}.run3+orig.HEAD
else
3dTshift -prefix ${1}.tsrun4 -tzero 0 ${1}.run4+orig.HEAD
3dTshift -prefix ${1}.tsrun5 -tzero 0 ${1}.run5+orig.HEAD
3dTshift -prefix ${1}.tsrun6 -tzero 0 ${1}.run6+orig.HEAD
endif
Lines 5-13: if...else...endif says that if we are processing the data from subjects 3100, 3109 or 3200, then we want to run 3dTshift on files with one set of names (runs 1, 2 and 3). All other subjects have file names reflecting run numbers 4, 5 and 6.
-prefix ${1}.tsrun1 specifies to 3dTshift what it should call its output files. If we were processing subject 3100's data, then all files created by 3dTshift would begin with 3100.tsrun1
-tzero 0 says to align all slices to the zeroth timepoint.
${1}.run1+orig.HEAD is the input file (by convention, files with +orig are the original files as acquired, before any preprocessing or manipulation during analyses).
#!/bin/csh -f
cd $1
echo 3d Registration $1
# first spatially align all images within runs
foreach exp (1 2 3)
3dvolreg -Fourier -clipit -1Dfile ${1}run$exp -base 82 -prefix ${1}.regtsrun$exp ${1}.tsrun${exp}+orig.HEAD
end
rm ${1}.tsrun*
# then spatially align all images across runs
foreach exp (2 3)
3dvolreg -Fourier -clipit -1Dfile ${1}run${exp}b -base ${1}.regtsrun1+orig\[82\] -prefix ${1}.regtsrun${exp}b ${1}.regtsrun${exp}+orig.HEAD
end
rm ${1}.regtsrun[2-3]+orig*
Lines 7-9: foreach...end goes through all the runs (in this case we have runs 1, 2 and 3) and aligns all images within each of the runs. We usually take the middle image of the stack and use that as the reference or master image to align to.
-Fourier says to use Fourier interpolation
-clipit says to clip really large values (Fourier interpolation can overshoot and produce values outside the original range).
-1Dfile ${1}run$exp tells 3dvolreg to spit out a file that records how much it had to move the brain to align it at each time point. If we are processing data from subject 3109 and run 3, the file would be called "3109run3". This "1D file" has 6 columns, corresponding to the three rotations (roll, pitch, yaw) and the three translations (x, y, z). Each line is a separate time point (you can plot this file; see the note after this list).
-base 82 is 82 in this case because we had 164 volumes, and 82 refers to the middle of this stack of volumes (164 / 2 = 82). Equivalently, you can calculate the value to put here by dividing the duration of the experiment by 2 * TR.
Lines 14-16: foreach...end uses the middle image of the first run to align the images in the rest of the runs (so that all images in all runs are aligned to each other).
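A quick way to eyeball these motion estimates (the file name below is just the example from above) is AFNI's 1dplot, which graphs each column of a 1D file against time:
1dplot 3109run3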
#!/bin/csh -f
cd scan/controls/$1
echo Concatenating $1
3dTcat -rlt+ -prefix ${1}.singruns ${1}.regtsrun1+orig.HEAD ${1}.regtsrun2b+orig.HEAD ${1}.regtsrun3b+orig.HEAD
rm -f *regtsrun*
-rlt+ says that while concatenating runs within conditions, 3dTcat should remove linear trends from the data (trends that arise for whatever reason, whether because the scanner heats up or the subject's brain heats up; the reasons are not well understood). This is done by finding the best fitting line to any upward/downward trend and saving the residuals, or the deviations from the line. The + puts the mean back into the data.
#!/bin/csh -f
cd scan/controls/$1
echo Edge detecting $1
3dedge -prefix ${1}.ed.singruns -mask ${1}.edmask.sing ${1}.singruns+orig.HEAD
-mask ${1}.edmask.sing says to create a file that marks voxels inside the brain with a 1 and voxels outside the brain with a 0.
This step basically is data reduction for an event-related experimental design: we want to do multiple regression so that each of our gazillions of noisy voxel time series is fitted to our model of what an active/responding voxel should look like (the haemodynamic response function or IRF).
Thus, we have to tell the data reduction step where we expect a response because of an interesting point in our experiment, which we specify in a file called canonicals. 3dDeconvolve does this by shifting the events defined in the canonicals one time point at a time and estimating a coefficient at each shift (think of it as a scaling factor that fits each data point to our ideal haemodynamic function as closely as possible). This shifting is done 7 times (lags 0 through 7, i.e., 8 time points) because we think that a haemodynamic response can be captured in 16 seconds (8 * our TR, which is usually 2 seconds).
Each line in a canonical file is a time point, so the number of volumes you have for a subject should match the number of lines in your canonicals (time zero at line 1). Indicate that nothing interesting occurred at a particular time point with a 0. Anything interesting should be marked with a nonzero number (it does not matter what value as long as you mark different events relative to each other; values in canonicals are additive, so an event represented by 2 implies that it has twice as much expected activation as another event represented by 1). It is a good idea to create the canonicals for your event-related design before you run your experiment so you can run 3dDeconvolve on them to see whether your experiment will be analyzable.
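A made-up sketch of the first few lines of a two-column canonical file like ${1}.1d (column 0 holding the STOPsing events and column 1 the EOCsing events used below; the actual timings are experiment specific):
0 0
1 0
0 0
0 1
0 0
Here a STOP event occurred at the second TR and an error of commission at the fourth TR; every other time point is uninteresting.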
At this stage, we can also specify which time points in our data are too noisy and thus should be censored from analyses. Time points to be censored include (a) the edges of runs where concatenation occurred (we usually zero out the first four volumes of each run), and (b) subject head movements. Head movements can be identified by launching AFNI (afni > image, graph), choosing a random voxel and stepping through each time point in the time series graph while watching the image contrast. A huge jump in contrast as you move from one time point to the next indicates a head movement. We usually zero out the time point at which the jump in contrast occurs as well as the time point before and after it.
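The censor file follows the same one-line-per-TR convention: a 1 keeps a time point and a 0 censors it. A made-up fragment, censoring the volumes around a suspected head movement at the third TR, might look like:
1
0
0
0
1
1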
#!/bin/csh -f
cd $1
echo Deconvolving singruns $1
3dDeconvolve -input ${1}.ed.singruns+orig.HEAD -polort 1 -rmsmin 1.0 -fdisp 1000 \
-censor ../canonicals/censor.1d -concat ../canonicals/concat.1d \
-num_stimts 2 \
-stim_file 1 ../canonicals/${1}.1d\[0\] \
-stim_label 1 STOPsing \
-stim_minlag 1 0 \
-stim_maxlag 1 7 \
-iresp 1 ${1}.irf.STOPsing \
-stim_file 2 ../canonicals/${1}.1d\[1\] \
-stim_label 2 EOCsing \
-stim_minlag 2 0 \
-stim_maxlag 2 7 \
-iresp 2 ${1}.irf.EOCsing \
-fout -rout \
-bucket ${1}.sing.dec
Lines 6-20: The data reduction is done with AFNI's 3dDeconvolve. The backslash \ allows you to continue the same command on a new line. That is, csh will interpret the next line following \ as a continuation of the same command rather than as a new command. BE SURE THAT there is no whitespace after \; otherwise, you will get errors.
-polort 1 specifies the degree of the polynomial (straight line, quadratic curve, etc.) used to fit the baseline of the data. 1 indicates a line (so 3dDeconvolve tries to fit a straight line to the data in order to remove any linear drift). This is not always optimal, as drifts can sometimes be better fitted by a curve, thus requiring a higher-order polynomial. Bob Cox uses the following rule of thumb to help him decide what order of polynomial to use to optimally remove signal drift:
My ad hoc rule is to have about 1 baseline parameter per 150 s of imaging time, with a minimum of polort=2. (I'm assuming a normal TR in the range 1-4 s.) For example, for 7.5 minutes = 450 s, a polort of 3 should be used (450/150 = 3). For 14 minutes = 840 s, you might try polort=6 (840/150 = 5.6) etc.
To see examples of the effect of using different polorts, see this AFNI forum post
-rmsmin 1.0 says that if the root mean square is less than 1, don't process the data.
-fdisp 1000 says to print (to the screen) the results for any voxel whose F statistic is greater than 1000.
-censor ../canonicals/censor.1d specifies where to look for the censor canonical file (a 0 on a line means that time point is censored).
-concat ../canonicals/concat.1d specifies where to look for the file specifying the time points at which runs are concatenated. For example, if you have 3 runs and you acquired 164 volumes during each run, your concatenation canonicals should look like:
0
164
328
-num_stimts 2 specifies the number of canonicals we have. Notice that the number of times you call -stim_file...-iresp should correspond to the value you specify here. In this example, we are looking at correct inhibition ("STOP") and error of commission ("EOC") for one of our conditions (labeled "sing").
-stim_file 1 ../canonicals/${1}.1d\[0\] specifies where to look for the canonicals for our first stimulus (1). The \[0\] indicates the first column of the canonical file called ${1}.1d (so if we were processing subject 3109, the file would be called 3109.1d).
-stim_label 1 STOPsing lets us give the first stimulus (1) a label (STOPsing).
-stim_minlag 1 0 and -stim_maxlag 1 7 specify that for the first stimulus (1), we want the IRF to be estimated using 8 time points. If our TR is 2 sec, then this basically says that we expect the duration of a haemodynamic response to be captured in 8*2 = 16 seconds.
If we were using 3dDeconvolve for a block design experiment, the maxlag should be zero: -stim_maxlag 1 0 (no shifting of event canonicals needed).
-iresp 1 ${1}.irf.STOPsing tells 3dDeconvolve to print out the estimated impulse response function (IRF) into a file for our first stimulus (1). If we were processing subject 3109, this file would be called 3109.irf.STOPsing.
-stim_label 2...-iresp 2... does the same thing for our second stimulus (2, labeled EOCsing for error of commission in our "single" condition) as for our first stimulus.
-fout tells 3dDeconvolve to also include F statistics in its output.
-rout tells 3dDeconvolve to also include R^2 (proportion of variance explained) values in its output.
-bucket ${1}.sing.dec tells 3dDeconvolve to print out all its calculations in a bucket file (a file with lots of data). If we were processing subject number 3109, this bucket file would be called 3109.sing.dec.
This step does a nonlinear regression to try and model (using a gamma-variate function) the response in each voxel. We do this step so we do not constrain all haemodynamic responses to one prespecified shape. In our script, we specify a range within which each parameter in our gamma-variate function can vary, to take into account the possibility that not all voxels in every person have the same IRF shape. The ranges are based on Marc Cohen's published data on what a gamma-variate function for a responding voxel should look like (???). Notice that the fitting is done twice because in this example we have two stimuli (correct inhibition and error of commission) which we want to look at.
#!/bin/csh -f
cd $1
echo NLfitting $1 STOPsing
3dNLfim \
-input ${1}.irf.STOPsing+orig \
-ignore 0 \
-noise Constant \
-signal GammaVar \
-nconstr 0 -1000.0 1000.0 \
-sconstr 0 0 1 \
-sconstr 1 -1000.0 1000.0 \
-sconstr 2 8 9 \
-sconstr 3 0.15 0.45 \
-nrand 1000 \
-nbest 10 \
-rmsmin 1.0 \
-fdisp 1000.0 \
-bucket 0 ${1}.gm.STOPsing
echo NLfitting $1 EOCsing
3dNLfim \
-input ${1}.irf.EOCsing+orig \
-ignore 0 \
-noise Constant \
-signal GammaVar \
-nconstr 0 -1000.0 1000.0 \
-sconstr 0 0 1 \
-sconstr 1 -1000.0 1000.0 \
-sconstr 2 8 9 \
-sconstr 3 0.15 0.45 \
-nrand 1000 \
-nbest 10 \
-rmsmin 1.0 \
-fdisp 1000.0 \
-bucket 0 ${1}.gm.EOCsing
cd ..
-ignore 0 says to ignore the first 0 time points, i.e., use all of them (a larger number would skip that many initial images).
-noise Constant says to model noise as a flat line. This is a reduced model (best fitting constant).
-signal GammaVar says to model the signal with a gamma-variate function. Note that this model does not include the post-stimulus undershoot.
-nconstr 0 -1000.0 1000.0 says that the range of our noise (n in nconstr) or reduced model cannot be more than 1000.0 above nor 1000.0 below the reduced model (our flat line).
-sconstr 0 0 1 says that t0 in our gamma-variate (signal) model (the s in sconstr stands for signal) should be 0-2 seconds (0 to 1 in TR units).
-sconstr 1 -1000.0 1000.0 specifies that our scaling factor or k in our gamma-variate function should be between -1000 and 1000.
-sconstr 2 8 9 says that the rise (r) of the gamma variate function should be between 8 and 9.
-sconstr 3 0.15 0.45 says that the fall (b) of our gamma-variate model should be between 0.15 and 0.45.
-nrand 1000 says to randomly generate 1000 candidate parameter sets (gamma-variate curves) that fall within the constraints above.
-nbest 10 says to then take the 10 best fitting of those 1000 as starting points for the nonlinear optimization.
-rmsmin 1.0 says to stop fitting a curve if the root mean square is less than 1.0.
-fdisp 1000.0, as in 3dDeconvolve, says to print to the screen the results for voxels whose F statistic is greater than 1000.
-bucket 0 ${1}.gm.EOCsing says to write all of 3dNLfim's calculations into a bucket file (an all-encompassing data file) with the specified name; the 0 tells 3dNLfim to use its default set of sub-bricks.
To interactively change these parameters in afni: plugins > set # > set & keep???. Then graph > opt > double plot > overlay. Then graph > opt > Tran 1D > LSqFit. This will overlay a red line (with the parameters as defined) on the time series.
When doing deconvolution, we lose the intercepts and slopes of any linear trends. That is why, when we calculate the percentage of area under the response curve, we have to readjust for any linear trends in our time series.
#!/bin/csh -f
cd $1
echo Computing $1 percentAUC STOPsing
3dcalc -fscale \
-a2 ${1}.gm.STOPsing+orig \
-b8 ${1}.gm.STOPsing+orig \
-c0 ${1}.sing.dec+orig \
-d1 ${1}.sing.dec+orig \
-e2 ${1}.sing.dec+orig \
-f3 ${1}.sing.dec+orig \
-g4 ${1}.sing.dec+orig \
-h5 ${1}.sing.dec+orig \
-expr "((step(a)-step(-a))*b*100)/((((c+(82*d))+(e+(82*f))+(g+(82*h)))/3)*8)" \
-prefix ${1}.%AUC.STOPsing
3drefit -fim ${1}.%AUC.STOPsing+orig.HEAD
echo Computing $1 percentAUC EOCsing
3dcalc -fscale \
-a2 ${1}.gm.EOCsing+orig \
-b8 ${1}.gm.EOCsing+orig \
-c0 ${1}.sing.dec+orig \
-d1 ${1}.sing.dec+orig \
-e2 ${1}.sing.dec+orig \
-f3 ${1}.sing.dec+orig \
-g4 ${1}.sing.dec+orig \
-h5 ${1}.sing.dec+orig \
-expr "((step(a)-step(-a))*b*100)/((((c+(82*d))+(e+(82*f))+(g+(82*h)))/3)*8)" \
-prefix ${1}.%AUC.EOCsing
3drefit -fim ${1}.%AUC.EOCsing+orig.HEAD
cd ..
Here, we tell 3dcalc to calculate an expression (-expr) and where to find the values for the variables in the expression (-a..., -b..., -c...).
-a2 ${1}.gm.STOPsing+orig says that the value for variable a in our expression (following -expr....) can be found in sub-brick number 2 of the 3dNLfim bucket file. If we do a 3dinfo on this bucket file, we will see that sub-brick 2 corresponds to k of our gamma function.
-b8 ${1}.gm.STOPsing+orig says that the value for variable b in our expression can be found in sub-brick number 8 of the 3dNLfim bucket file. If we do a 3dinfo on this bucket file, we will see that sub-brick 8 corresponds to the area under the curve of our gamma function.
-c0 ${1}.sing.dec+orig says that the value for variable c in our expression can be found in sub-brick number 0 of the 3dDeconvolve bucket file. A 3dinfo on this bucket file tells us that sub-brick 0 corresponds to intercept of the linear trend in the time series of run 1.
-d1 ${1}.sing.dec+orig says that the value for variable d in our expression can be found in sub-brick number 1 of the 3dDeconvolve bucket file. A 3dinfo on this bucket file tells us that sub-brick 1 corresponds to slope of the linear trend in the time series of run 1.
-e2 ${1}.sing.dec+orig and -f3 ${1}.sing.dec+orig are the intercept and slope of the linear trend in the time series of run 2. Note that because our example experiment here had 3 runs, there is a set of intercept/slope values for each run.
-g4 ${1}.sing.dec+orig and -h5 ${1}.sing.dec+orig are the intercept and slope of the linear trend in the time series of run 3.
-expr "((step(a)-step(-a))*b*100)/((((c+(82*d))+(e+(82*f))+(g+(82*h)))/3)*8)" . We basically are taking the area under the curve (b), finding out whether it is a positive or negative response ((step(a)-step(-a)), dividing that by the area under the baseline (which is adjusted for any linear trends in all the runs ((((c+(82*d))+(e+(82*f))+(g+(82*h)))/3)*8)). All of this is then multiplied by 100 to give us a percentage value. In detail:
(step(a)-step(-a) takes the scaling factor (k) of our fitted gamma function, and finds out whether it's positive or negative.
b was our already calculated area under the curve above the baseline.
intercept + (midpoint * slope) is the value of the linear-trend baseline at the middle of a run. In our example experiment here, we had 164 volumes, so the midpoint is total number of volumes in a run / 2 = 82.
(((c+(82*d))+(e+(82*f))+(g+(82*h)))/3) calculates the mean baseline value (at the midpoint of a run, averaged over the 3 runs in this example).
Multiplying this mean baseline value by 8 calculates the area under the baseline (height x length). The length is 8 because we estimated the haemodynamic function at 8 points, or 16 seconds long.
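To make the expression concrete, here is a worked example with made-up numbers: suppose the fitted k is positive (so step(a)-step(-a) = 1), the area under the estimated response curve is b = 40, and the baselines of the three runs at their midpoints are c+(82*d) = 100, e+(82*f) = 110 and g+(82*h) = 90. The mean baseline is (100+110+90)/3 = 100, the baseline area is 100*8 = 800, and the percentage area under the curve is (1*40*100)/800 = 5%.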
#!/bin/csh
cd 3100
3drefit -yorigin 121 *%AUC*HEAD
cd ../3109
3drefit -yorigin 120 *%AUC*HEAD
cd ../3199
#no refitting
The warped file resulting from this step is what we use for group analysis.
#!/bin/csh
cd $1
echo Warping and Blurring $1
adwarp -apar $1.anat+tlrc.HEAD -dpar $1.%AUC.STOPsing+orig.HEAD
3dmerge -1blur_rms 3.00 -prefix $1.%AUC.STOPsing.blur $1.%AUC.STOPsing+tlrc.HEAD
adwarp -apar $1.anat+tlrc.HEAD -dpar $1.%AUC.EOCsing+orig.HEAD
3dmerge -1blur_rms 3.00 -prefix $1.%AUC.EOCsing.blur $1.%AUC.EOCsing+tlrc.HEAD
gzip *BRIK
mv *%AUC.*blur* ../Group_results/.
cd ..
Do a t-test against zero across subjects.
#!/bin/csh
cd $1
3dttest -prefix ttest.STOP -base1 0 -set2 *.%AUC.STOPsing.blur+tlrc.HEAD
3dttest -prefix ttest.EOC -base1 0 -set2 *.%AUC.EOCsing.blur+tlrc.HEAD
gzip *BRIK