Mostly culled from the SPM list/forum (SPM archive here) and a lot from John Ashburner (thank you, John!! and the SPM team, and thank you, active users of the SPM community!)
file_nii = 'con_0001.nii'; % or a beta image for beta values
coords_mm = [0, 0, 0]; % MNI coordinates (mm) of the voxel you want
Vhdr = spm_vol(file_nii); % get header info for volume
%% V = 3-D array of the values stored in your img
%% xyz = a 3 x N matrix of the MNI coords of all the N vx in image
[V, xyz] = spm_read_vols(Vhdr);
%% spm_get_data wants the header handle plus VOXEL coords, so convert mm -> vx via the header affine
XYZvox = Vhdr.mat \ [coords_mm(:); 1];
vals = spm_get_data(Vhdr, XYZvox(1:3)); % your values, e.g. betas
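If you already have the full volume in memory from spm_read_vols, an equivalent lookup (no new functions beyond the variables above; the implicit expansion needs MATLAB R2016b+) is to find the column of xyz nearest your mm coordinates and index the data array directly:
[~, idx] = min(sum((xyz - coords_mm(:)).^2, 1)); % nearest voxel to coords_mm
vals_alt = V(idx); % works because xyz columns follow the linear order of V(:)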
SPM creates the following files after classical 1st-level model estimation (subject models; for more details, type help spm_spm at the MATLAB command prompt):
beta_NNNN.nii regression coefficients (the numbering corresponds to the order in which the regressors were entered into the model)
con_NNNN.nii weighted sum of parameter images if T-contrasts were defined
ess_NNNN.nii weighted sum of parameter images if F-contrasts were defined
spmT_NNNN.nii / spmF_NNNN.nii statistical images (t-/F-statistic at each voxel)
For the t-statistic: under the null hypothesis that the contrast is 0 it follows a Student's t-distribution; the image is formed by dividing the contrast image by the square root of the estimated error variance (ResMS.nii), suitably scaled (see the sketch after this list). Voxels outside the analysis volume are coded as NaN.
ResMS.nii variance of the error
RPV.nii estimated resels per voxel
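As a sanity check, here is a hedged sketch of how the spmT image relates to the con and ResMS images (field names as in SPM12; run it from an estimated model's directory with at least one T-contrast defined):
load('SPM.mat');
c = SPM.xCon(1).c; % contrast vector for con_0001
con = spm_read_vols(spm_vol('con_0001.nii'));
ResMS = spm_read_vols(spm_vol('ResMS.nii'));
VcB = c' * SPM.xX.Bcov * c; % scaling from the design and contrast
T = con ./ sqrt(ResMS * VcB); % should reproduce spmT_0001.nii (NaNs outside the mask)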
In SPM's 2-step ("mean summary statistic") approach to random-effects analysis, the first level (per-subject GLMs) handles within-subject variance. The contrast estimates implicitly carry information about that within-subject variance (their values would vary each time you re-estimated them), and this is carried through to the second-level model, which handles between-subject variance.
Take the contrast images con*.nii as inputs to 2nd-level models to test effects, not the statistic images spmT*.nii. If you perform tests on the statistic images, you are not testing the effect itself but the reliability with which an effect is detected across subjects.
Although the intra-subject variance is not explicitly taken to the 2nd-level models, the contrast estimates effectively embody this variability, such that the two-level mean-summary-statistic method used in SPM is mathematically equivalent to fitting an RFX hierarchical linear model (says Will Penny).
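For instance, a minimal matlabbatch sketch of a second-level one-sample t-test on con images (the paths and subject list are hypothetical; field names as in SPM12):
cons = {
    'sub01/con_0001.nii,1'
    'sub02/con_0001.nii,1'
    'sub03/con_0001.nii,1'
    };
matlabbatch{1}.spm.stats.factorial_design.dir = {'group_model'};
matlabbatch{1}.spm.stats.factorial_design.des.t1.scans = cons;
matlabbatch{2}.spm.stats.fmri_est.spmmat = {fullfile('group_model','SPM.mat')};
matlabbatch{3}.spm.stats.con.spmmat = {fullfile('group_model','SPM.mat')};
matlabbatch{3}.spm.stats.con.consess{1}.tcon.name = 'group mean';
matlabbatch{3}.spm.stats.con.consess{1}.tcon.weights = 1;
spm_jobman('run', matlabbatch);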
The t-statistic image is the estimated signal image divided by the estimated standard deviation, so any noise in the variance image propagates to the t-statistic image. When the degrees of freedom are low, pooled variance estimates (as used in SPM) can be more reliable than voxel-wise variance estimates, at least for PET data (Hunton et al., 1996; Strother et al., 1997).
SnPM implements a non-parametric approach to statistical inference that pools the variance estimates locally, effectively smoothing the variance image; it is an alternative for when the pooled-variance assumption is invalid.
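To get an intuition for "smoothing the variance image", here is an illustrative one-liner using spm_smooth (the 8 mm kernel is a guess, and this is not SnPM's exact weighting scheme):
spm_smooth('ResMS.nii', 'sResMS.nii', [8 8 8]); % arguments: input, output, FWHM in mm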
With the spmT image, you can interrogate the T-statistic associated with a given voxel coordinate with the following code:
matrix_dim = [64 64 36];
matrix_vol = matrix_dim(1) * matrix_dim(2) * matrix_dim(3);
spmT_fn = 'spmT_0002.img';
fid = fopen(spmT_fn,'r');
T = fread(fid,[matrix_vol 1],'float');
fclose(fid);
T = reshape(T, matrix_dim);
T(10,20,30) % get T-stat at the voxel coordinates (10,20,30); indices must stay within matrix_dim
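The equivalent with SPM's own reader is safer, since it respects the header's datatype, offset, and scale factor rather than assuming raw float32:
Vt = spm_vol(spmT_fn);
T = spm_read_vols(Vt);
T(10,20,30) % same voxel as above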
I have no idea... but you can make the SPM GUI display the right-click context menu it shows during "Check Reg" (which lets you display intensities under the cursor, filenames, and other handy visualization tools) by typing this in the MATLAB console. The change takes effect immediately (no need to relaunch "Display" -- just right-click on your image in the Display window):
spm_orthviews('addcontext');
load('SPM.mat'); % loads the struct 'SPM' from the model directory
try
    S = SPM.xVol.S; %% search volume (in vx)
catch
    fprintf('SPM.xVol.S not found -- has the model been estimated?\n');
end
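While you have SPM.mat loaded, a few other handy fields (names as in SPM12; inspect the struct for more):
X = SPM.xX.X; % design matrix (scans x regressors)
fwhm = SPM.xVol.FWHM; % estimated smoothness (FWHM, in voxels)
R = SPM.xVol.R; % resel counts used for RFT correction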
You are at the mercy of the SPM messages printed to the MATLAB terminal to figure out where to start debugging. Otherwise, you can review your batch structure, e.g. if it is called matlabbatch:
spm_jobman('interactive', matlabbatch);
Launch the GUI in the MATLAB window with the SPM toolbox on the path (capture the returned object so you can update it later):
so = slover('basic_ui');
%% no need to relaunch slover -- to show different slices and update in the already open window:
so.slices = [-2:10:52];
so = paint(so);
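Similarly, to point the display at a different volume in the same window (a hedged sketch; the img(i).vol field is per the slover object layout):
so.img(1).vol = spm_vol('spmT_0002.nii'); % swap the first layer for a new volume
so = paint(so);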
This is not so odd, because first-level models reflect the individual subject's variance at that voxel, whereas the variance at a voxel in a second-level analysis is the variance of the betas across all participants.
Having 2nd-level models where you model each thing you're interested in separately is more flexible (you can test many things without having to create separate 1st-level models), but doing so uses more degrees of freedom. You have many more degrees of freedom to spend in 1st-level analyses, where they are based on the number of scans (which is usually large).
Blah blah. But note that you can also have parametric modulators that are categorical data.