
BIDS Links and Tips (Brain Imaging Data Structure)

Created Oct 13, 2017. Last updated Dec 23, 2018

Learn about this two-pronged project, which describes (1) a standard file and directory naming structure for neuroimaging datasets, and (2) containerized apps that take advantage of it. With these tools, the goal of reproducible neuroimaging that takes advantage of cloud resources is finally within reach. Below, in roughly alphabetical order, are descriptions of the tools I have actually tried and what I learned from doing so.
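As a concrete illustration of the naming structure, here is a minimal BIDS dataset skeleton sketched from the shell. The dataset, subject, and task names are made up for illustration:

```shell
# Create a minimal, hypothetical BIDS directory skeleton
mkdir -p ds_example/sub-01/anat ds_example/sub-01/func

# Required top-level metadata file
printf '{"Name": "Example", "BIDSVersion": "1.1.1"}\n' > ds_example/dataset_description.json

# Anatomical image and its JSON sidecar (empty placeholders here)
touch ds_example/sub-01/anat/sub-01_T1w.nii.gz
touch ds_example/sub-01/anat/sub-01_T1w.json

# Functional run for a made-up task name
touch ds_example/sub-01/func/sub-01_task-rest_bold.nii.gz
touch ds_example/sub-01/func/sub-01_task-rest_bold.json

# Show the resulting tree
find ds_example -type f | sort
```

Every filename starts with the subject label, and images live in modality subdirectories (anat, func, etc.); the BIDS apps below rely on exactly this layout.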

The Main Links

The neuroimaging data structure
The neuroimaging docker containers

BIDS compatible datasets:
Download most examples from here:

Relevant Papers

Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., et al. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3, 160044–9.

Gorgolewski, K. J., Alfaro-Almagro, F., Auer, T., Bellec, P., Capotă, M., Chakravarty, M. M., et al. (2017). BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLoS Computational Biology, 13(3), e1005209–16.

van Mourik, T., Snoek, L., Knapen, T., & Norris, D. (2017). Porcupine: a visual pipeline tool for neuroimaging analysis, 1–10.

Miscellaneous Links to Educational and other Related Materials

-Data for the official tutorial: Download ds005.tar  here:
-A very useful introduction to Docker for BIDS users:
-Video of docker-bids workshop by Chris Gorgolewski (links to his other videos on right):
-BIDS templates:
-A presentation on the BIDS data format and HeuDiConv for dicom conversion:
-Porcupine software for auto-generating your pipeline (see paper above):

How To

BIDS validator

Click the Choose File button. Choose a directory with sub-XXX folders beneath it (you must have more than one subject folder).
Choose Upload (don't worry: it won't upload your data, just the directory structure and naming information).
In seconds, a text file describing the errors is made available to you. See "Download error log..." at the bottom of the interface. To learn more, go here:

dcm2niix (convert dcm to nifti)

Download Chris Rorden's excellent tool for converting DICOM to NIfTI. It correctly handles slice timing for multiband acquisitions and produces the BIDS-format JSON sidecar files:
Note that the naming strategy is not necessarily BIDS compliant. It might be worth giving BIDS-compliant names to your data on the scanner; that should facilitate everything.
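If you don't rename at the scanner, you can rename afterwards. A sketch of bringing a protocol-named dcm2niix output into BIDS form (the protocol-style filename here is invented for illustration):

```shell
# Simulate a dcm2niix output file named after the scanner protocol (made-up name)
mkdir -p sub-01/anat
touch sub-01/anat/t1_mprage_sag_p2_iso.nii.gz

# Rename to the BIDS-compliant pattern sub-<label>_T1w.nii.gz
mv sub-01/anat/t1_mprage_sag_p2_iso.nii.gz sub-01/anat/sub-01_T1w.nii.gz

ls sub-01/anat
```

Remember to rename the matching .json sidecar (and .bval/.bvec for diffusion data) to the same stem.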

If you want to perform slice time correction on multiband fmri datasets, you should also look at


freesurfer

There are three tricks to running the freesurfer docker container.

1) You must request a license.txt file from the freesurfer website:

2) You must bind mount the license file on the outside to license.txt on the inside of the container (this is just like binding the input small_ds and outputs directories), e.g.:

>docker run -ti --rm -v /Volumes/Main/Working/BIDS_TESTING/license.txt:/license.txt:ro -v /Volumes/Main/Working/BIDS_TESTING/small_ds:/bids_dataset:ro -v /Volumes/Main/Working/BIDS_TESTING/outputs:/outputs bids/freesurfer /bids_dataset /outputs participant --license_file /license.txt

3) On the Mac, Docker runs in a minimal virtual machine. By default, this machine is allowed to access 2 GB of RAM, but recon-all requires more (~4 GB or more). See instructions here for increasing the RAM on Docker for Mac:

Build a singularity container on a linux machine with Singularity installed (this command builds a singularity 2.4 container with a squashed filesystem, which just means it is a smaller image file (.simg) and is not backward compatible with singularity 2.3):

>singularity build fs.simg docker://bids/freesurfer

If you don't have a linux machine with root permission, then use Cyverse.  At the shell prompt type


It'll ask for your password and install singularity.  See the Cyverse page for related information.

Running on linux in /data/Work/fs_sing:  

>singularity run --bind /data:/data ./fs.simg ${PWD}/ds4 ${PWD}/outputs participant --participant_label 01 --license_file ${PWD}/license.txt

We tell singularity to bind /data and everything in it, which means it can subsequently find anything under /data. Then we call the .simg with the input dataset path, the output path, the analysis level (participant) and the license file path.


heudiconv

(I'm still learning this one, so the notes are kind of unstructured here: 2018/12/01)

Heudiconv uses dcm2niix to do the conversion (and hence handles at least dcm and our Siemens IMA files). Heudiconv also produces other project files expected by the BIDS standard. You run the tool in iterations:
1) The first iteration creates the project files and the heuristic file (which you can move and rename) for some data subset (e.g., the anatomicals).
2) The second iteration actually creates the Nifti directory as a sibling of the Dicom directory.

The complexity comes between these two steps: you edit the heuristic file to put files into correct directories and name them with ses- if relevant. There are some helpful files and examples to make this easier. I think that once you set up the heuristic file for all of your images, you can run it easily on subsequent subjects and sessions. Getting it set up correctly is the trick, as it requires some level of comfort with python. There is a nice tutorial (note that it assumes bash syntax; zsh failed to interpret parts of the docker command correctly, specifically the variables passed in {} in this command):

>docker run --rm -it -v /Volumes/Main/Working/DockerHeudiconv:/base nipy/heudiconv:latest -d /base/Dicom/sub-{subject}/ses-{session}/SCANS/*/DICOM/*.dcm -o /base/Nifti -f convertall -s 01 -ss 001 -c none --overwrite
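For a feel of what the editing step involves, here is a minimal, hypothetical heuristic file written out from the shell. The create_key/infotodict interface is heudiconv's; the template names and the protocol-matching rules are invented for illustration and must be adapted to your own series names:

```shell
# Write a minimal, hypothetical heudiconv heuristic file
cat > heuristic_example.py <<'EOF'
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    # BIDS output templates (names chosen for this example)
    t1w = create_key('sub-{subject}/anat/sub-{subject}_T1w')
    rest = create_key('sub-{subject}/func/sub-{subject}_task-rest_bold')
    info = {t1w: [], rest: []}
    for s in seqinfo:
        # Match scanner series to templates by protocol name (made-up rules)
        if 'mprage' in s.protocol_name.lower():
            info[t1w].append(s.series_id)
        if 'rest' in s.protocol_name.lower():
            info[rest].append(s.series_id)
    return info
EOF

grep 'def ' heuristic_example.py
```

On the second pass you would point -f at this file (instead of convertall) and use -c dcm2niix with -b to get BIDS output.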

Tutorial video:


High Performance Computing Singularity Example: BET (brain extraction tool)

Although the High Performance Computing cluster (HPC) does not allow you to build a singularity container, it does allow you to load the singularity module and run a singularity file interactively.  However, for big jobs, you should write a bash script that loads the singularity module and calls the singularity command.  Here we create a batch script for the example bids app (which does brain extraction using FSL).



#PBS -q standard
#PBS -l select=1:ncpus=1:mem=1gb:pcmem=1gb

### Specify a name for the job
#PBS -N BETest

### Specify the group name
#PBS -W group_list=dkp

### Used if job requires partial node only
#PBS -l place=pack:shared

### CPUtime required in hhh:mm:ss.
### Leading 0's can be omitted e.g 48:0:0 sets 48 hours
#PBS -l cput=00:02:00

### Walltime is created by cputime divided by total cores.
### This field can be overwritten by a longer time
#PBS -l walltime=00:02:00

### Joins standard error and standard out
#PBS -j oe


# You must load the singularity module on the HPC before running it
module load singularity
cd ~/temp
# example command to run the singularity container for each participant
singularity run bet.img ds4 outputs participant
# example of command to run group stats after the participant level brain extraction is done
singularity run bet.img ds4 outputs group

To run the above, submit your bash script to the qsub system like this (the script name here is just an example; substitute your own):

>qsub bet_batch.sh


High Performance Computing Singularity example: Freesurfer

And here's a batch script for the freesurfer singularity image (fs.simg).  We are running under home on the HPC, which is easy.  We have increased the number of cpus, the memory, and the expected cpu time and walltime.  Note, however, that experimentation suggests freesurfer uses only one cpu most of the time, so the 4 cpus were a waste.  In addition, the HPC cpus are not fast and don't offer any speed improvement:


#PBS -q standard
#PBS -l select=1:ncpus=4:mem=24gb:pcmem=6gb

### Specify a name for the job
#PBS -N FSTest

### Specify the group name
#PBS -W group_list=dkp

### Used if job requires partial node only
#PBS -l place=pack:shared

### CPUtime required in hhh:mm:ss.
### Leading 0's can be omitted e.g 48:0:0 sets 48 hours
#PBS -l cput=64:00:00

### Walltime is created by cputime divided by total cores.
### This field can be overwritten by a longer time
#PBS -l walltime=16:00:00

### Joins standard error and standard out
#PBS -j oe


export WORK=~/fs_sing
module load singularity
singularity run ${WORK}/fs.simg ${WORK}/ds4 ${WORK}/outputs participant --license_file ${WORK}/license.txt --n_cpus 4

For fun, you can run the singularity module interactively:

First, see which modules are loaded:

>module list

Then load the singularity module:

>module load singularity


MRIQC

MRIQC is a quality assessment BIDS app.  You can read more about what it does and see how to use it on our HPC in Chidi Ugonna's useful talk.


MRtrix3_connectome

12/23/2018: This app is undergoing a lot of changes.
Ensure your T1w image has the simplest possible name (see #1 below).
Ensure your phasediff json file includes both EchoTime1 and EchoTime2.  For our Siemens scanner, this is not what dcm2niix creates, so we may have to alter the json file manually.
You can generate output if you run in debug mode (see #2 below).
Use outputs (not output) (see #5 below).

Presumably, some or all of these problems will be addressed, as issues were submitted to github on 11/28/2018.

1) It requires the short name of the anatomical image: sub-01_T1w.nii is okay, but the run exits prematurely if the image is named sub-01_acq-mprage_T1w.nii (I think I got this right).  The point is that there are other acceptable names for this file from the BIDS validator's point of view, but MRtrix3_connectome can't handle them.

2) The output directory on the local machine is not populated unless you run in debug mode:

This command runs happily for 12 hours, filling up a tmp directory inside the container and never writing it out to the local disk on the outside of the container:

docker run -i -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 01 --parcellation desikan

If you run with debug, then the output gets written to the local disk outside the container.

docker run -i -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 01 --parcellation desikan --debug

docker run -i --rm -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /Volumes/Main/Working/DockerMRtrix/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 02 --parcellation desikan --debug

3) The phasediff image must have EchoTime1 and EchoTime2, but dcm2niix does not add these, so it has to be handled manually.
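For reference, this is roughly what the corrected sidecar needs to contain. The echo time values below are placeholders (in seconds); use the two echo times from your own acquisition:

```shell
# Write an example phasediff sidecar containing both echo times
# (the values 0.00492 and 0.00738 are placeholders, not real acquisition parameters)
printf '%s\n' \
  '{' \
  '  "EchoTime1": 0.00492,' \
  '  "EchoTime2": 0.00738' \
  '}' > sub-01_phasediff.json

cat sub-01_phasediff.json
```

EchoTime1 should be the shorter of the two echoes; both are specified in seconds per the BIDS convention.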

4) I don't know if this is serious or not, but the docker run complains about a call to eddy_openmp --help (see the WARNING in the log below):

dpat@Saci:/Volumes/Main/Working/DockerMRtrix% docker run -i --name mr3 -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 02 --parcellation desikan

Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
Command: bids-validator /bids_dataset
Commencing execution for subject sub-02
N4BiasFieldCorrection and ROBEX found; will use for bias field correction and brain extraction
Generated temporary directory: /
Importing DWI data into temporary directory
Command: mrconvert /bids_dataset/sub-02/dwi/sub-02_dwi.nii.gz -fslgrad /bids_dataset/sub-02/dwi/sub-02_dwi.bvec /bids_dataset/sub-02/dwi/sub-02_dwi.bval -json_import /bids_dataset/sub-02/dwi/sub-02_dwi.json /
Importing fmap data into temporary directory
Importing T1 image into temporary directory
Command: mrconvert /bids_dataset/sub-02/anat/sub-02_T1w.nii.gz /
Changing to temporary directory (/
Performing MP-PCA denoising of DWI data
Command: dwidenoise dwi1.mif dwi1_denoised.mif
Performing Gibbs ringing removal for DWI data
Command: mrdegibbs dwi1_denoised.mif dwi1_denoised_degibbs.mif -nshifts 50
Performing various geometric corrections of DWIs
Command: eddy_openmp --help
[WARNING] Command failed: eddy_openmp --help

Command: dwipreproc dwi1_denoised_degibbs.mif dwi_preprocessed.mif -rpe_header

Command: dwipreproc dwi1_denoised_degibbs.mif dwi_preprocessed.mif -rpe_header

5) The docker documentation is inconsistent (output or outputs?)

To run the script in participant level mode (for processing one subject only), use e.g.:

$ docker run -i --rm \
-v /Users/yourname/data:/bids_dataset \
-v /Users/yourname/outputs:/outputs \
bids/mrtrix3_connectome \
/bids_dataset /outputs participant --participant_label 01 --parcellation desikan
Following processing of all participants, the script can be run in group analysis mode using e.g.:

$ docker run -i --rm \
-v /Users/yourname/data:/bids_dataset \
-v /Users/yourname/output:/output \
bids/mrtrix3_connectome \
/bids_dataset /output group