
BIDS Links and Tips (Brain Imaging Data Structure)

Created Oct 13, 2017. Last updated April 25, 2018

Learn about this two-pronged project, which describes (1) a standard file and directory naming structure for neuroimaging datasets, and (2) containerized apps that take advantage of it. With these pieces in place, the goal of reproducible neuroimaging that takes advantage of cloud resources is finally within reach.

The Main Links

The neuroimaging data structure http://bids.neuroimaging.io/
The neuroimaging docker containers http://bids-apps.neuroimaging.io/

Relevant Papers

Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., et al. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3, 160044–9. http://doi.org/10.1038/sdata.2016.44 https://www.nature.com/articles/sdata201644

Gorgolewski, K. J., Alfaro-Almagro, F., Auer, T., Bellec, P., Capotă, M., Chakravarty, M. M., et al. (2017). BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLoS Computational Biology, 13(3), e1005209–16. http://doi.org/10.1371/journal.pcbi.1005209 http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005209

van Mourik, T., Snoek, L., Knapen, T., & Norris, D. (2017). Porcupine: a visual pipeline tool for neuroimaging analysis. bioRxiv. http://doi.org/10.1101/187344 https://www.biorxiv.org/content/early/2017/10/11/187344

Miscellaneous Links to Educational and other Related Materials

- Data for the official tutorial: download ds005.tar here: https://drive.google.com/drive/folders/0B2JWN60ZLkgkMGlUY3B4MXZIZW8
- A very useful introduction to Docker for BIDS users: https://neurohackweek.github.io/docker-for-scientists/
- Docker-BIDS workshop materials by Chris Gorgolewski (links to his other materials on the right): https://www.slideshare.net/chrisfilo1/docker-for-scientists
- BIDS templates: https://github.com/BIDS-Apps/dockerfile-templates
- A presentation on the BIDS data format and HeuDiConv for DICOM conversion: http://nipy.org/workshops/2017-03-boston/lectures/bids-heudiconv/#1
- Porcupine software for auto-generating your pipeline (see paper above): https://timvanmourik.github.io/Porcupine/

How To

BIDS validator

1) In Google Chrome, go here: http://incf.github.io/bids-validator/
2) Click the Choose File button and select a directory with sub-XXX folders beneath it (you must have more than one subject folder).
3) Choose Upload. Don't worry: it won't upload your data, just the directory structure and naming information.
4) In seconds, a text file describing the errors is made available to you; see "Download error log..." at the bottom of the interface. To learn more, go here: https://github.com/INCF/bids-validator
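The validator expects a BIDS-style layout on disk. As a rough sketch (the dataset, subject, and task names here are hypothetical, and a real dataset would contain actual NIfTI and JSON content), a minimal tree the validator can parse looks like this:

```shell
#!/bin/sh
# Sketch of a minimal BIDS-style directory tree with two subjects.
# dataset_description.json plus sub-* folders are the essential skeleton;
# the imaging files below are empty stand-ins for real data.
mkdir -p my_ds/sub-01/anat my_ds/sub-01/func my_ds/sub-02/anat
printf '{"Name": "my_ds", "BIDSVersion": "1.0.2"}\n' > my_ds/dataset_description.json
touch my_ds/sub-01/anat/sub-01_T1w.nii.gz
touch my_ds/sub-01/func/sub-01_task-rest_bold.nii.gz
touch my_ds/sub-02/anat/sub-02_T1w.nii.gz
```

Pointing the validator at my_ds would then check the naming against the specification (empty files like these would of course fail content checks).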



dcm2niix (convert dcm to nifti)

Download Chris Rorden's excellent tool for converting DICOM to NIfTI. It correctly handles slice timing for multiband sequences and produces the BIDS-format JSON sidecar files: https://github.com/rordenlab/dcm2niix/releases
Note that the naming strategy is not necessarily BIDS compliant. It may be worth giving BIDS-compliant names to your series on the scanner; that should facilitate everything downstream.
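If you can't fix the names on the scanner, you can rename the converter's output afterward. A hedged sketch (the series and subject names are hypothetical, and dcm2niix's actual output names depend on the format string you give it; here we simulate the converter output with empty files):

```shell
#!/bin/sh
# Hypothetical example: rename converter output to BIDS-compliant names.
# "out/t1_mprage_sag.*" stands in for files a DICOM converter produced.
mkdir -p out sub-01/anat
touch out/t1_mprage_sag.nii.gz out/t1_mprage_sag.json  # simulated converter output
# Move the image AND its JSON sidecar together, keeping the names in sync:
mv out/t1_mprage_sag.nii.gz sub-01/anat/sub-01_T1w.nii.gz
mv out/t1_mprage_sag.json   sub-01/anat/sub-01_T1w.json
```

The key point is that the NIfTI file and its sidecar must share the same BIDS name, or the validator will flag the dataset.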

If you want to perform slice time correction on multiband fMRI datasets, you should also look at http://www.mccauslandcenter.sc.edu/crnl/tools/stc


Freesurfer

There are three tricks to running the freesurfer docker container.

1) You must request a license.txt file from the freesurfer website: https://surfer.nmr.mgh.harvard.edu/fswiki/License

2) You must bind mount the license file on the outside to license.txt on the inside of the container (this is just like binding the input small_ds and outputs directories), e.g.:


>docker run -ti --rm -v /Volumes/Main/Working/BIDS_TESTING/license.txt:/license.txt:ro -v /Volumes/Main/Working/BIDS_TESTING/small_ds:/bids_dataset:ro -v /Volumes/Main/Working/BIDS_TESTING/outputs:/outputs bids/freesurfer /bids_dataset /outputs participant --license_file /license.txt


3) On the Mac (and probably Windows), Docker runs in a minimal virtual machine. By default, this machine is allowed to access only 2 GB of RAM, but recon-all requires more (~4 GB or more): https://surfer.nmr.mgh.harvard.edu/fswiki/SystemRequirements. See instructions here for increasing the RAM on Docker for Mac: https://docs.docker.com/docker-for-mac/#advanced


Build a Singularity container on a Linux machine with Singularity installed. The command below builds a Singularity 2.4 container with a squashed filesystem, which just means it is a smaller image (.simg) and that it is not backward compatible with Singularity 2.3:

>singularity build fs.simg docker://bids/freesurfer

If you don't have a Linux machine with root permission, then use Cyverse. At the shell prompt, type

>ezs 

It'll ask for your password and install Singularity. See the Cyverse page for related information.


Running on Linux in /data/Work/fs_sing:

>singularity run --bind /data:/data ./fs.simg ${PWD}/ds4 ${PWD}/outputs participant --participant_label 01 --license_file ${PWD}/license.txt


We tell Singularity to bind /data and everything in it, which means the container can subsequently find anything under /data. Then we call the Singularity image with the input dataset path, the output path, the participant level, and the license file path.


The HPC does allow you to load the singularity module and run interactively. However, for big jobs, you should write a bash script that loads the singularity module and calls the singularity command. Here is an example using the example BIDS app (for FreeSurfer, you'd want to request more resources, e.g., memory):

>cat runbet.sh

#!/bin/bash

#PBS -q standard
#PBS -l select=1:ncpus=1:mem=1gb:pcmem=1gb

### Specify a name for the job
#PBS -N BETest

### Specify the group name
#PBS -W group_list=dkp

### Used if job requires partial node only
#PBS -l place=pack:shared

### CPUtime required in hhh:mm:ss.
### Leading 0's can be omitted, e.g. 48:0:0 sets 48 hours
#PBS -l cput=00:02:00

### Walltime is created by cputime divided by total cores.
### This field can be overwritten by a longer time
#PBS -l walltime=00:02:00

### Joins standard error and standard out
#PBS -j oe

#PBS -M hickst@email.arizona.edu

module load singularity
cd ~/temp
singularity run bet.img ds4 outputs participant
singularity run bet.img ds4 outputs group


To run the above, submit your bash script to the qsub system:

>qsub runbet.sh

And here's a runFS.sh for the FreeSurfer Singularity image (fs.simg). We are running under home on the HPC, which is easy. We have increased the number of CPUs, the memory, and the expected CPU time and walltime. Note, however, that experimentation suggests FreeSurfer uses only one CPU most of the time, so 4 CPUs was a waste. In addition, the HPC CPUs are not fast and don't offer any speed improvement:

#!/bin/bash

#PBS -q standard
#PBS -l select=1:ncpus=4:mem=24gb:pcmem=6gb

### Specify a name for the job
#PBS -N FSTest

### Specify the group name
#PBS -W group_list=dkp

### Used if job requires partial node only
#PBS -l place=pack:shared

### CPUtime required in hhh:mm:ss.
### Leading 0's can be omitted, e.g. 48:0:0 sets 48 hours
#PBS -l cput=64:00:00

### Walltime is created by cputime divided by total cores.
### This field can be overwritten by a longer time
#PBS -l walltime=16:00:00

### Joins standard error and standard out
#PBS -j oe

#PBS -M dkp@email.arizona.edu

export WORK=~/fs_sing

module load singularity

singularity run ${WORK}/fs.simg ${WORK}/ds4 ${WORK}/outputs participant --license_file ${WORK}/license.txt --n_cpus 4


For fun, you can load the singularity module and run interactively:

First, see which modules are loaded:

>module list

Then load the singularity module:

>module load singularity
