
BIDS Links and Tips (Brain Imaging Data Structure)

Created Oct 13, 2017. Last updated June 3, 2019
(This page is mostly deprecated)
How To



dcm2niix (convert DICOM to NIfTI)

Download Chris Rorden's excellent tool for converting DICOM to NIfTI.  It correctly handles slice timing for multiband acquisitions and produces the BIDS-format JSON sidecar files: https://github.com/rordenlab/dcm2niix/releases
Note that the default naming strategy is not necessarily BIDS compliant.  It may be worth giving BIDS-compliant names to your series on the scanner; that makes the rest of the conversion and curation much easier.
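For example, a typical conversion call looks something like this (the filename template and paths are placeholders; adjust them for your study):

>dcm2niix -b y -z y -f sub-%i_%p -o /path/to/output /path/to/dicom_folder

Here -b y writes the BIDS JSON sidecar, -z y gzips the NIfTI output, and -f sets the output filename template; you still need to arrange the converted files into a BIDS directory tree (e.g. sub-01/anat/sub-01_T1w.nii.gz).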

If you want to perform slice time correction on multiband fMRI datasets, you should also look at http://www.mccauslandcenter.sc.edu/crnl/tools/stc


FreeSurfer

There are three tricks to running the FreeSurfer Docker container.

1) You must request a license.txt file from the FreeSurfer website: https://surfer.nmr.mgh.harvard.edu/fswiki/License

2) You must bind-mount the license file on the host to /license.txt inside the container (just like binding the input small_ds and outputs directories), e.g.:


>docker run -ti --rm -v /Volumes/Main/Working/BIDS_TESTING/license.txt:/license.txt:ro -v /Volumes/Main/Working/BIDS_TESTING/small_ds:/bids_dataset:ro -v /Volumes/Main/Working/BIDS_TESTING/outputs:/outputs bids/freesurfer /bids_dataset /outputs participant --license_file /license.txt


3) On the Mac, Docker runs in a minimal virtual machine.  By default, this machine is allowed to use only 2 GB of RAM.  recon-all requires more (roughly 4 GB or more): https://surfer.nmr.mgh.harvard.edu/fswiki/SystemRequirements. See the instructions here for increasing the RAM available to Docker for Mac: https://docs.docker.com/docker-for-mac/#advanced
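To check how much RAM the Docker VM currently has, something like this works as a quick sanity check (the exact output format varies by Docker version):

>docker info | grep -i memory

If it reports roughly 2 GiB, raise the allocation in Docker's preferences before running recon-all.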


Build a Singularity container on a Linux machine with Singularity installed.  This command builds a Singularity 2.4 container with a squashed filesystem, which just means it is a smaller image (.simg) and is not backward compatible with Singularity 2.3:

>singularity build fs.simg docker://bids/freesurfer

If you don't have a Linux machine with root permission, use CyVerse.  At the shell prompt, type

>ezs 

It will ask for your password and install Singularity.  See the CyVerse page for related information.


Running on Linux in /data/Work/fs_sing:

>singularity run --bind /data:/data ./fs.simg ${PWD}/ds4 ${PWD}/outputs participant --participant_label 01 --license_file ${PWD}/license.txt


Tell Singularity to bind /data and everything in it; the container should then be able to find anything under /data.  Then we call the Singularity image with the input dataset path, the output path, the analysis level (participant, with a participant label), and the license file path.
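If your data are not under /data, bind the relevant directory explicitly instead (a minimal sketch with placeholder paths):

>singularity run --bind /scratch/myproject:/scratch/myproject ./fs.simg /scratch/myproject/ds4 /scratch/myproject/outputs participant --participant_label 01 --license_file /scratch/myproject/license.txt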



High Performance Computing Singularity Example: BET (brain extraction tool)

Although the High Performance Computing cluster (HPC) does not allow you to build a Singularity container, it does allow you to load the singularity module and run a Singularity image interactively.  For big jobs, however, you should write a bash script that loads the singularity module and calls the singularity command.  Here we create a batch script for the example BIDS app (which does brain extraction using FSL).

>cat runbet.sh

#!/bin/bash

#PBS -q standard
#PBS -l select=1:ncpus=1:mem=1gb:pcmem=1gb

### Specify a name for the job
#PBS -N BETest

### Specify the group name
#PBS -W group_list=dkp

### Used if job requires partial node only
#PBS -l place=pack:shared

### CPU time required in hhh:mm:ss.
### Leading 0's can be omitted, e.g. 48:0:0 sets 48 hours
#PBS -l cput=00:02:00

### Walltime is cputime divided by the total number of cores.
### This field can be overwritten by a longer time
#PBS -l walltime=00:02:00

### Joins standard error and standard out
#PBS -j oe

#PBS -M hickst@email.arizona.edu

# You must load the singularity module on the HPC before running it
module load singularity
cd ~/temp
# example command to run the singularity container for each participant
singularity run bet.img ds4 outputs participant
# example of command to run group stats after the participant level brain extraction is done
singularity run bet.img ds4 outputs group


To run the above, submit your bash script to the qsub system like this:

>qsub runbet.sh
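For a larger study you would typically submit one job per participant.  A minimal, untested sketch (it assumes you modify runbet.sh to read a SUBJECT variable and pass it to --participant_label; the labels are placeholders):

>for s in 01 02 03; do qsub -v SUBJECT=$s runbet.sh; done

Inside the script, the singularity line would then end with: participant --participant_label ${SUBJECT}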



High Performance Computing Singularity Example: FreeSurfer

And here is a runFS.sh for the FreeSurfer Singularity image (fs.simg).  We are running under home on the HPC, which is easy.  We have increased the number of CPUs, the memory, and the expected CPU time and walltime.  Note, however, that experimentation suggests FreeSurfer uses only one CPU most of the time, so requesting 4 CPUs was a waste.  In addition, the HPC CPUs are not fast and don't offer any speed improvement:

#!/bin/bash

#PBS -q standard
#PBS -l select=1:ncpus=4:mem=24gb:pcmem=6gb

### Specify a name for the job
#PBS -N FSTest

### Specify the group name
#PBS -W group_list=dkp

### Used if job requires partial node only
#PBS -l place=pack:shared

### CPU time required in hhh:mm:ss.
### Leading 0's can be omitted, e.g. 48:0:0 sets 48 hours
#PBS -l cput=64:00:00

### Walltime is cputime divided by the total number of cores.
### This field can be overwritten by a longer time
#PBS -l walltime=16:00:00

### Joins standard error and standard out
#PBS -j oe

#PBS -M dkp@email.arizona.edu

export WORK=~/fs_sing

module load singularity

singularity run ${WORK}/fs.simg ${WORK}/ds4 ${WORK}/outputs participant --license_file ${WORK}/license.txt --n_cpus 4
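As with the BET example, submit the script with qsub:

>qsub runFS.sh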


For fun, you can load the singularity module and run Singularity interactively:

First, see which modules are loaded:

>module list

Then load the singularity module:

>module load singularity
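Once the module is loaded, you can work with an image directly, for example by opening a shell inside it (the image path here is from the FreeSurfer example above):

>singularity shell ~/fs_sing/fs.simg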




MRIQC

MRIQC is a quality assessment BIDS app.  You can read more about what it does and see how to use it on our HPC in Chidi Ugonna's useful talk.
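Locally, MRIQC follows the same BIDS-app calling convention as the other containers on this page.  A minimal sketch (the Docker image name and the paths are assumptions; check the MRIQC documentation for the current image):

>docker run -it --rm -v /Volumes/Main/Working/BIDS_TESTING/small_ds:/data:ro -v /Volumes/Main/Working/BIDS_TESTING/mriqc_out:/out poldracklab/mriqc /data /out participant --participant_label 01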


MRtrix3_connectome


docker run -i -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 01 --parcellation desikan

If you run with --debug, the output gets written to the local disk outside the container.

docker run -i -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /tmp/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 01 --parcellation desikan --debug

docker run -i --rm -v /Volumes/Main/Working/DockerMRtrix:/bids_dataset -v /Volumes/Main/Working/DockerMRtrix/derivatives:/outputs bids/mrtrix3_connectome /bids_dataset /outputs participant --participant_label 02 --parcellation desikan --debug



