If you are here, you probably already know about fmriprep and BIDS, so there is no need to get into that. Let's get started.
The easiest way to use fmriprep is through a docker container (learn more about them here). If docker is not yet installed on your system, you will need to install it (click here to learn how). Bear in mind that using docker requires admin rights on your system; if you cannot get those, you can use singularity instead.
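If you do go the singularity route, the image can be built directly from the docker one. A minimal sketch (the `nipreps/fmriprep:latest` tag is an assumption; in practice, pin the specific version you want):

```shell
# Build a Singularity image file from the fMRIPrep Docker image.
# Assumed tag "latest"; replace with a pinned version for reproducibility.
singularity build fmriprep.simg docker://nipreps/fmriprep:latest
```

You can then run the resulting `fmriprep.simg` with `singularity run` without needing admin rights.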
Below and also here you will find an example shell script that I use to make the fmriprep call using docker.
##########################  fmriprep  #####################
# This script runs a docker image of fmriPrep on files in BIDS format.
#
# Javier Ortiz-Tudela (ortiz-tudela@psych.uni-frankfurt.de)
###########################################################

Just place this snippet of code on your machine, adapt the paths to match yours, and select which options you want to turn on or off for fmriprep (check the full list here). Most things can be left at their defaults, but there are a few options that you will probably want to fiddle with:
Options for handling performance: fmriprep runs some heavy computations on your images, so you might want to restrict how much of your machine's power it can use; find here the recommendations from their site. I would not recommend less than 10-12 GB of RAM per subject. Use the number-of-threads argument to restrict the number of subprocesses run in parallel (too many things in parallel will fill up your RAM and crash the entire thing!).
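As a rough sketch of what such a call can look like, with the resource limits discussed above (all paths and the image tag are placeholders, and the exact flag names, `--nthreads` and `--mem-mb` here, should be checked against your fmriprep version's `--help`):

```shell
#!/bin/bash
# Hypothetical paths: adapt these to your machine.
bids_dir=/path/to/my_study/bids        # input dataset in BIDS format
out_dir=/path/to/my_study/derivatives  # where fmriprep will write its output
license=/path/to/freesurfer/license.txt

# Run fMRIPrep through docker, capping parallelism and memory so it
# does not fill up the RAM (assumed flags: --nthreads, --mem-mb).
docker run -ti --rm \
    -v "${bids_dir}":/data:ro \
    -v "${out_dir}":/out \
    -v "${license}":/opt/freesurfer/license.txt \
    nipreps/fmriprep:latest \
    /data /out participant \
    --participant-label 01 \
    --nthreads 8 \
    --mem-mb 12000
```

The `-v` flags mount your local folders inside the container, which is why the positional arguments (`/data /out`) use the container-side paths rather than the ones on your machine.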
--output-spaces: your preprocessed data will be resampled into whichever space(s) you put here. You can specify both the space and the resolution (the latter is only available for standard spaces). See here for a more detailed guide on spaces. NOTE: if you want to use AROMA within fmriprep to clean your data, you can only use MNI152NLin6Asym; if you want to preprocess your data with fmriprep and then run AROMA manually, that is also possible (see here how I did it).
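For instance, the relevant part of the call could look like the following (the specific spaces and the 2 mm resolution are just an illustration, not a recommendation):

```shell
# Resample outputs to MNI space at 2 mm AND to the subject's native T1w space:
--output-spaces MNI152NLin2009cAsym:res-2 T1w

# If you want fmriprep's built-in AROMA, the target space must instead be:
--output-spaces MNI152NLin6Asym
```

Note that you can list several spaces after a single --output-spaces flag, and fmriprep will produce one set of outputs per space.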
Getting the whole thing running the first time might sound daunting, but after you have done it for one dataset (and computer), it will be super easy to run more participants and to transfer that knowledge to new projects.