If ParallelCluster was started in step 1.1., the dependency packages for Relion are already installed on the HeadNode, so no additional installation is normally needed. If a Relion upgrade adds new dependency packages, install them.
Normally the GCC compiler preinstalled with Ubuntu 20.04 on the HeadNode is used, so no installation is required.
Note: If you want to change the GCC compiler version, you can do the following.
$ sudo apt install build-essential
$ sudo apt -y install gcc-7 g++-7 gcc-8 g++-8 gcc-9 g++-9
If using version 7
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 7
$ sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 7
If using version 8
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 8
$ sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-8 8
If using version 9
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 9
$ sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 9
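After switching alternatives, it helps to confirm which major version is now active. A minimal sketch of such a check, parsing the banner line printed by `gcc --version` (the sample string below is the output shown later in this guide):

```shell
# Parse the major version out of a `gcc --version` banner line.
# version_line is a captured sample; in practice use: version_line=$(gcc --version | head -n1)
version_line='gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0'
major=$(printf '%s\n' "$version_line" | awk '{print $NF}' | cut -d. -f1)
echo "gcc major version: $major"
```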
Start ParallelCluster and connect to the HeadNode via SSH from the Cloud9 terminal.
Purge any loaded modules and load the Intel MPI pre-installed on the HeadNode.
$ module purge
$ module load intelmpi
Loading intelmpi version 2021.9.0
$ module list
Currently Loaded Modulefiles:
1) intelmpi
Check the versions of the required packages.
$ gcc --version
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
$ g++ --version
g++ (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
$ which cmake pkg-config make gcc g++
/usr/bin/cmake
/usr/bin/pkg-config
/usr/bin/make
/usr/bin/gcc
/usr/bin/g++
$ cmake --version
cmake version 3.16.3
$ mpirun --version
Intel(R) MPI Library for Linux* OS, Version 2021.9 Build 20230307 (id: d82b3071db)
Copyright 2003-2023, Intel Corporation.
Check the PATHs (in particular, make sure the Intel MPI paths are included).
$ echo $PATH
/opt/intel/mpi/2021.9.0/libfabric/bin:
/opt/intel/mpi/2021.9.0/bin:
/opt/amazon/openmpi/bin/:
/opt/amazon/efa/bin/:
<<snip>>
/opt/aws/bin:
/opt/parallelcluster/pyenv/versions/3.7.16/envs/cfn_bootstrap_virtualenv/bin:
/opt/parallelcluster/pyenv/versions/3.9.16/envs/awsbatch_virtualenv/bin:
/opt/slurm/bin:
/efs/em/gtc_sh_ver00o05o01:
/efs/em/UCSF-Chimera64-1.15rc/bin
$ echo $LD_LIBRARY_PATH
/opt/intel/mpi/2021.9.0/libfabric/lib:
/opt/intel/mpi/2021.9.0/lib/release:
/opt/intel/mpi/2021.9.0/lib
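The PATH check above can also be scripted. A small sketch, assuming the Intel MPI 2021.9.0 install prefix shown in the output above; `example_path` is a stand-in so the snippet is self-contained, and on the HeadNode you would pass the real `$PATH` instead:

```shell
# Return success if directory $1 is a component of the colon-separated path list $2.
on_path() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Stand-in for "$PATH"; on the HeadNode use: on_path /opt/intel/mpi/2021.9.0/bin "$PATH"
example_path="/opt/intel/mpi/2021.9.0/bin:/opt/amazon/openmpi/bin:/usr/bin"
if on_path /opt/intel/mpi/2021.9.0/bin "$example_path"; then
  echo "Intel MPI is on PATH"
else
  echo "Intel MPI is NOT on PATH" >&2
fi
```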
Build and install Relion with GCC.
For Relion 4
$ cd /efs/em/
$ git clone https://github.com/3dem/relion.git #Not required if already cloned when building with another compiler
$ mv relion relion-v401
$ cd relion-v401
$ git checkout master #or git checkout ver4.0
$ git branch
* master
$ mkdir build-gcc-gpu
$ cd build-gcc-gpu
$ cmake -DFORCE_OWN_FFTW=ON \
-DCUDA_ARCH=75 \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/efs/em/relion-v401/relion-4.0.1-gcc-gpu ..
$ make -j 24 && make install
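The -DCUDA_ARCH=75 value above corresponds to compute capability 7.5, i.e. the NVIDIA T4 GPUs on g4dn instances. A hedged sketch of the mapping, where compute_cap is an assumed example value (on a GPU node, newer drivers can report it with `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`):

```shell
# Derive the -DCUDA_ARCH value from a compute capability string.
# compute_cap=7.5 is an assumed example (T4); query the real value on a GPU node.
compute_cap=7.5
cuda_arch=$(printf '%s' "$compute_cap" | tr -d '.')
echo "-DCUDA_ARCH=$cuda_arch"
```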
For Relion 5 or later
If you have not yet installed pyenv and Anaconda, perform step 2. before installing Relion 5.0.
$ cd /efs/em/
$ git clone https://github.com/3dem/relion.git #Not required if already cloned when building with another compiler
$ mv relion relion-v500-beta #Not required if already done when building with another compiler
$ cd relion-v500-beta
$ git checkout ver5.0
$ git branch
master
* ver5.0
Create the conda environment for Relion (Not required if already created)
$ source /efs/em/pyenv/versions/anaconda3-2023.03/etc/profile.d/conda.sh
$ conda env create -f environment.yml
$ conda env list
# conda environments:
#
base /efs/em/pyenv/versions/anaconda3-2023.03
cryolo-1.9.6 /efs/em/pyenv/versions/anaconda3-2023.03/envs/cryolo-1.9.6
relion-5.0 /efs/em/pyenv/versions/anaconda3-2023.03/envs/relion-5.0
schemes-editing /efs/em/pyenv/versions/anaconda3-2023.03/envs/schemes-editing
topaz-0.2.5 /efs/em/pyenv/versions/anaconda3-2023.03/envs/topaz-0.2.5
Note: You should NOT activate this relion-5.0 conda environment when compiling and using RELION. RELION activates it automatically only when necessary. See https://relion.readthedocs.io/en/latest/Installation.html for more information.
Build and install Relion 5 with GCC.
Specify the path to the python binary in the conda environment created above for -DPYTHON_EXE_PATH.
Create a directory on EFS to hold the trained models downloaded when cmake is executed, and specify its path for -DTORCH_HOME_PATH.
(If no directory path is specified, or the specified path does not exist, the trained models are downloaded to /home/ubuntu/.cache/torch on the HeadNode and cmake fails with an error due to insufficient storage.)
$ mkdir -p relion_torch/v500-beta/torch #Any directory names are acceptable.
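To avoid the fallback to /home/ubuntu/.cache/torch described above, it can help to verify the -DTORCH_HOME_PATH directory exists before running cmake. A minimal sketch; torch_home here uses a temporary stand-in so the snippet is self-contained, rather than the real EFS path:

```shell
# Fail early if the intended TORCH_HOME directory is missing, instead of letting
# cmake fall back to /home/ubuntu/.cache/torch and run out of space.
torch_home=$(mktemp -d)/torch   # stand-in for /efs/em/relion_torch/v500-beta/torch
mkdir -p "$torch_home"
if [ -d "$torch_home" ]; then
  echo "TORCH_HOME ready: $torch_home"
else
  echo "missing TORCH_HOME: $torch_home" >&2
  exit 1
fi
```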
$ mkdir build-gcc-gpu #Any directory name is acceptable.
$ cd build-gcc-gpu
$ cmake -DFORCE_OWN_FFTW=ON \
-DCUDA_ARCH=75 \
-DCMAKE_BUILD_TYPE=Release \
-DPYTHON_EXE_PATH=/efs/em/pyenv/versions/anaconda3-2023.03/envs/relion-5.0/bin/python \
-DTORCH_HOME_PATH=/efs/em/relion_torch/v500-beta/torch \
-DCMAKE_INSTALL_PREFIX=/efs/em/relion-v500-beta-gpu/relion-5.0-beta-gcc-intelmpi-gpu ..
$ make -j 24 && make install
If they do not already exist, create the modulefiles directories.
$ mkdir /efs/em/modulefiles/
$ mkdir /efs/em/modulefiles/relion/
Create a directory for each Relion version and ParallelCluster version.
$ cd /efs/em/modulefiles/relion/
$ mkdir 4.0.1-pc3.7.0
$ cd 4.0.1-pc3.7.0/
Adjust the PATHs of Relion, CTFFIND and Gctf, and create the module file.
$ cat > intel_amd-gcc-intelmpi-gpu # cpu(manufacturer name)-compiler-MPI-device
#%Module -*- tcl -*-
set RELION /efs/em/relion-v401/relion-4.0.1-gcc-gpu
prepend-path PATH $RELION/bin
prepend-path LD_LIBRARY_PATH $RELION/lib
setenv RELION_CTFFIND_EXECUTABLE /efs/em/ctffind-4.1.14-linux64/bin/ctffind
setenv RELION_GCTF_EXECUTABLE /efs/em/Gctf_v1.06/bin/Gctf-v1.06_sm_20_cu7.5_x86_64
setenv RELION_PDFVIEWER_EXECUTABLE /usr/bin/evince
setenv RELION_ERROR_LOCAL_MPI 96
setenv RELION_QSUB_TEMPLATE /efs/em/aws_slurm_relion.sh
# The mpi runtime ('mpirun' by default)
setenv RELION_QUEUE_USE yes
setenv RELION_QSUB_COMMAND sbatch
setenv RELION_SCRIPT_DIRECTORY $RELION/bin