Mozart HPC

High-Performance Computing Cluster of the Bagchi Group


  • Number of compute nodes = 20 (compute00 to compute19) + 1 (cn23)

  • Number of processor cores = 300 (compute00 to compute19) + 24 (cn23)

  • Available queues: p12.q (15 nodes with 12 processors each), p24.q (5 nodes with 24 processors each)

  • Master node connectivity: Mellanox InfiniBand

  • Storage: 35 TB + 35 TB Fujitsu units

  • Located in Room 103 (SSCU 1st Floor)
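
The #$ directives in the sample script below indicate a Grid Engine scheduler. Assuming that, the queue and node layout listed above can be inspected directly from the login node:

qconf -sql #list all configured queues (should show p12.q and p24.q)
qhost #list every execution host with its core count, load and memory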

Sample submission script

#!/bin/bash
#$ -cwd -V
#$ -q p12.q #queue name
#$ -S /bin/bash
#$ -N jobname #job name
#$ -pe make 12 #number of processors requested
#$ -l h_rt=200:00:00 #job wall time, hh:mm:ss
#$ -l h="computeXX|computeYY" #XX,YY = compute node names
#$ -R y

MPIRUN=/opt/intel/compilers_and_libraries_2017.2.174/linux/mpi/intel64/bin/mpirun #path of mpirun
MDRUN=/opt/usoftware/gromacs-plumed/gromacs-5.0.7/bin/mdrun_mpi_d #path of GROMACS executable
LMP=/opt/usoftware/lammps-16Mar18/src/lmp_intel #path of LAMMPS executable
NSLOTS=12 #must match the processor count requested with -pe above

$MPIRUN -np $NSLOTS $MDRUN -v -deffnm md #for GROMACS jobs
$MPIRUN -np $NSLOTS $LMP < in.lammps #for LAMMPS jobs
./executable #for serial (e.g. Fortran) jobs
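
Assuming the script above is saved under a placeholder name such as submit.sh, a typical submit-and-monitor sequence on Grid Engine looks like this (12345 stands for whatever job-id qsub reports):

qsub submit.sh #submit the job; qsub prints the assigned job-id
qstat #the job should appear as qw (queued) and then r (running)
tail -f jobname.o12345 #follow the standard output file, named <job name>.o<job-id>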

Important commands

  • To check your job status: qstat

  • To check queue status: qstat -f

  • To check the status of all jobs: showq

  • To kill a specific job: qdel job-id

  • Copy a file/folder to a remote machine: scp -r file/folder username@ip:directory_path

  • Copy a file/folder remotely with resume/partial-transfer support: rsync -avhP file/folder username@ip:directory_path
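
A few worked examples of the commands above; the job-id 12345, the username user, and the IP address 10.0.0.1 are placeholders:

qstat -u $USER #show only your own jobs
qdel 12345 #kill the job with job-id 12345
scp -r results/ user@10.0.0.1:/home/user/data/ #copy the folder results/ to a remote machine
rsync -avhP results/ user@10.0.0.1:/home/user/data/ #same transfer, but resumable and with a progress display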

Available Packages

  • GROMACS-5.0.7 (gmx_mpi_d)

  • Plumed-2.2.4 (plumed)

  • LAMMPS (lmp_intel)

  • CP2K-6.1 (cp2k.popt, cp2k.sopt, cp2k.psmp, cp2k.ssmp)

  • Gaussian 09 (g09)
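
GROMACS and LAMMPS launch lines are given in the sample script above. For CP2K and Gaussian 09, the lines below are only a sketch of the usual invocation from the same kind of script; the input file names are placeholders and the executables are assumed to be on the PATH (otherwise give their full paths, as done for GROMACS and LAMMPS above):

$MPIRUN -np $NSLOTS cp2k.popt -i input.inp -o output.out #MPI build of CP2K
g09 < input.com > output.log #Gaussian 09 runs without mpirun; request threads with %NProcShared in the input file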

System Administrators

  • Ms. Sangita Mondal