PhyML

MPI PhyML

Here is an example of running the MPI version of PhyML under SLURM on Oscar:

# log on to Oscar:
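For example (ssh.ccv.brown.edu is Oscar's usual login host; replace username with your own account):

# from Windows, use an SSH client such as PuTTY to connect to ssh.ccv.brown.edu

# from Mac/Linux, open a terminal and run:
ssh username@ssh.ccv.brown.edu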

# copy example sequence file
cp /gpfs/runtime/opt/phyml/3.0/example/seq .

# take a look at the sequences
head seq

 60 500
T25       ACTATTGAAAGAAGGGGGTTCCTAGATATCTGCGAGTATAATCGTGCTTGGTCTCCTATCGATGCGCATCGGACCGAGAGGCTC
TCCAGCCATGTGGACGGAGTAGCGCAGGGATCAAGGGAACACGCGGTGACCATTAGGATCTTGAACGCATCCGAGAGGCGTGTGAAGTGCGAGT
TCGTCAAAGAGTTTTTCTTTTCCCATAGTGCAGATCTAGCATTCCGACTATGCGTAACTCCTCGGAGAAGCGTACAACATTTTTGTTTCGAGGC
GACTATATCTGCGAGATTTCAACCAAGACAGTAAGCAACTTACCGAACTAGAAAAGGGTATTCTGTGGCGCCCCAACACCGCGAATAATCGCCG
CTTGTTCCGCATCACTACGATCATCTTGAGCCCGTGTCCTCTGTAGGTGTATCCCCACGACCCGCAGAACCGAGAGGTCAGATCACGGAGTAGAGTGACAGGACCACACGCCCACCCTTCTGACCCCAGGCTTA
T16       ATTAATCAAAGTAGGCGGGGCGGCCGTAGATGCTAAGAAAATCGAGTTCGGTCACCTCCCATTGGGCAGCAGATCGCTAGGCTC
TTTAGCCAGGTGGACGTAGAAGCGAAAGGATGAGGGGACGGTGGTGTTACGGATAGGTTCTTGAACAAAGCGTGGAGGTGTGTGAAGCGAGAGC
ACTTCAGAGATTTTCGGTTTTATAATCGCACAGCTCCAAAATTGCGCATGAGCGTAACCCGTCGGTGAAGCGTGCGGTAATTTGGTGTAGAGGC
ATCTCTATCTGCGTACCCTCAACTGCCCCTGTCAGCGAATTCGGGAAATAAGAACGGGTTTTCGATGCCGCCCCCAAACGGAGCATAAATGTGA
TTCATTCCGACTGACTTCGATTATCTTGATTCTGTATCGTCTGGAGTTGGATCTCCAGGCCGCGCAGCACGTCGCGGTGAGAACACGTTGTAGATACCCCGTACAAAGACCCCACCCTTCTAACCCCTGGCATT
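The first line of the file is the PHYLIP header: 60 taxa with 500 sites each (the long sequence lines above are simply wrapped by the terminal). To read the header back:

# print the taxa and site counts from the header
awk 'NR==1 {print $1, "taxa,", $2, "sites"; exit}' seq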

# copy example sbatch script:
cp /gpfs/runtime/opt/phyml/3.0/example/mpiphyml .

# take a look at the script
less mpiphyml 
#!/bin/bash

# Request a half hour of runtime:
#SBATCH --time=30:00

# Use 2 nodes with 8 tasks each, for 16 MPI tasks:
#SBATCH --nodes=2
#SBATCH --tasks-per-node=8

# Specify a job name:
#SBATCH -J MyMPIJob

# Specify output and error files
#SBATCH -o out_MyMPIJob.%j
#SBATCH -e err_MyMPIJob.%j

# Load the module and run phyml
module load phyml/3.0

# Compute the total MPI task count (nodes * tasks per node);
# SLURM typically also exposes this as $SLURM_NTASKS
SLURM_NPROCS=$(( $SLURM_NNODES * $SLURM_NTASKS_PER_NODE ))

mpirun -n $SLURM_NPROCS phyml -i seq

# for help with phyml command-line options, visit http://www.atgc-montpellier.fr/phyml/ or run: phyml --help
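Note that the MPI build of PhyML gains most of its speedup by distributing bootstrap replicates across tasks, so a run without bootstrapping will not use the extra tasks effectively. A sketch of a bootstrapped run (see phyml --help for the flags; -d sets the data type and -b the number of bootstrap replicates):

mpirun -n $SLURM_NPROCS phyml -i seq -d nt -b 100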

# Submit the job
sbatch mpiphyml
Submitted batch job 1274474

# To check job status
myq

Jobs for user ldong

Running:
(none)

Pending:
ID       NAME      PARTITION  CPUS  WALLTIME  EST.START  REASON
1274474  MyMPIJob  batch      2     30:00     N/A        (None)
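# myq is an Oscar convenience wrapper; standard SLURM shows the same information:
squeue -u $USER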

# To check job status again
myq
Jobs for user ldong

Running:
ID       NAME      PARTITION  CPUS  WALLTIME  REMAINING  NODES
1274477  MyMPIJob  batch      16    30:00     29:59      node[157-158]

Pending:
(none)

# Check job status again; a finished job no longer appears in the list
myq
Jobs for user ldong

Running:
(none)

Pending:
(none)
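Once the job disappears from myq it has finished. PhyML 3.0 names its outputs after the input file, so for the input seq the results should look roughly like this (file names follow PhyML's usual convention; confirm in your own run):

# list the results
ls
# expect seq_phyml_tree.txt (the inferred tree, Newick format) and
# seq_phyml_stats.txt (model and run statistics), along with the
# out_MyMPIJob.<jobid> and err_MyMPIJob.<jobid> files written by SLURM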
