#SBATCH -A mendoza_q
#SBATCH --exclude=amr-163,amr-178,amr-179
These directives submit the job under our account, exclude our buy-in nodes, and thereby push the job onto general-partition hardware, avoiding incurring buy-in CPU time.
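Put together with a scheduler preamble, a minimal job script using these directives might look like the sketch below (the time, node, and task requests are illustrative, not prescribed by this document):

```shell
# write a minimal job script using the directives above; resource requests
# (-t, -N, -n) are illustrative and should be adjusted to your job
cat > myjob.sb << 'EOF'
#!/bin/bash --login
#SBATCH -A mendoza_q
#SBATCH --exclude=amr-163,amr-178,amr-179
#SBATCH -t 4:00:00
#SBATCH -N 1
#SBATCH -n 8
srun vasp_std
EOF
# submit with: sbatch myjob.sb
```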
USPEX is a leading code for computational materials discovery.
USPEX is free-to-use software licensed at the individual level, not at the group level.
So by default, each member of the group needs to install USPEX for themselves.
Below I provide the steps you need to follow to install USPEX and run a simple example.
In the second section I provide hints and examples on how to run USPEX on our cluster for structure optimization.
The following instructions will guide you through an installation of USPEX version 10.5.0 on the MSU-HPCC.
Create an account at uspex-team.org. You will have to provide your name and university affiliation.
Download the most recent version of USPEX which, as of May 18th, 2022, is 10.5.0. The archive file should be called USPEX-10.5.tar.gz.
Transfer this file to a location on the MSU-HPCC in your home directory that is suitable for installation.
Extract files from the archive file.
tar -xvf USPEX-10.5.tar.gz
This will create a USPEX_v10.5/ directory. The remaining steps are explained in USPEX_v10.5/README
Python library installation
MSU-HPCC has most of the libraries already installed, but if you are missing some you may have to install them with pip3 as instructed in the README file, copied here:
To make use of all features of USPEX, one needs to
install the following python libraries for python3.6 - python3.9:
In the HPCC you can load up a Python3.8 instance:
- module purge
- module load Python/3.8
Create a python virtual environment specifically for USPEX
- virtualenv --system-site-packages py3_uspex
To load the environment
- . py3_uspex/bin/activate
Necessary libs: numpy scipy spglib pysqlite ase
Additional libs: matplotlib requests zipfile torch pylada
How to install these libraries?
** If you HAVE root privilege:
- sudo apt-get install libsqlite3-dev
- sudo apt-get install python-pip3
- sudo pip3 install numpy scipy spglib pysqlite ase matplotlib requests torch zipfile pandas openpyxl
** If you do NOT have root privilege:
You can download and install libsqlite3-dev from the following link:
wget http://archive.ubuntu.com/ubuntu/pool/main/s/sqlite3/sqlite3_3.11.0.orig.tar.xz
- pip install --user numpy scipy spglib pysqlite3 ase matplotlib requests torch zipfile pandas openpyxl
*** For running pmpath.py user must install "pylada"
library for python3.
*** For calculation of elastic properties using Machine Learning, "torch"
library must be installed for python3.
*** For updating USPEX using "USPEX -u", "requests" and "zipfile" libraries
*** For running our parallelization scripts, "pandas" and "openpyxl" are needed
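As a quick sanity check after installing, a short snippet (a sketch, not part of USPEX) can report which of the libraries listed above are still missing from the active environment:

```python
# report which of the required/optional USPEX libraries are importable;
# the two lists mirror the README excerpt above
import importlib.util

required = ['numpy', 'scipy', 'spglib', 'ase']
optional = ['matplotlib', 'requests', 'torch', 'pandas', 'openpyxl']
missing = [m for m in required + optional
           if importlib.util.find_spec(m) is None]
print('missing libraries:', missing or 'none')
```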
Run the installation file and follow the prompts.
./install.sh
Provide proper access to your installation, as instructed in the README
chmod +rw -R /<installation PATH of USPEX>/application/archive/
Environment variables should have automatically been defined at the end of your ~/.bashrc. Confirm they are there:
export MCRROOT=/<installation PATH of MCR>/
export PATH=/<installation PATH of USPEX>/application/archive/:$PATH
export USPEXPATH=/<installation PATH of USPEX>/application/archive/src
You may also have to source your ~/.bashrc at this time:
source ~/.bashrc
Create a new directory for USPEX jobs and create an Examples subdirectory.
Confirm that USPEX is working with the following commands:
USPEX -h returns flags USPEX can use. The output should look like this:
Usage: USPEX OPTIONS
Options:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-p, --parameter specify parameter to get help. If no value or 'all'
value is specified, all INPUT.txt parameters will be
shown
-e, --example show USPEX example details. If no value or 'all' value
is specified, all examples will be shown
-c NUM, --copy=NUM copy the INPUT.txt file and Specific Folder of
ExampleXX.
-g, --generate generate directories for preparing an USPEX calculation,
including AntiSeeds, Seeds, Specific, Submission folders
-r, --run run USPEX calculation
-u, --update update USPEX to its latest version
--clean clean calculation folder
USPEX -v returns the version. The output should look like this:
USPEX version 10.5.0 (08/07/2021)
USPEX -e 13 prints information about example calculation #13. We select this example because it is the only one that uses no external codes, so it is the simplest, and we test it first. The output should look like this:
EX13-3D_special_quasirandom_structure_TiCoO: USPEX can easily find the most disordered alloy structure. Here, this is shown for the TixCo(1-x)O. You need to specify the initial structure in /Seeds/POSCARS and use only the permutation operator. In this case, you do not need to use any external codes. In this example, we optimize (minimize) the structural order (Oganov and Valle (2009); Lyakhov Oganov Valle (2010)) without relaxation (abinitioCode = 0). Seed structure (supercell of Ti-Cu-O-structure) is permutated in a search of the permutant with the minimum/maximum order. Minimizing order in this situation, one gets a generalized version of the "special quasirandom structure".
Now we will test to see that example calculation #13 works. Run the following commands:
mkdir EX13
cd EX13
USPEX -c 13
USPEX -r
Confirm the calculation runs.
After installing USPEX you should be able to run structure optimization in various QM codes. Here we show how to interface USPEX with VASP, but for a comprehensive list and guide check out the USPEX manual.
In order to run USPEX, you need to prepare the following inputs:
a) INPUT.txt # Input file describing the compound you wish to find the structure for + settings of which external soft to use (VASP / GULP etc.)
b) Specific # Folder that contains INCAR file describing the optimization routine parameters at each step (INCAR_1, INCAR_2, ...) and
POTCAR files for all the elements present in your compound (POTCAR_Ga and POTCAR_N for GaN).
c) Submission # Folder containing scripts defining how to run the simulations, and check the status:
submitJob_local.py and checkStatus_local.py for running on our local cluster (_remote for the case when cluster and your storage
are in different places (not our case)).
d) run_uspex.sh # The script to run USPEX in the loop (universal).
Since each call to USPEX just launches a batch of new simulations, and does not launch the next batch automatically,
it needs to be called continuously in a loop. It will automatically pick up the current state of the simulation and run new calculations as others finish.
optional e) ParallelUSPEX.py and ConvexHullTest.xlsx # The xlsx used as a reference for a list of compositions to test,
and the python script that parallelizes the submission of them.
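The checklist above maps onto a directory layout like the following sketch (GaN names taken from the POTCAR example; the actual file contents come from the sections that follow):

```shell
# create the skeleton of a USPEX job directory matching items a)-d) above
mkdir -p GaN_job/Specific GaN_job/Submission
touch GaN_job/INPUT.txt GaN_job/run_uspex.sh
touch GaN_job/Specific/INCAR_1 GaN_job/Specific/INCAR_2
touch GaN_job/Specific/POTCAR_Ga GaN_job/Specific/POTCAR_N
touch GaN_job/Submission/submitJob_local.py GaN_job/Submission/checkStatus_local.py
```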
INPUT.txt
PARAMETERS EVOLUTIONARY ALGORITHM
******************************************
* TYPE OF RUN AND SYSTEM *
******************************************
USPEX : calculationMethod (USPEX, VCNEB, META)
300 : calculationType (dimension: 0-3; molecule: 0/1; varcomp: 0/1)
1 : AutoFrac
% optType
1
% EndOptType
% atomType
Ag Br S
% EndAtomType
% numSpecies
19 7 6
% EndNumSpecies
*** Estimated atomic volume ***
% Latticevalues
% Endvalues
******************************************
* POPULATION *
*******************************************
50 : populationSize (how many individuals per generation)
50 : initialPopSize
75 : numGenerations (how many generations shall be calculated)
8 : stopCrit
0 : reoptOld
0.6 : bestFrac
******************************************
* VARIATION OPERATORS *
******************************************
0.50 : fracGene (fraction of generation produced by heredity)
0.30 : fracRand (fraction of generation produced randomly from space groups)
0.20 : fracAtomsMut (fraction of the generation produced by softmutation)
0.00 : fracLatMut (fraction of the generation produced by lattice mutation)
0.00 : fracPerm
*****************************************
* DETAILS OF AB INITIO CALCULATIONS *
*****************************************
% abinitioCode
1 1 1 1 1 1
% ENDabinit
% KresolStart
0.13 0.11 0.09 0.07 0.05 0.04
% Kresolend
% commandExecutable
srun vasp_std
% EndExecutable
1 : whichCluster (0: no-job-script, 1: local submission, 2: remote submission)
20 : numParallelCalcs
0.00001: ExternalPressure
The idea is to first run a fast, rough optimization, and then refine the obtained approximation with progressively more accurate optimizations.
INCAR_1 example:
SYSTEM = GaN
IBRION = 2
ISMEAR = 0
ISYM = 2
POTIM = 0.050
ISTART = 0
LCHARG = FALSE
LWAVE = FALSE
LSORBIT = FALSE
NELM = 200
NELMIN = 6
GGA = PE
NWRITE = 0
NCORE = 4
# CHANGING PART
PREC = MED
EDIFF = 1e-3
ISIF = 2
NSW = 40
EDIFFG = -1e-1
INCAR_2:
# CHANGING PART
PREC = MED
EDIFF = 1e-4
ISIF = 4
NSW = 40
EDIFFG = -1e-2
INCAR_3:
# CHANGING PART
PREC = Normal
EDIFF = 3e-5
ISIF = 3
NSW = 40
EDIFFG = -3e-3
ISMEAR = -5
INCAR_4:
# CHANGING PART
PREC = High
EDIFF = 1e-5
ISIF = 3
NSW = 40
EDIFFG = -1e-3
ISMEAR = -5
You can go further and add, say, INCAR_5 and INCAR_6 with even higher precision.
To get the POTCAR files, navigate to: /mnt/research/mendozacortes_group/VASP6..potpaw*
Note that USPEX needs them to be named like so: POTCAR_Ga and POTCAR_N for GaN.
submitJob_local.py
from __future__ import with_statement
from __future__ import absolute_import
from subprocess import check_output
import re
import sys
from io import open

def submitJob_local(index, commandExecutable):
    u"""
    This routine submits a job locally.
    One needs to edit it slightly for your own case:
    Step 1: prepare the job script required by your supercomputer
    Step 2: submit the job with a command like qsub, bsub, llsubmit, etc.
    Step 3: parse the job ID from the screen message
    :return: job ID
    """
    # Step 1
    myrun_content = ''
    myrun_content += '#!/bin/bash --login\n'
    myrun_content += '#SBATCH -o out\n'
    myrun_content += '#SBATCH -J USPEX-' + str(index) + '\n'
    myrun_content += '#SBATCH -t 4:00:00\n'
    myrun_content += '#SBATCH -N 1\n'
    myrun_content += '#SBATCH -A general\n'
    myrun_content += '#SBATCH -n 8\n'
    myrun_content += 'module purge\n'
    myrun_content += 'module load VASP/6.2.1-nvofbf-2022.07-OpenMP\n'
    # myrun_content += 'module load gnu-openmpi\n'
    # myrun_content += 'cd ${PBS_O_WORKDIR}\n'  # check this; must have /cephfs suffix with SBATCH in my case
    myrun_content += commandExecutable + '\n'
    with open('myrun', 'w') as fp:
        fp.write(myrun_content)
    # Step 2
    # sbatch prints a message like 'Submitted batch job 2350873'
    output = str(check_output('sbatch myrun', shell=True))
    # Step 3
    # Here we parse the job ID from the output of the previous command
    print(output)
    jobNumber = int(re.findall(r'\d+', output)[0])
    return jobNumber

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', dest='index', type=int)
    parser.add_argument('-c', dest='commandExecutable')
    args = parser.parse_args()
    jobNumber = submitJob_local(index=args.index, commandExecutable=args.commandExecutable)
    print('<CALLRESULT>')
    print(int(jobNumber))
run_uspex.sh
while true; do
date >> log
USPEX -r >> log
sleep 5
done
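Since the loop never exits on its own, it is convenient to run it in the background and record its PID so it can be killed once USPEX_IS_DONE appears. A sketch (the placeholder script here stands in for the real run_uspex.sh so the snippet is self-contained):

```shell
# stand-in for run_uspex.sh so this sketch is self-contained
printf '%s\n' '#!/bin/bash' 'sleep 60' > run_uspex.sh
chmod +x run_uspex.sh
# run the loop in the background and keep its PID
nohup ./run_uspex.sh > loop.out 2>&1 &
echo $! > uspex_loop.pid
# later, once USPEX_IS_DONE exists:
kill "$(cat uspex_loop.pid)"
```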
This script is fed the list of compositions from an xlsx worksheet. It then prepares each calculation by modifying the input folder template.
Each new composition is then run in parallel.
Note: please make sure to load and activate the py3_uspex environment made in the installation steps. You will also need to update all paths referenced in this script, the xlsx sheet info, and n, the number of compositions.
ParallelUSPEX.py
import os
import pandas as pd
import numpy as np
import concurrent.futures

n = 4  # this should match the number of compositions being tested

# Opens composition list
WS = pd.read_excel('ConvexHullTest.xlsx', 'AgBrS', engine='openpyxl')
WS_np = np.array(WS)

# Sorts compositions as x,y,z in AgxBrySz
xyz = []
for i in range(n):
    xyz.append([WS_np[i+1, 4], WS_np[i+1, 5], WS_np[i+1, 6]])

# Copies and modifies the example input folder to calculate a specific composition
def USPEX(xyz):
    # AgxBrySz
    x = xyz[1][0]
    y = xyz[1][1]
    z = xyz[1][2]
    os.chdir('/mnt/home/djokicma/USPEX/AgxBrySz/inputs/')
    os.system('mkdir ../Ag' + str(x) + 'Br' + str(y) + 'S' + str(z))
    os.system('cp -r * ../Ag' + str(x) + 'Br' + str(y) + 'S' + str(z) + '/')
    # Move into the new composition directory
    os.chdir('../Ag' + str(x) + 'Br' + str(y) + 'S' + str(z) + '/')
    file_to_search = os.getcwd()
    for rootdir, dirs, files in os.walk(file_to_search):
        for f in files:
            if "INCAR" in f:
                with open(rootdir + '/' + f, 'r') as fi:
                    l = list(fi)
                # Replace the system name in the INCAR file to match AgxBrySz
                l[0] = 'SYSTEM = Ag' + str(x) + 'Br' + str(y) + 'S' + str(z) + '\n'
                with open(rootdir + '/' + f, 'w') as output:
                    for lines in l:
                        output.write(lines)
            if "INPUT.txt" in f:
                with open(rootdir + '/' + f, 'r') as fi:
                    l = list(fi)
                # Replace the numSpecies line in INPUT.txt to match AgxBrySz
                l[17] = str(x) + ' ' + str(y) + ' ' + str(z) + '\n'
                with open(rootdir + '/' + f, 'w') as output:
                    for lines in l:
                        output.write(lines)
    os.system('./run_uspex.sh')

# each work item is a (dummy, [x, y, z]) pair; USPEX() reads index 1
list_dat = [(1, j) for j in xyz]
with concurrent.futures.ProcessPoolExecutor(max_workers=n) as executor:
    executor.map(USPEX, list_dat)
Note: older USPEX releases (e.g. version 10.3, used in the instructions below) run on python 2.7 and require some additional packages.
Below I provide instructions on how to install them locally. A better solution would be a python-2.7 virtual environment for the group.
1. Download Anaconda python distribution for python 2.7 from https://www.anaconda.com/distribution/
2. Install Anaconda to some local directory, say /gpfs/home/$USER/anaconda2/
3. After installing Anaconda, relogin and check that python is properly recognized
[$USER@hpc-login-vm2 ~]$ which python
~/anaconda2/bin/python
4. For USPEX you need the following python packages: numpy, scipy, matplotlib, spglib, pysqlite, pysqlite3, ase
The first three ship with the Anaconda distribution (so no need to install them).
In order to install the remainder, please run the following commands:
conda install -c conda-forge spglib
conda install -c anaconda sqlite
conda install -c matsci ase
5. After step #4 launch python (or ipython) in your console and try importing necessary packages:
$ python
Python 2.7.16 |Anaconda, Inc.| (default, Mar 14 2019, 21:00:58)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ase
>>> import spglib
>>> import sqlite3
>>> import numpy
>>> import scipy
>>> import matplotlib
Note: we are now done with python; next, installing USPEX.
6. Download USPEX distribution from https://uspex-team.org/en/uspex/downloads (need to create an account there if you have none)
It comes as an archive, USPEX_release.tar.gz, which you need to extract, e.g.
$ tar -xzvf USPEX_release.tar.gz
7. After step #6 you should have a folder called USPEX_v10.3 containing USPEX installation script and README file.
I suggest you read the README, as it gives a quite detailed scheme for how to proceed with the installation.
The rest of this guide is mostly based on what is given there.
8. Go to USPEX_v10.3 and run the installation script
$ cd USPEX_v10.3
$ chmod +x install.sh
$ ./install.sh
Hint1: when asked by the installer, please use console installation, as it will allow automatic setting of needed environment variables to your ~/.bashrc
Hint2: select the local directory to store the USPEX, say USPEX_installation
9. Update environment variables, as we need to get the ones for running USPEX
$ source ~/.bashrc
10. Check that USPEX is up and available
$ USPEX -h # should display the help message
11. Download USPEX manual from https://uspex-team.org/static/file/uspex_manual_english_10.2.pdf
12. Run an example not requiring additional software (e.g. VASP, LAMMPS etc.) (see page 21 of the manual)
$ mkdir EX13 # creating a directory for the experiment
$ cd EX13 # moving to the directory
$ USPEX -c 13 # copying necessary files to run the experiment
$ cd .. # exiting to parent directory
$ chmod -R +wx EX13 # giving write and execute permissions to yourself
$ chmod -R g+wx EX13 # giving write and execute permissions to your group
$ cd EX13
$ USPEX -r # launching USPEX on this data
13. Consult with manual for further usage of USPEX.
After installing USPEX you should be able to run structure optimization.
As an example, below I provide step-by-step instructions on how to run the structure optimization for GaN.
It is straightforward, though, to write a script that adjusts the inputs for any other compound.
All the files described below, together with the results for GaN, are available here: /gpfs/research/mendozagroup/USPEX/Structure_optimization_example/GaN
In order to run USPEX, you need to prepare the following inputs:
a) INPUT.txt # input file describing the compound you wish to find the structure for + settings of which external soft to use (VASP / GULP etc.)
b) Specific # folder that contains INCAR file describing the optimization routine parameters at each step (INCAR_1, INCAR_2, ...) and
POTCAR files for all the elements present in your compound (POTCAR_Ga and POTCAR_N for GaN).
c) Submission # Folder containing scripts defining how to run the simulations, and check the status:
submitJob_local.py and checkStatus_local.py for running on our local cluster (_remote for the case when cluster and your storage
are in different places (not our case)).
d) run_uspex.sh # the script to run USPEX in the loop (universal).
Since each call to USPEX just launches a batch of new simulations, and does not launch the next batch automatically,
it needs to be called continuously in a loop. It will automatically pick up the current state of the simulation and run new calculations as others finish.
run_uspex.sh
while true; do
date >> log
USPEX -r >> log
sleep 5
done
Notes on preparing INCAR files:
INCAR files define the optimization parameters on each step of optimization.
The idea is to first run a fast, rough optimization, and then refine the obtained approximation with progressively more accurate optimizations.
INCAR_1 example:
SYSTEM = GaN
IBRION = 2
ISMEAR = 0
ISYM = 2
POTIM = 0.050
ISTART = 0
LCHARG = FALSE
LWAVE = FALSE
LSORBIT = FALSE
NELM = 200
NELMIN = 6
GGA = PE
NWRITE = 0
NCORE = 4
# CHANGING PART
PREC = MED
EDIFF = 1e-3
ISIF = 2
NSW = 40
EDIFFG = -1e-1
INCAR_2:
# CHANGING PART
PREC = MED
EDIFF = 1e-4
ISIF = 4
NSW = 40
EDIFFG = -1e-2
INCAR_3:
# CHANGING PART
PREC = Normal
EDIFF = 3e-5
ISIF = 3
NSW = 40
EDIFFG = -3e-3
ISMEAR = -5
INCAR_4:
# CHANGING PART
PREC = High
EDIFF = 1e-5
ISIF = 3
NSW = 40
EDIFFG = -1e-3
ISMEAR = -5
You can go further and add, say, INCAR_5 and INCAR_6 with even higher precision.
Preparing INPUT.txt
Below I provide an example of INPUT.txt file and guide through the parts you need to change to run the structure optimization for a different compound.
PARAMETERS EVOLUTIONARY ALGORITHM
******************************************
* TYPE OF RUN AND SYSTEM *
******************************************
USPEX : calculationMethod (USPEX, VCNEB, META)
300 : calculationType (dimension: 0-3; molecule: 0/1; varcomp: 0/1)
1 : AutoFrac
% optType
1
% EndOptType
**This is the list of elements used in our compounds without multiplicity.
% atomType
Ga N
% EndAtomType
**This is multiplicity of elements in the same order they are given in atomType
% numSpecies
1 1
% EndNumSpecies
**These are the valences of the elements, in the same order as in atomType
% valences
3 3
% EndValences
*** Estimated atomic volume (can be skipped, if no prior)***
% Latticevalues
150
% Endvalues
******************************************
* POPULATION *
******************************************
20 : populationSize (how many individuals per generation)
20 : initialPopSize
25 : numGenerations (how many generations shall be calculated)
8 : stopCrit
0 : reoptOld
0.6 : bestFrac
******************************************
* VARIATION OPERATORS *
******************************************
0.50 : fracGene (fraction of generation produced by heredity)
0.30 : fracRand (fraction of generation produced randomly from space groups)
0.20 : fracAtomsMut (fraction of the generation produced by softmutation)
0.00 : fracLatMut (fraction of the generation produced by lattice mutation)
0.00 : fracPerm
*****************************************
* DETAILS OF AB INITIO CALCULATIONS *
*****************************************
** Using VASP for the computation on each step (space as separator, not comma!)
% abinitioCode
1 1 1 1
% ENDabinit
% KresolStart
0.13 0.11 0.09 0.07 0.05 0.04
% Kresolend
**Command to run VASP (precise interaction in cluster is defined in Submission folder).
**Our VASP is vasp_std, not vasp.
% commandExecutable
mpirun -np 8 vasp_std
% EndExecutable
**Using local cluster to simulate (no need to make additional ssh / scp, as storage is accessible from cluster)
1 : whichCluster (0: no-job-script, 1: local submission, 2: remote submission)
10 : numParallelCalcs
0.00001: ExternalPressure
Preparing scripts for submitting jobs to the cluster and checking their status
submitJob_local.py
We need to modify the default submission script to be able to work on our cluster.
def submitJob_local(index, commandExecutable):
    # Step 1
    myrun_content = u''
    myrun_content += u'#!/bin/sh\n'
    myrun_content += u'#SBATCH -o out\n'
    myrun_content += u'#SBATCH -p mendoza_q\n'  # using the mendoza queue
    myrun_content += u'#SBATCH -J USPEX-' + unicode(index) + u'\n'
    myrun_content += u'#SBATCH -t 06:00:00\n'
    myrun_content += u'#SBATCH -N 1\n'
    myrun_content += u'#SBATCH -n 8\n'
    myrun_content += u'module purge\n'
    myrun_content += u'module load gnu\n'
    # myrun_content += u'module load gnu-openmpi\n'
    myrun_content += u'module load vasp\n'
    # myrun_content += 'cd ${PBS_O_WORKDIR}\n'  # check this; must have /cephfs suffix with SBATCH in my case
    myrun_content += commandExecutable + u'\n'
    with open(u'myrun', u'w') as fp:
        fp.write(myrun_content)
    # Step 2
    # sbatch prints a message like 'Submitted batch job 2350873'
    output = unicode(check_output(u'sbatch myrun', shell=True))
    # Step 3
    # Here we parse the job ID from the output of the previous command
    jobNumber = int(re.findall(ur'\d+', output)[0])
    return jobNumber
checkStatus_local.py
def checkStatus_local(jobID):
    # Step 1
    try:
        output = unicode(check_output(u'squeue -j {}'.format(jobID), shell=True))
        # need to use the proper queue-status check (squeue for our cluster)
    except:
        return True
    # Step 2
    doneOr = True
    if u' R ' in output or u' Q ' in output or u' PD ' in output or u' CG ' in output:
        # Need to list all the statuses the job can have (the default list leads to checking for results too early)
        doneOr = False
    if doneOr:
        for file in glob.glob(u'USPEX*'):
            os.remove(file)  # to remove the log file
    return doneOr
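The status test above can be exercised offline on a canned squeue line. This sketch (python 3; the sample output is invented for illustration) shows why the space-padded ' R ' patterns match a running job:

```python
# mimic the checkStatus_local status test on a sample squeue output line
sample = ('  JOBID PARTITION  NAME     USER ST  TIME NODES\n'
          ' 123456 mendoza_q  USPEX-1  user  R  0:10     1\n')
active_tags = (' R ', ' Q ', ' PD ', ' CG ')
done = not any(tag in sample for tag in active_tags)
print('job finished:', done)
```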
Interpreting results
Once the simulation is finished, USPEX will produce the file USPEX_IS_DONE.
Since we are running USPEX in an infinite loop using run_uspex.sh, you need to kill run_uspex.sh once the simulation is finished.
Results are stored in the folder resultsX, where X is the index of the results folder (e.g. results1).
1. BESTIndividuals
This file contains information on the best structures together with their enthalpy, e.g.
Gen ID Origin Composition Enthalpy Volume Density Fitness KPOINTS SYMM Q_entr A_order S_order
(eV) (A^3) (g/cm^3)
1 14 RandTop [ 1 1 ] -12.160 24.142 5.759 -12.160 [ 8 10 10] 216 -0.000 6.040 6.043
2 28 keptBest [ 1 1 ] -12.160 24.142 5.759 -12.160 [ 8 10 10] 216 -0.000 6.040 6.043
3 46 keptBest [ 1 1 ] -12.160 24.142 5.759 -12.160 [ 8 10 10] 216 -0.000 6.040 6.043
The above is taken from the GaN example. Here the algorithm already discovered the correct structure in the first generation, and it was not improved in subsequent generations. The best structure is the one with ID=14, having space group 216.
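Picking the best structure programmatically just means taking the minimum-enthalpy row; a sketch using the three rows from the excerpt above (with ties, min() keeps the earliest row, matching the ID=14 result):

```python
# rows as (generation, ID, enthalpy) from the BESTIndividuals excerpt above
rows = [
    (1, 14, -12.160),
    (2, 28, -12.160),
    (3, 46, -12.160),
]
# min() returns the first row at the minimum, i.e. the earliest best structure
gen, best_id, enthalpy = min(rows, key=lambda r: r[2])
print('best structure ID:', best_id)
```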
2. symmetrized_structures.cif
The file contains a CIF-format description of all the structures examined while searching for the correct structure.
Structures are ordered by ID, with each entry starting from
data_findsym-STRUC-${ID}
E.g.
...
data_findsym-STRUC-14
... # CIF for structure #14
data_findsym-STRUC-15
... # CIF for structure #15
...
Note: since you may further use the optimized structure to compute its characteristics (e.g. bandgap) you may need to get its CIF definition.
This is easily done by taking the structure with the lowest enthalpy from BESTIndividuals (the first row of the table) and extracting its CIF from symmetrized_structures.cif.
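The extraction can be scripted by scanning for the data_findsym-STRUC-${ID} markers; a hypothetical helper (the sample CIF lines below are invented for illustration):

```python
# pull one structure's block out of symmetrized_structures.cif by ID,
# using the data_findsym-STRUC-${ID} markers described above
def extract_cif(text, struc_id):
    marker = 'data_findsym-STRUC-%d' % struc_id
    out, grab = [], False
    for line in text.splitlines(True):
        if line.startswith('data_findsym-STRUC-'):
            grab = (line.strip() == marker)
        if grab:
            out.append(line)
    return ''.join(out)

sample = ('data_findsym-STRUC-14\n_cell_length_a 3.242\n'
          'data_findsym-STRUC-15\n_cell_length_a 3.244\n')
print(extract_cif(sample, 14))
```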
3. Individuals
This file contains the same information as BESTIndividuals, but for all structures examined during the optimization.
4. gatheredPOSCARS_order
This file contains POSCAR-format definitions for all the structures considered in the optimization.
The POSCAR definition of each structure starts with EA${ID}, e.g.
...
EA14 3.242 3.244 3.243 90.08 60.05 60.05 Sym.group: 216
1.0
-0.845325 -1.252046 2.869074
-3.086359 0.153216 0.988142
-0.020206 -3.215557 0.420656
Ga N
1 1
Direct
0.503922 0.889658 0.631900 6.040085
0.003677 0.139913 0.381945 6.040085
EA15 3.244 3.244 3.243 60.04 89.96 119.97 Sym.group: 216
...
Again, you can use the file BESTIndividuals to determine the ID of the best structure, and extract its POSCAR from gatheredPOSCARS_order.
The same information, for the best structures only, is given in BESTgatheredPOSCARS in the same order as BESTIndividuals, so there you can just take the first POSCAR (though referencing by ID is more robust to future convention changes).