# About the VASP install on Stallo

This page contains information related to the installation of VASP on Stallo. Some of this is relevant also for self-compilation of the code, for those who want to give this a try.

## VASP on Stallo:

Note that the VASP installation on Stallo mainly follows the standard layout introduced by the VASP team with their new installation scheme. On top of their system, we have added two binaries, as described below.

If you do

```shell
module avail VASP
```

on Stallo, you will notice that from version 5.4.1 onwards there is a dramatic increase in the number of available binaries, which may appear confusing at first.

First: all versions of VASP are compiled with support for maximally-localised Wannier functions and the Wannier90 program, and with the MPI flag in FPP (-DMPI).

Second: each release of VASP is compiled in two different versions, “tooled” and “plain”.

- VASP/x.y.z.tooled is a version with all necessary support added for the Transition State Tools for VASP (VTST), the implicit solvation model (VASPsol), and BEEF.
- VASP/x.y.z.plain is the version without these additions (a relatively unmodified source).

The reason for this is that we are uncertain about the effect of these tools on the calculated numbers; for the sake of reproducibility, we have chosen to keep the two versions separate.
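In practice, the choice between the two is made when loading the module. A minimal job-script fragment, assuming version 5.4.1 as an example (check `module avail VASP` for the versions actually installed), might look like:

```shell
# load the plain (relatively unmodified) build of an example version
module load VASP/5.4.1.plain

# ...or, instead, the build with VTST/VASPsol/BEEF support:
# module load VASP/5.4.1.tooled
```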

Then it starts getting interesting: for each version, there are 15 different binaries, consisting of 3 groups of 5.

- unmodified group: binaries without any additional suffix, e.g. vasp_std
- noshear group: vasp_std_noshear
- abfix group: vasp_std_abfix

The unmodified group is compiled from the unmodified set of Fortran files that comes with the code. The noshear group is compiled with a modified version of the constr_cell_relax.F file in which shear forces are not calculated. The abfix group is compiled with a modified version of the constr_cell_relax.F file in which two lattice vectors are kept fixed.

Each group contains the same 5 binaries, compiled with the FPP settings listed below:

- vasp_std
- vasp_gam
- vasp_ncl

These are familiar from the standard build system that is provided with the code. In addition to these, we have

- vasp_tbdyn
- vasp_gam_tbdyn
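The naming scheme above can be sketched as a small helper function. This is only an illustration (the helper itself is hypothetical, and the combined names for the gam/ncl/tbdyn binaries in the noshear and abfix groups are an assumption based on the vasp_std examples; check the module's install directory for the authoritative list):

```shell
# Hypothetical helper: compose a binary name from a base binary
# (vasp_std, vasp_gam, vasp_ncl, vasp_tbdyn, vasp_gam_tbdyn) and a
# cell-constraint group. "unmodified" binaries carry no suffix;
# "noshear" and "abfix" append one.
vasp_binary() {
    local base="$1" group="$2"
    case "$group" in
        unmodified)    printf '%s\n' "$base" ;;
        noshear|abfix) printf '%s_%s\n' "$base" "$group" ;;
        *)             printf 'unknown group: %s\n' "$group" >&2; return 1 ;;
    esac
}

vasp_binary vasp_std unmodified   # -> vasp_std
vasp_binary vasp_std noshear      # -> vasp_std_noshear
```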

## FPP settings for each binary:

1. vasp_std is compiled with the following additional FPP flag(s): -DNGZhalf
2. vasp_gam is compiled with the following additional FPP flag(s): -DNGZhalf -DwNGZhalf
3. vasp_ncl is compiled with no additional FPP flags (unmodified FPP settings)
4. vasp_tbdyn is compiled with the following additional FPP flag(s): -DNGZhalf -Dtbdyn
5. vasp_gam_tbdyn is compiled with the following additional FPP flag(s): -DNGZhalf -DwNGZhalf -Dtbdyn
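Restated as a lookup, for instance for use in your own build script (a sketch only, restating the flags listed above):

```shell
# Additional FPP flags for each binary, as listed above.
fpp_flags() {
    case "$1" in
        vasp_std)       printf '%s\n' "-DNGZhalf" ;;
        vasp_gam)       printf '%s\n' "-DNGZhalf -DwNGZhalf" ;;
        vasp_ncl)       printf '\n' ;;                 # no additional flags
        vasp_tbdyn)     printf '%s\n' "-DNGZhalf -Dtbdyn" ;;
        vasp_gam_tbdyn) printf '%s\n' "-DNGZhalf -DwNGZhalf -Dtbdyn" ;;
        *)              return 1 ;;
    esac
}

fpp_flags vasp_tbdyn   # -> -DNGZhalf -Dtbdyn
```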

We would be happy to provide a copy of our build scripts (patches) upon request.

## About memory allocation for VASP:

VASP is known to be potentially memory demanding. Quite often you may find yourself using less than the full number of cores on a node, but still all of its memory.

For core-count, node-count and amounts of memory on Stallo, see About Stallo.

There are two important considerations to make:

First: make sure that you are using the SBATCH --exclusive flag in your run script. Second: allocate all of the memory, as in the following example:

```shell
#!/bin/bash -l

#####################################################
# example for a job where we consume lots of memory #
#####################################################

#SBATCH --job-name=example

# we ask for 1 node
#SBATCH --nodes=1

# run for five minutes
#              d-hh:mm:ss
#SBATCH --time=0-00:05:00

# short partition should do it
#SBATCH --partition short

# total memory for this job
# this is a hard limit
# note that if you ask for more than one CPU has, your account gets
# charged for the other (idle) CPUs as well
#SBATCH --mem=31000MB

# turn on all mail notification
#SBATCH --mail-type=ALL

# you may not place bash commands before the last SBATCH directive

# define and create a unique scratch directory
SCRATCH_DIRECTORY=/global/work/${USER}/example/${SLURM_JOBID}
mkdir -p ${SCRATCH_DIRECTORY}
cd ${SCRATCH_DIRECTORY}

# we copy everything we need to the scratch directory
# ${SLURM_SUBMIT_DIR} points to the path where this script was submitted from
cp ${SLURM_SUBMIT_DIR}/my_binary.x ${SCRATCH_DIRECTORY}

# we execute the job and time it
time ./my_binary.x > my_output

# after the job is done we copy our output back to $SLURM_SUBMIT_DIR
cp ${SCRATCH_DIRECTORY}/my_output ${SLURM_SUBMIT_DIR}

# we step out of the scratch directory and remove it
cd ${SLURM_SUBMIT_DIR}
rm -rf ${SCRATCH_DIRECTORY}

# happy end
exit 0
```
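To adapt the generic script above to VASP, the execution lines would be replaced by loading a module and running one of the binaries. A sketch, where the version, binary, and MPI launcher are examples only (the exact launcher used on Stallo may differ):

```shell
# load one of the VASP modules (version and variant are examples)
module load VASP/5.4.1.plain

# run the standard binary in parallel within the SLURM allocation
# (mpirun as launcher is an assumption; adjust to the site's setup)
time mpirun vasp_std > my_output
```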