Help and quick tips

Below are some quick, useful tips for submitting jobs and compiling your code. For any problem, feel free to contact us; please use "Kultrun Support" as the subject of your email.

Contact e-mail address:

Modules system

Modules: basics

KULTRUN has been configured to work with an environment-modules system. This allows users to easily set up the environment needed to compile and run their codes.
The basic module commands are:

module - shows the list of module commands
module avail - shows a list of "available" modules
module list - shows a list of loaded modules
module load [name] - loads a module
module unload [name] - unloads a module
module help [name] - prints help for a module
module whatis [name] - prints info about the module
module purge - unload all modules
module swap [name1] [name2] - swap two modules
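
A typical interactive session combining these commands might look like the following (the module names are examples taken from this cluster's module list; check `module avail` for the versions actually installed):

```shell
module avail              # see which modules are installed
module load gcc/5.5.0     # pick a compiler
module load openmpi/2.1.3 # and an MPI stack built for it
module list               # verify what is currently loaded
module purge              # start again from a clean environment
```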

Modules can be loaded automatically by adding the corresponding module load lines to your .bashrc file.
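
For example, to have the Intel toolchain available in every new shell, you could append lines like these to your ~/.bashrc (the module names are examples from this cluster's module list; adjust the versions to what module avail reports):

```shell
# Loaded automatically at login; versions are examples, adjust as needed
module load intel/2018.3.222
module load gsl/2.4_intel
module load hdf5/1.10.2_intel
module load fftw/2.1.5_intel
```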

Modules available on Kultrun

Kultrun provides all the basic modules needed to compile standard codes: the HDF5 library, OpenMPI, MPICH, FFTW2, FFTW3, the GSL libraries, and the GNU and Intel compilers.
We can install new modules when needed. The currently available modules are listed below.

-------------------------- /opt/modulos/modulefiles ----------------------------
casa/5.3.0-143.el7 gcc/4.9.4 hdf5/1.10.2 mpich/3.2.1
fftw/2.1.5 gcc/5.5.0 hdf5/1.10.2_intel openmpi/1.10.7
fftw/2.1.5_intel gsl/2.4 intel/2018.3.222 openmpi/2.1.3
fftw/3.3.8 gsl/2.4_intel mpich/1.5 openmpi/3.1.0

Typical batch scripts for Kultrun

To prepare job scripts you need the so-called Slurm "directives" (the #SBATCH lines in the examples below) and their options.
See here for a comprehensive list.
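
As a general reference before the specific examples, a minimal batch script combining the most common directives might look like this (the partition, resource, and file names are placeholders to adapt to your own job):

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # job name shown by squeue
#SBATCH --partition=mapu         # partition (queue) to submit to
#SBATCH --nodes=1                # number of nodes
#SBATCH --ntasks-per-node=32     # MPI tasks per node
#SBATCH --time=24:00:00          # walltime limit (hh:mm:ss)
#SBATCH --output=job_%j.out      # stdout file (%j expands to the job ID)

# module loads and run commands go here
```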

MPI script INTEL env

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --partition=mapu
#SBATCH --ntasks-per-node=32

module load intel/2018.3.222
module load gsl/2.4_intel
module load hdf5/1.10.2_intel
module load fftw/2.1.5_intel

cd /home/user1/run_dir
srun --mpi=pmi2 ./code.exe input > output
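
Assuming the script above is saved as, say, job.sh (a placeholder name), it is submitted and monitored with the standard Slurm commands:

```shell
sbatch job.sh        # submit the script; prints the assigned job ID
squeue -u $USER      # list your pending and running jobs
scancel <jobid>      # cancel a job if needed
```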

MPI script GNU env

#!/bin/bash
#SBATCH --job-name=test_name
#SBATCH --partition=kutral_amd
#SBATCH --ntasks-per-node=32

module load openmpi/2.1.3

cd /home/user2/run_dir

mpiexec ./code input > output

MPI script on scratch dir

#!/bin/bash
#SBATCH --job-name=test_name
#SBATCH --partition=kutral_amd
#SBATCH --ntasks-per-node=32

module load openmpi/2.1.3

export SCRDIR=/scratch/${SLURM_JOB_ID}
mkdir $SCRDIR
cp -rp * $SCRDIR/
cd $SCRDIR
mpiexec ./code input > output
cp -rp output $SLURM_SUBMIT_DIR/
rm -rf $SCRDIR

Hybrid (MPI+OpenMP)

#!/bin/bash
#SBATCH --job-name=hybrid
#SBATCH --output=hybrid_job.txt
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=5
#SBATCH --nodes=2

# Use the CPUs reserved per MPI task as OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

mpirun ./hello_hybrid.mpi

It is important to note that for GNU/OpenMPI applications the usage of mpirun and mpiexec is identical.
By contrast, for Intel applications we strongly suggest using srun with the option --mpi=pmi2 to let Slurm set up the right environment (more here).