Help and quick tips

Below you will find some quick tips for submitting jobs and compiling your code. For any problem, feel free to contact us.
Please use "Kultrun Support" as the subject of your e-mail.

Contact e-mail address:

Modules system

Modules: basics


KULTRUN has been configured to work in a module-based way. This allows users to easily set up the environment needed to compile and run their codes.
The basic module commands are listed below; a short example session follows the list.

module - shows the list of module commands
module avail - shows a list of "available" modules
module list - shows a list of loaded modules
module load [name] - loads a module
module unload [name] - unloads a module
module help [name] - prints help for a module
module whatis [name] - prints info about the module
module purge - unload all modules
module swap [name1] [name2] - swap two modules
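
For example, a typical interactive session could look like the following sketch (the module names are taken from the list of available modules further below and are only meant as an illustration):

# Load the GNU compiler and a matching MPI stack
module load gcc/5.5.0
module load openmpi/2.1.3

# Check which modules are currently loaded
module list

# Swap one OpenMPI version for another
module swap openmpi/2.1.3 openmpi/3.1.0

# Start again from a clean environment
module purge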


Modules can also be loaded automatically at login by adding the corresponding module load commands to your .bashrc file.
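
For instance, to have a default toolchain available in every new shell, you could add something like the following to your ~/.bashrc (the chosen modules are just an example; pick the ones you actually need):

# Load a default environment at login
module load intel/2018.3.222
module load gsl/2.4_intel
module load hdf5/1.10.2_intel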



Modules available on Kultrun


Kultrun provides all the basic modules needed to compile standard codes: the HDF5 library, OpenMPI, MPICH, FFTW2, FFTW3, the GSL libraries, and the GNU and Intel compilers.
New modules can be installed when needed. The currently available modules are listed below, followed by a short compilation example.

-------------------------- /opt/modulos/modulefiles ----------------------------
ansys/20.1            fftw/3.3.8_intel_mpi  gsl/2.4_intel          mpich/1.5            python/3.5.6
casa/5.3.0-143.el7    fftw/3.3.8_openmpi    hdf5/1.10.2            mpich/3.2.1          python/3.5.6_openmpi
fftw/2.1.5            gcc/4.9.4             hdf5/1.10.2_intel      openmpi/1.10.7
fftw/2.1.5_intel      gcc/5.5.0             intel/2018.3.222       openmpi/2.1.3
fftw/2.1.5_intel_mpi  gildas/jun19c         intel/impi-2018.3.222  openmpi/3.1.0
fftw/3.3.8            gsl/2.4               mercurial/4.6.1        python/2.7.16_intel
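
As an illustration of how these modules are combined, here is a minimal compilation sketch for an MPI code using the GNU toolchain (the source file name mpi_code.c is just a placeholder; mpicc is the OpenMPI compiler wrapper):

# Set up the GNU + OpenMPI environment
module load gcc/5.5.0
module load openmpi/2.1.3

# Compile an MPI C code with the OpenMPI wrapper
mpicc -O2 -o code.exe mpi_code.c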

Typical batch scripts for Kultrun

To prepare job scripts you need the so-called Slurm "directives" (the #SBATCH lines in the examples below) together with the usual Slurm options.
See here for a comprehensive list. Jobs are submitted with sbatch, as sketched right below.
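
For instance, assuming the script has been saved as job.sh (a placeholder name), a typical submit-and-monitor cycle looks like this:

# Submit the job script to Slurm
sbatch job.sh

# Check the state of your jobs in the queue
squeue -u $USER

# Cancel a job if needed (replace <jobid> with the ID reported by sbatch)
scancel <jobid>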

MPI script INTEL env


#!/bin/bash
#SBATCH --job-name=test             # job name shown in the queue
#SBATCH --partition=mapu            # partition (queue) to run on
#SBATCH -N 4                        # number of nodes
#SBATCH --ntasks-per-node=32        # MPI tasks per node

# Intel toolchain and libraries built against it
module load intel/2018.3.222
module load gsl/2.4_intel
module load hdf5/1.10.2_intel
module load fftw/2.1.5_intel

cd /home/user1/run_dir
# With Intel MPI, launch through srun using the pmi2 interface
srun --mpi=pmi2 ./code.exe input > output



MPI script GNU env


#!/bin/bash
#SBATCH --job-name=test_name
#SBATCH --partition=kutral_amd
#SBATCH -N 1
#SBATCH --ntasks-per-node=32

module load openmpi/2.1.3

cd /home/user2/run_dir

mpiexec ./code input > output



MPI script on scratch dir


#!/bin/bash
#SBATCH --job-name=test_name
#SBATCH --partition=kutral_amd
#SBATCH -N 1
#SBATCH --ntasks-per-node=32

module load openmpi/2.1.3

cd $SLURM_SUBMIT_DIR
# Create a per-job scratch directory and copy the input files there
export SCRDIR=/scratch/${SLURM_JOB_ID}
mkdir $SCRDIR
cp -rp * $SCRDIR/
cd $SCRDIR
# Run in scratch, then copy the results back and remove the scratch directory
mpiexec ./code input > output
cp -rp * $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
rm -rf $SCRDIR



Hybrid (MPI+OpenMP)


#!/bin/bash
#SBATCH --job-name=hybrid
#SBATCH --output=hybrid_job.txt
#SBATCH --ntasks=8                  # MPI tasks
#SBATCH --cpus-per-task=5           # OpenMP threads per task
#SBATCH --nodes=2

# Tell OpenMP to use the cores Slurm reserved for each task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

mpirun ./hello_hybrid.mpi
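
With 8 tasks and 5 CPUs per task spread over 2 nodes, this layout uses 40 cores in total (20 per node). As a minimal sketch, such a hybrid binary could be built with the OpenMPI wrapper and OpenMP enabled (the source file hello_hybrid.c is a placeholder):

module load gcc/5.5.0
module load openmpi/2.1.3

# The MPI wrapper handles the MPI libraries; -fopenmp enables OpenMP
mpicc -fopenmp -O2 -o hello_hybrid.mpi hello_hybrid.c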



Note that mpirun and mpiexec can be used interchangeably.
By contrast, for Intel MPI applications we strongly suggest using srun with the --mpi=pmi2 option, so that Slurm sets up the correct environment (more here).