Log in

Kultrun is accessed via secure shell (ssh). The first step to access the cluster is to request an account.
The cluster is hosted at the Department of Astronomy, under the domain kultrun.astro-udec.cl.
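
Once the account is active, connect with any standard ssh client; for example (replace username with your own account name):

ssh username@kultrun.astro-udec.cl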

Schedule a job

Kultrun is managed via Slurm, a workload manager with three key functions: i) it allocates access to resources (compute nodes) to users, ii) it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes, and iii) it manages the queue system.

Jobs can be run in two ways. For testing and small jobs you can run a job interactively. This way you can directly interact with the compute node(s) in real time to make sure your jobs will behave as expected. The other way, for large and long-running jobs, involves preparing a job submission script and submitting that to the queue system.

Kultrun includes three queues, which differ in architecture, number of cores, and processor type. Each queue has been given a name that refers to the Mapuche culture. Users should choose a queue based on the resources their jobs need and the type of code employed. The features and name of each queue are described below.

mapu

The distributed-memory cluster: 18 Intel nodes for a total of 576 cores, intended for standard parallel calculations. Mapu means "earth" in the Mapuche language.


ko_sm

The shared-memory queue: 224 cores (latest-generation Intel processors), intended for very intensive calculations. Ko means "water" in the Mapuche language.


kutral_amd

2 AMD nodes, with a total of 64 cores. This queue is intended for short test runs and serial/interactive jobs. Kutral means "fire" in the Mapuche language.

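The current state of each queue can be checked with the standard Slurm command sinfo, for example:

sinfo -p mapu,ko_sm,kutral_amd

which lists the three partitions together with their node counts and states.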

Common Slurm Commands


The equivalent PBS commands are shown in parentheses.

sbatch (qsub)            Submit a job script to the queue system
squeue (qstat)           List queued and running jobs
scancel (qdel) job_id    Cancel a queued job or kill a running job
sacct -j job_id          Check the status of an individual job (including failed or completed jobs)
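
A typical submission cycle looks as follows; job.sh is a submission script (see the Batch scripts section below) and 12345 stands for the job ID that sbatch prints on submission:

sbatch job.sh       # submit the script; Slurm replies with the assigned job ID
squeue -u $USER     # list your own queued and running jobs
sacct -j 12345      # check the job's status after it has finished
scancel 12345       # or cancel/kill it while it is queued or running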




Interactive jobs


To run an interactive job, use srun with the --pty option, for example:

srun -N 1 --ntasks-per-node=32 --partition=kutral_amd --pty bash

which will run an interactive job on one node of the queue "kutral_amd", with a total of 32 cores, and open a bash shell there. When you are done, type exit to end the session and release the node.
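
Once the prompt appears you are on a compute node. A quick sanity check of the allocation, using standard commands and the environment variables Slurm sets:

hostname              # name of the compute node you were given
echo $SLURM_NTASKS    # number of tasks allocated to the session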



Batch scripts


To submit a job via Slurm, you first need to prepare a submission script. This usually comprises three parts: the interpreter line (e.g. #!/bin/bash), a set of #SBATCH directives that request resources from the scheduler, and the commands to be executed on the allocated nodes.


Some typical directives are shown below (see the official Slurm documentation for the full list):

#!/bin/bash
#SBATCH -J [jobname]                                  # job name
#SBATCH -t [HH]:[MM]:[SS]                             # wall-time limit
#SBATCH --mem=[size][K|M|G|T]                         # memory required per node
#SBATCH -p [partitionNAME]                            # partition (queue) to submit to
#SBATCH --mail-type=[NONE|BEGIN|END|FAIL|REQUEUE|ALL] # events that trigger an e-mail
#SBATCH --mail-user=[user]                            # e-mail address for notifications
#SBATCH -e [name]                                     # file for standard error
#SBATCH -o [name]                                     # file for standard output
#SBATCH --ntasks-per-node=[cores]                     # number of tasks (cores) per node




A minimal example of a complete job script for Kultrun follows.
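
This is a sketch only: it assumes a two-node MPI run on the mapu queue and that the cluster provides environment modules; the module name (openmpi) and the executable (my_code) are placeholders to replace with your own environment and program.

#!/bin/bash
#SBATCH -J example_run              # job name
#SBATCH -t 01:00:00                 # one hour of wall time
#SBATCH -p mapu                     # distributed-memory queue
#SBATCH -N 2                        # request two nodes
#SBATCH --ntasks-per-node=32        # 32 tasks per node (the mapu nodes have 32 cores each)
#SBATCH -o example_run.out          # standard output file
#SBATCH -e example_run.err          # standard error file

# Load the required environment (module names are site-specific)
module load openmpi

# Launch the executable on all 64 allocated tasks
srun ./my_code

Save this as, e.g., example_run.sh and submit it with sbatch example_run.sh; its progress can then be monitored with squeue.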