Getting project information

First you need to figure out which project your jobs should run in. Every project has a computer-time allocation measured in core-hours per month. This information can be obtained from the projinfo program. Run projinfo on the login node and study the output:

[emilia@hebbe ~]$ projinfo
Running as user: emilia
Project                 Used[h]        Allocated[h]       Queue
SNIC001-23-456         51734.61               55000       hebbe
   f38anlo             49104.36
   emilia               2308.91
   emil                  321.34
SNIC009-87-654             7.12              100000       hebbe
   fantomen                7.12

In this example we are logged in as the user emilia, who is a member of two projects, SNIC001-23-456 and SNIC009-87-654, with monthly allocations of 55,000 and 100,000 core-hours respectively. The Used column shows the total usage since the beginning of the month, broken down by member of each project. As most of the current month's allocation in SNIC001-23-456 has already been used by the user f38anlo, we decide to submit our jobs to SNIC009-87-654, which has almost no previous usage.

Writing a job script

We are now ready to put together a toy example of a batch job. First we need to create a plain-text file containing a valid shell script that executes the calculation we want to perform on the cluster.

To work with text files on the cluster you can use any of the installed text editors: gedit, vi, emacs, nano, etc. Pick one and learn how to use it; in this example we will use nano.

First we create a job script file that specifies our batch job; let's simply call it jobscript:

[emilia@hebbe ~]$ nano jobscript

In the job script file we need to specify all the information the scheduler needs to execute the job. Below are minimal examples for both Hebbe and Vera.

Hebbe example:

#!/usr/bin/env bash
#SBATCH -A SNIC009-87-654 -p hebbe
#SBATCH -n 10
#SBATCH -t 0-00:10:00

echo "Hello cluster computing world!"
sleep 60 

Vera example:

#!/usr/bin/env bash
#SBATCH -A SNIC009-87-654 -p vera
#SBATCH -n 8
#SBATCH -t 0-00:10:00

echo "Hello cluster computing world!"
sleep 60 

Write it in the editor (or copy and paste it), save (using Ctrl-O in nano) and exit (Ctrl-X). More important, though, is to understand what the job file's content means.

The first row is just a bash script "Shebang".

The rows that start with #SBATCH are special directives given directly to the scheduler. In the order above they are:

  1. Specification of under which project the job is to be accounted, here SNIC009-87-654
  2. The queue or partition that the job should be scheduled to. Here we use the default partition on Hebbe and Vera, which has the same name as the cluster.
  3. Job size, i.e. the number of requested cores. By default you only get a single core, so in the Hebbe example we have requested 10 cores (out of the 20 available per node), and in the Vera example 8 cores.
  4. Job walltime, here we have to specify the maximum time that the job should be let to run. If the job does not end within the specified time it will be killed by the scheduler. Here we requested 0 days, 10 minutes.
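A point worth noting: the #SBATCH rows are ordinary shell comments, so a job script also runs as plain bash (only the scheduler interprets the directives). A quick local check, here with a stripped-down script written via a heredoc:

```shell
# '#SBATCH' lines are ordinary comments to the shell, so a job script can
# also be run directly with bash for a quick local test; the scheduler
# directives are then simply ignored.
cat > jobscript <<'EOF'
#!/usr/bin/env bash
#SBATCH -n 10
#SBATCH -t 0-00:10:00
echo "Hello cluster computing world!"
EOF
bash jobscript   # prints: Hello cluster computing world!
```

This is handy for catching shell syntax errors before spending queue time on them.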

Everything after the scheduler directives is the actual script that will be executed on the compute nodes. In this example we simply write some output and wait for 60 seconds. This is the part you will later want to modify to instead run the program or calculation you are interested in.

There are many more flags you can give to sbatch; see man sbatch for the full list.

Submitting and monitoring jobs

So now we've managed to write a job script and store it in a file called jobscript. To run the job you now want to send it to the scheduler. For this we use the command sbatch on the login node.

[emilia@hebbe ~]$ ls
jobscript
[emilia@hebbe ~]$ sbatch jobscript
Submitted batch job 123456

The job is now sent to the scheduler, which will put it in the job queue of the cluster. The number printed when we submitted the job is called the JobID and is a unique identifier for each job. To monitor the status of our job we ask for information from the queue using another command:

[emilia@hebbe ~]$ squeue -u emilia
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
 123456     hebbe  JobTest   emilia  R       0:07      1 hebbe06-2

Note that even more useful information can be obtained using the command jobinfo -u emilia instead of squeue.

Decoding the information, we see that the job is already running, as shown by the status flag R; looking at the elapsed time, the job was started 7 seconds ago. In general there will not be compute nodes available at the time you submit a job, and the job will then be put in pending status until it is started by the scheduler. At any point it is possible to kill the job (it is not uncommon to realize a mistake only after submitting a job). To kill a job you just need its JobID (which we know from above) and the command:

[emilia@hebbe ~]$ scancel 123456
[emilia@hebbe ~]$

Try this out a couple of times on your own:

  1. Submit the job
  2. Look at the queue and the job status
  3. Cancel the job
  4. Look at the queue and make sure the job has disappeared
  5. Submit the job
  6. Look at the queue and let the job run to completion (it will take roughly 60 seconds)

When our job is done executing and has disappeared from the queue listing, we are ready to look at the results.

[emilia@hebbe ~]$ ls
jobscript  slurm-123456.out
[emilia@hebbe ~]$ cat slurm-123456.out
Hello cluster computing world!
[emilia@hebbe ~]$

Our job has now created a new file, slurm-123456.out, containing the job's standard output. We could redirect the output to a different file using the #SBATCH -o flag.
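For example, Slurm replaces %j in the file name with the JobID (see the filename patterns in man sbatch); the names below are just illustrations, not required conventions:

```shell
#SBATCH -o myjob-%j.out   # standard output to e.g. myjob-123456.out
#SBATCH -e myjob-%j.err   # standard error to a separate file (optional)
```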

So what is the practical use of all this? Our little toy example contains exactly the steps needed to do real computational work; all you need to do is modify the submit script to run the calculation of your choice.

If you are interested or need to know more, e.g. exactly what the output of the queue listing means, this can be found in the manual pages of each command. The manual pages are available directly on the command line by running man command; just replace command with e.g. squeue.

Memory and other node features

In order to request nodes with special features, for example nodes with more memory, nodes with GPUs, or to limit multi-node jobs to run within the same infiniband switch, you can use the --constraint (or equivalently -C) flag with sbatch. E.g.:

#SBATCH -C MEM128           # a 128GB node will be allocated
#SBATCH -C MEM512|MEM1024   # either 512GB or 1024GB RAM will be allocated
#SBATCH -C IB-SW8           # only nodes connected to infiniband switch 8

Note: Not all combinations of constraints are available. Though rarely ever needed, you can use complex logic and node counts when specifying constraints; see man sbatch for details on the -C flag.

The set of features to pick from is listed below:

Resource   Infiniband network   Memory                                     Other
Hebbe      IB-SWxx              MEM64, MEM128, (MEM256), MEM512, MEM1024   GPU
Vera       IB-SWxx              MEM96, MEM192, (MEM384), MEM768            25G

The 25G tag marks nodes equipped with a 25 Gbit/s connection, for faster internet and centre-storage access. Note that all nodes have fast infiniband for MPI communication.

Note: Unless you know you have special requirements, do not specify any constraints on your job.

Requesting more memory or other features does not cost any more core-hours than other nodes, but you may have to wait longer in the queue for the particular nodes to become available.


On Hebbe, you specify -C GPU to obtain a node with an NVIDIA K40 GPU (exclusively), but, as Vera has 2 GPUs per node, we use Slurm's generic resource (gres) management to allocate them. So on Vera, you can use:

#SBATCH --gres=gpu:1   # allocates 1 GPU (and half the node)
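Put together, a minimal GPU job on Vera might look like the sketch below; the project, walltime and the nvidia-smi check are placeholder assumptions, not a prescribed recipe:

```shell
#!/usr/bin/env bash
#SBATCH -A SNIC009-87-654
#SBATCH -p vera
#SBATCH -t 0-01:00:00
#SBATCH --gres=gpu:1   # one of the node's two GPUs (and half the node)

nvidia-smi             # print information about the allocated GPU
```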

Running job-arrays

There is often a need to run a series of similar simulations with varying inputs. For this purpose, there exists a job-array feature in SLURM. Using the --array flag for sbatch introduces a new environment variable, $SLURM_ARRAY_TASK_ID, which the script can use to determine which simulation to run.

This offers several great advantages:

  1. It's less work for you (no need to generate and modify tons of different job-scripts that are almost identical).
  2. The squeue command is more readable for everyone (the whole array is only 1 entry)
  3. It's easier for support to help you.
  4. The scheduler isn't overloaded. For safety, there is a maximum queue size, so very large submissions *must* use arrays to avoid hitting this limit.
  5. Convenient to cancel every job in a given array if you discover some mistake.
  6. Email notifications can be customized to be sent only when all jobs have finished.
  7. It's vastly simpler to re-run an aborted simulation (for example when a NODE_FAIL occurs). E.g. if job 841342_3 dies, then:
sbatch --array=3 jobscript
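The --array flag also accepts lists and strides (see man sbatch), so several failed indices can be resubmitted in one go; jobscript here is the file name used throughout this guide:

```shell
sbatch --array=3,7,9 jobscript    # re-run a specific list of indices
sbatch --array=0-20:2 jobscript   # indices 0,2,4,...,20 (step of 2)
```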

Example: We have input files named data_0.in, data_1.in, ..., data_10.in, which we want to run on 5 cores each. Our job script might then look something like this:

#!/usr/bin/env bash
#SBATCH -A SNIC009-87-654
#SBATCH -p hebbe
#SBATCH -J CrashBenchmark
#SBATCH -n 5
#SBATCH -t 0-04:00:00

module load intel

echo "Running simulation on data_${SLURM_ARRAY_TASK_ID}.in"

mpirun ./my_crash_sim data_${SLURM_ARRAY_TASK_ID}.in
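Outside of Slurm, $SLURM_ARRAY_TASK_ID is unset, but you can preview how the input file name expands by setting the variable by hand (a quick sanity check, not part of the job script):

```shell
# Not part of the job script: set the variable manually to preview the
# file name that each array task would see.
SLURM_ARRAY_TASK_ID=4
echo "Running simulation on data_${SLURM_ARRAY_TASK_ID}.in"
# prints: Running simulation on data_4.in
```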

Example 2: We have directories that are not enumerated, e.g. "At", "Bi", "Ce", etc., located in input_data/. We need to run a 1-core simulation in each directory, so we could do:

#!/usr/bin/env bash
#SBATCH -A SNIC009-87-654
#SBATCH -p hebbe
#SBATCH -J CobaltDiffusion
#SBATCH -n 1
#SBATCH -t 0-20:00:00

module load intel

# Create a list of the directories (one entry per subdirectory):
DIRS=($(find input_data/ -mindepth 1 -maxdepth 1 -type d))
# (we could also have specified the list directly: DIRS=(At Bi Ce))

# Fetch one directory from the list based on the task ID (index starts from 0)
CURRENT_DIR=${DIRS[$SLURM_ARRAY_TASK_ID]}

echo "Running simulation $CURRENT_DIR"

# Go to the folder
cd "$CURRENT_DIR"

./diffusion_sim cobalt_data.inp
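The bash array indexing used above can be checked locally with a hand-written list (the directory names are the example's own; outside Slurm we set the task ID manually):

```shell
# A quick local check of the array indexing used in the job script above.
DIRS=(At Bi Ce)
SLURM_ARRAY_TASK_ID=2                  # set by Slurm inside a real array job
CURRENT_DIR=${DIRS[$SLURM_ARRAY_TASK_ID]}
echo "$CURRENT_DIR"                    # prints: Ce
```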

These scripts can both be submitted using the syntax

sbatch --array=0-10 jobscript

For more examples and details, see the SLURM manual on job arrays.

If you are unsure how to make use of a job-array, please contact support for help writing a suitable jobscript.