MATLAB

MATLAB is installed in several versions on all our clusters. Use `module avail` to see which versions are currently available.

Parallel Toolbox Example

Chalmers has a license for the Parallel Computing Toolbox, which can be used for running parallel jobs on a single compute node. The toolbox does not allow multi-node jobs. Versions older than R2014a were limited to 12 parallel worker processes. In newer versions that limit has been removed, but if you don't specify a parcluster, MATLAB will still only use 12 workers. Always verify that you get the expected behaviour by checking the load on the node and the output in the log files. MATLAB writes the number of workers used to standard output.

To set up a run on one node on Hebbe, use code like the following:

sched = parcluster('local');
sched.JobStorageLocation = getenv('TMPDIR');
parpool(sched, sched.NumWorkers)

This initializes the Parallel Computing Toolbox on the local node, sets the data storage location to the local disk on the node, $TMPDIR, and starts as many worker processes as there are cores available. You can then make use of `parfor` in your MATLAB code.
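As a minimal sketch of such a loop (the function name and loop bound are made up for illustration), `parfor` distributes independent iterations over the workers in the pool:

```matlab
% Sketch: each iteration is independent of the others, so the
% pool's workers can execute them in any order and in parallel.
n = 100;
results = zeros(1, n);
parfor i = 1:n
    results(i) = my_simulation(i);  % hypothetical function
end
```

Each iteration must be independent: `parfor` does not allow loop iterations to depend on results computed in other iterations.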

You can also override the number of workers (processes) MATLAB will use. With the local cluster profile, NumWorkers is set to the number of requested cores. This can be overridden with

sched.NumWorkers = 5;

before you call parpool. Be aware that Hebbe lets jobs share nodes, so you can request fewer than 20 cores to start with, if you don't need the full node's cores or memory.
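For example, a job script along the following lines (the project name, core count, and script name are placeholders) requests only part of a node; the local parcluster will then give you 5 workers:

```shell
#!/usr/bin/env bash
#SBATCH -A MY-PROJECT
#SBATCH -p hebbe
#SBATCH -n 5            # request 5 cores on a (possibly shared) node
#SBATCH -t 01:00:00
module load MATLAB
# my_parallel_script.m is assumed to contain the parcluster/parpool setup above
matlab -nodesktop -r "my_parallel_script" < /dev/null
```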

Job Array Example

If you want to run many smaller jobs, you can use a Slurm job array. This allows each simulation to start as soon as any core becomes available on the cluster, instead of waiting for one or more whole nodes to become free.

You start a Slurm job array using sbatch:

sbatch --array=0-10 array_script.sh

where "array_script.sh" could look like

#!/usr/bin/env bash
#SBATCH -A MY-PROJECT
#SBATCH -p hebbe
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 01:00:00
module load MATLAB
cp input_file_${SLURM_ARRAY_TASK_ID}.mat $TMPDIR
cd $TMPDIR
matlab -nodesktop -singleCompThread -r  "run_sim(${SLURM_ARRAY_TASK_ID})" < /dev/null
cp results_${SLURM_ARRAY_TASK_ID}.mat $SLURM_SUBMIT_DIR

where ${SLURM_ARRAY_TASK_ID} is the array index that goes from 0 to 10 in the example above.

Redirecting standard input with < /dev/null ensures that MATLAB exits when the simulation ends, and also if your code gives rise to an error (which would otherwise leave MATLAB waiting at an interactive prompt).

You can also access the array index from within your MATLAB code:

sim_number = getenv('SLURM_ARRAY_TASK_ID');
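Note that getenv returns the index as a string. A small sketch (the file naming is illustrative, matching the job script above) of converting it to a number and using it to pick an input file:

```matlab
% getenv returns a char array; convert it to a number if needed
sim_number = str2double(getenv('SLURM_ARRAY_TASK_ID'));
% Build the corresponding input file name (illustrative naming)
input_file = sprintf('input_file_%d.mat', sim_number);
data = load(input_file);
```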

MATLAB randomly crashing at startup

Several users have experienced problems where MATLAB refuses to start when many jobs are submitted at once. We can unfortunately only offer a workaround if you experience this problem. We have therefore introduced a helper script, "RunMatlab.sh", which restarts MATLAB until it starts successfully. Examples:

module load MATLAB
RunMatlab.sh -f my_script.m
# or
RunMatlab.sh -o "-nodesktop -singleCompThread -r \"my_function(123);\""

The script is located at /apps/Common/Core/MATLAB/2016b/bin/RunMatlab.sh