Containers¶
Last updated/reviewed: 2025-04-29
Containers let you define your own environment, making your work portable and largely reproducible on any HPC system that supports them. We provide Apptainer (an open fork of Singularity), which intentionally brings in more information from the host system to make containers work well in HPC environments.
Apptainer has extensive documentation to help you build and use containers: https://apptainer.org/docs/. Since the upgrade to Apptainer it is possible to build your containers directly on the cluster login nodes, but you can also easily install and use Apptainer on your own Linux machine (or a Linux VM, or via WSL2) and transfer the container images to the cluster.
Our main repository of containers can be found on the clusters under /apps/containers/ and on GitHub at https://github.com/c3se/containers/, which can be used as-is or as a basis and inspiration for new container definitions.
Building containers¶
For complete instructions see https://apptainer.org/docs/user/latest/.
Building a container can be done on your own Linux machine, or directly on the login nodes. A simple definition file based on the miniforge Docker image and a local requirements.txt file could be:
Bootstrap: docker
From: quay.io/condaforge/miniforge3:latest
#From: quay.io/condaforge/miniforge3:24.11.3-0
%files
requirements.txt
%post
/opt/conda/bin/conda install -y --file requirements.txt
and can be built with the apptainer build command.
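A minimal sketch of the build step, assuming the definition file above has been saved as my_recipe.def (both file names are placeholders):
apptainer build my_container.sif my_recipe.def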
You can find more example definition files at https://github.com/c3se/containers/.
In case you have an environment.yml file instead of a requirements file you would do:
Bootstrap: docker
From: quay.io/condaforge/miniforge3:24.11.3-0
%files
environment.yml
%post
/opt/conda/bin/conda env update --name base --file environment.yml --prune
instead.
Once a container is built, its definition file can be checked by
apptainer inspect --deffile someImage.sif
Extending images at /apps/containers¶
Many times you may want more packages than what is contained under /apps/containers. In those cases you can bootstrap from these images and add what is missing in the %post step.
For example: adding a package to an existing miniforge container image. We prepare a recipe file my_recipe.def:
Bootstrap: localimage
From: /apps/containers/Conda/miniforge-24.11.3-0.sif
%post
conda install -y python-dateutil
and build it with apptainer build.
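For example (the output image name here is a placeholder):
apptainer build my_extended_image.sif my_recipe.def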
Hint: make temporary changes to the local image interactively to figure out what to put in the recipe, as sketched below.
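One way to do this is an interactive shell with a throwaway writable layer (a sketch; --writable-tmpfs discards all changes when the shell exits):
apptainer shell --writable-tmpfs /apps/containers/Conda/miniforge-24.11.3-0.sif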
Note
For writable-tmpfs or overlays to work with a container, the TMPDIR used when building the container can't have been on NFS.
Using overlays for persistent storage¶
As of the update to Apptainer we no longer recommend using overlays.
If you still want to use them, note:
- Using overlays stored on Mimer does not work due to filesystem limitations.
- An overlay will generally only work in conjunction with the exact original container and nothing else.
- Whenever using overlays, add ':ro' to make them read-only, e.g.
  apptainer exec --overlay overlay.img:ro container.sif my_command
- Whenever writing to overlays (see the sketch after this list):
    - The permissions need to be correct in the base container, set with chmod in the recipe or with --fix-perms passed to the apptainer build command.
    - In Apptainer, --fakeroot is needed to make changes to the overlay image.
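A sketch of creating and writing to an overlay image (the size, file names and paths are placeholders, and this assumes the base container's permissions allow it):
apptainer overlay create --size 1024 overlay.img
apptainer exec --fakeroot --overlay overlay.img container.sif touch /usr/local/new_file
apptainer exec --overlay overlay.img:ro container.sif ls /usr/local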
Using containers¶
There are a few ways to use the prepared container:
- apptainer exec someImage.sif /path/to/binary/inside/container --> runs the binary inside the container
- apptainer run someImage.sif --> executes the runscript embedded inside the container
- apptainer shell someImage.sif --> gives an interactive shell inside the container
It is also straightforward to use a container in a job script; no special steps are required:
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 0:30:00
#SBATCH -A **your-project** -p **your-partition**
echo "Outside of apptainer, host python version:"
python --version
apptainer exec ~/ubuntu.img echo "This is from inside a container. Check python version:"
apptainer exec ~/ubuntu.img python --version
MPI¶
There are two general approaches to running a containerized application across a multi-node cluster:
- Packaging the MPI program and the MPI library inside the container, but keeping the MPI runtime outside on the host
- Packaging the MPI runtime also inside the container, leaving only the communication channel on the host
In the first approach, the mpirun command runs on the host (host-based MPI runtime), e.g.:
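A sketch of what this can look like (the core count, image and program names are placeholders; the host MPI is assumed to be provided, e.g. via a module):
mpirun -n 64 apptainer exec my_container.sif /path/to/mpi_program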
This fits perfectly with the regular workflow of submitting jobs on the HPC clusters, and is, therefore, the recommended approach. However, there is one thing to keep in mind: The MPI runtime on the host needs to be able to communicate with the MPI library inside the container; therefore, i) there must be the same implementation of the MPI standard (e.g. OpenMPI) inside the container, and, ii) the version of the two MPI libraries should be as close to one another as possible to prevent unpredictable behaviour (ideally the exact same version).
In the second approach, the MPI launcher is called from within the container (image-based MPI runtime); therefore, it can even run on a host system without an MPI installation, e.g.,
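A sketch of the image-based variant, calling the launcher installed inside the image (names are placeholders):
apptainer exec my_container.sif mpirun -n 8 /path/to/mpi_program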
Everything works well on a single node. There is a problem though: as soon as the launcher tries to spawn onto a second node, the ORTED process crashes. The reason is that it tries to launch the MPI runtime on the host and not inside the container. The solution is to have a launch agent do it inside the container. With OpenMPI, that would be:
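A sketch, assuming an OpenMPI-based container; --launch-agent tells mpirun how to start its ORTED daemon on remote nodes, here wrapped in the container (names are placeholders):
apptainer exec my_container.sif mpirun --launch-agent "apptainer exec my_container.sif orted" -n 64 /path/to/mpi_program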
GPU¶
To access the GPU, Apptainer exposes the system NVIDIA drivers and libraries into the container space via the --nv flag, e.g.:
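For example, to check that the GPU is visible from inside the container (the image name is a placeholder):
apptainer exec --nv my_container.sif nvidia-smi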
Note: We have enabled this option by default on all nodes with GPUs.
When running graphical applications that need 3D acceleration on the GUI machines, you need to combine this with VirtualGL which needs to be installed into the container image:
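A sketch, assuming VirtualGL (providing vglrun) has been installed inside the image; glxgears is only a stand-in for your own 3D application:
apptainer exec --nv my_container.sif vglrun glxgears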
On Alvis, the flag '--nv' is configured to be used by default.
Using modules inside your container¶
Important
Avoid mixing modules with containers if at all possible. They can never be assumed to be compatible and the resulting combination will typically just crash. There are only a few exceptional cases where this will work.
If you need to import additional paths into your container, you can do so using the APPTAINERENV_ prefix. This is in particular useful with PATH and LD_LIBRARY_PATH, which are for technical reasons cleared inside the container environment.
module load MATLAB/2024a
export APPTAINERENV_PATH=$PATH
export APPTAINERENV_LD_LIBRARY_PATH=$LD_LIBRARY_PATH
apptainer exec ~/ubuntu.sif matlab -nodesktop -r "disp('hello world');"
However, note that it is very easy to break other software inside your container by importing the host's PATH and LD_LIBRARY_PATH into your container. In addition, any system library that the software depends on needs to be installed in your container; e.g. you cannot start MATLAB if X11 is not installed, which is typically skipped when setting up a small, lean Apptainer image. Thus, if possible, strive to call modules from outside your container unless you have a special need, e.g.:
apptainer exec ~/ubuntu.sif run_my_program simulation.inp
module load MATLAB/2024a
matlab < post_process_results.m
Using bind mounting¶
Sometimes, you may wish to bind-mount a particular path on the system to a path in your container. For example, you might want to map the system path /tmp to /scratch in the container. This is done by specifying the bind-mount as a command-line argument, in this case:
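For example (the image name is a placeholder):
apptainer shell --bind /tmp:/scratch my_container.sif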
You should now have a directory called /scratch inside your image. When you write to this directory, the contents end up in the system directory /tmp.
The $HOME directory of the host system and the current working directory are mounted by default. A large number of paths are also bind-mounted by default, including /cephyr, /mimer and /apps. To override this behaviour, use --contain:
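For example (the image name is a placeholder):
apptainer shell --contain my_container.sif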
Now you can verify that a default directory that would otherwise be bind-mounted into the container is no longer bind-mounted, such as /apps:
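One quick check (a sketch; with --contain the host's /apps should no longer be visible, so the listing comes up empty or fails):
apptainer exec --contain my_container.sif ls /apps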
Online resources¶
- Apptainer documentation https://apptainer.org/docs/
- Nvidia's NGC catalogue, an extensive, up-to-date collection of HPC/AI/visualization container images verified by Nvidia: https://catalog.ngc.nvidia.com
- Open Containers Image Specifications: https://github.com/opencontainers/image-spec
- Docker's library of container images: https://hub.docker.com/