# Containers
Containers let you define your own environment and make your work portable and largely reproducible on any HPC system that supports them. We provide Apptainer (an open fork of Singularity), which intentionally brings in more information from the host system to make containers work well in HPC environments.

Apptainer has extensive documentation to help you build and use containers: https://apptainer.org/docs/. Since the upgrade to Apptainer it is possible to build containers directly on the cluster login nodes, but you can also easily install and use Apptainer on your own Linux machine (or a Linux VM, or via WSL2) and transfer the container images to the cluster.
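If you build on your own machine, the workflow is simply to build the image and copy it over; a minimal sketch, where the hostname and destination path are placeholders:

```
# Build locally and transfer the image to the cluster
# (hostname and destination path below are placeholders).
apptainer build my_container.sif my_recipe.def
rsync -av my_container.sif your_username@cluster.example.org:containers/
```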
Our main repository of containers can be found on the clusters under `/apps/containers/` and on GitHub at https://github.com/c3se/containers/, which can be used as-is or as a basis and inspiration for new container definitions.
## Building containers
For complete instructions see https://apptainer.org/docs/user/latest/.
Building a container can be done on your own Linux machine or directly on the login nodes.
A simple definition file based on the miniconda Docker image and a local `requirements.txt` file could be:
```
Bootstrap: docker
From: quay.io/condaforge/miniforge3:latest
#From: quay.io/condaforge/miniforge3:24.3.0-0

%files
    requirements.txt

%post
    /opt/conda/bin/conda install -y --file requirements.txt
```

Build the image with:

```
apptainer build my_container.sif my_recipe.def
```
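The `requirements.txt` referenced in the `%files` section is just a plain list of conda packages, one per line; for example (package names are purely illustrative):

```
numpy
scipy
pandas
```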
You can find more example definition files at https://github.com/c3se/containers/.
In case you have an `environment.yml` file instead of a requirements file, you would do:
```
Bootstrap: docker
From: quay.io/condaforge/miniforge3:24.3.0-0

%files
    environment.yml

%post
    /opt/conda/bin/conda env update --name base --file environment.yml --prune
```
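For reference, a minimal `environment.yml` could look like the following sketch (channels and package names are only illustrative); the image is then built with `apptainer build` exactly as above:

```
name: base
channels:
  - conda-forge
dependencies:
  - numpy
  - scipy
```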
Once a container is built, its definition file can be checked with `apptainer inspect --deffile someImage.sif`.
### Extending images at /apps/containers
You will often want more packages than what is contained under `/apps/containers`.
In those cases you can bootstrap from these images and add what is missing in the `%post` step.
For example, to add a package to an existing miniconda container image, we prepare a recipe file `my_recipe.def`:
```
Bootstrap: localimage
From: /apps/containers/Conda/miniforge-24.3.0-0.sif

%post
    conda install -y python-dateutil
```

Build it with:

```
apptainer build my_container.sif my_recipe.def
```
Hint: use `apptainer shell --fakeroot --writable-tmpfs /apps/containers/path-to-local-image.sif` to try out changes interactively before writing them into a recipe.
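For example, a quick interactive test could look like this; changes made in the temporary filesystem are discarded when the shell exits, so this is only for experimenting with what to put in `%post`:

```
$ apptainer shell --fakeroot --writable-tmpfs /apps/containers/Conda/miniforge-24.3.0-0.sif
Apptainer> conda install -y python-dateutil
Apptainer> exit
```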
### Using overlays for persistent storage
As of the update to Apptainer we no longer recommend using overlays.
If you still want to use them, note the following:
- Using overlays stored on Mimer does not work due to filesystem limitations.
- An overlay will generally only work in conjunction with the exact original container and nothing else.
- Whenever using overlays, add the `:ro` suffix to make them read-only, e.g. `apptainer exec --overlay overlay.img:ro container.sif my_command`
- Whenever writing to overlays (see the sketch after this list):
    - The permissions need to be correct in the base container, either with `chmod` in the recipe or with `--fix-perms` passed to the `apptainer build` command.
    - In Apptainer, `--fakeroot` is needed to make changes to the overlay image.
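As a sketch of the points above, creating, writing to, and then reading from an overlay could look like this (file names and the size are placeholders):

```
apptainer overlay create --size 512 overlay.img                  # create a 512 MiB overlay image
apptainer exec --fakeroot --overlay overlay.img container.sif \
    touch /opt/testfile                                          # write into the overlay
apptainer exec --overlay overlay.img:ro container.sif ls /opt    # later, mount it read-only
```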
## Using containers
There are a few ways to use the prepared container:
- `apptainer exec someImage.sif /path/to/binary/inside/container` runs the binary inside the container
- `apptainer run someImage.sif` executes the runscript embedded inside the container
- `apptainer shell someImage.sif` gives an interactive shell inside the container
It is also straightforward to use a container in a job script; no special steps are required:
```
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 0:30:00
#SBATCH -A **your-project** -p **your-partition**

echo "Outside of apptainer, host python version:"
python --version

apptainer exec ~/ubuntu.img echo "This is from inside a container. Check python version:"
apptainer exec ~/ubuntu.img python --version
```
### MPI
There are two general approaches to running a containerized application across a multi-node cluster:
- Packaging the MPI program and the MPI library inside the container, but keeping the MPI runtime outside on the host
- Packaging the MPI runtime also inside the container, leaving only the communication channel on the host
In the first approach, the `mpirun` command runs on the host (host-based MPI runtime), e.g.,

```
mpirun apptainer run myImage.sif myMPI_program
```
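As an illustration, a job script for this host-based approach could look like the sketch below; the MPI module name, image, and program are placeholders:

```
#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH -t 1:00:00
#SBATCH -A **your-project** -p **your-partition**

# Host-based runtime: mpirun comes from a module on the host,
# while the MPI library and the program live inside the container.
module load OpenMPI   # placeholder module name
mpirun apptainer run myImage.sif myMPI_program
```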
In the second approach, the MPI launcher is called from within the container (image-based MPI runtime); therefore, it can run even on a host system without an MPI installation, e.g.,

```
apptainer run myImage.sif mpirun myMPI_program
apptainer run myImage.sif mpirun --launch-agent 'apptainer run myImage.sif orted' myMPI_program
```
### GPU
To access the GPU, Apptainer exposes the system NVIDIA drivers and libraries into the container via the `--nv` flag, e.g.:

```
apptainer exec --nv my_image.sif my_gpu_app
```
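For example, you can check that the GPU is visible from inside the container; `--nv` also binds in the host's `nvidia-smi` utility:

```
apptainer exec --nv my_image.sif nvidia-smi
```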
When running graphical applications that need 3D acceleration on the GUI machines, you need to combine this with VirtualGL, which must be installed in the container image:

```
apptainer exec --nv my_image.sif vglrun my_gui_app
```
On Alvis, the `--nv` flag is configured to be used by default.
### Using modules inside your container
If you need to import additional paths into your container, use the `APPTAINERENV_` prefix.
This is particularly useful for `PATH` and `LD_LIBRARY_PATH`, which for technical reasons are cleared inside the container environment.
```
module load MATLAB
export APPTAINERENV_PATH=$PATH
export APPTAINERENV_LD_LIBRARY_PATH=$LD_LIBRARY_PATH
apptainer exec ~/ubuntu.sif matlab -nodesktop -r "disp('hello world');"
```
However, note that it is very easy to break other software inside your container by importing the host's `PATH` and `LD_LIBRARY_PATH`.
In addition, any system library that the software depends on needs to be installed in your container.
E.g. you cannot start MATLAB if X11 is not installed, which it typically is not in a small, lean Apptainer image.
Thus, if possible, strive to call modules from outside your container unless you have a special need, e.g.:

```
apptainer exec ~/ubuntu.sif run_my_program simulation.inp

module load MATLAB
matlab < post_process_results.m
```
### Using bind mounting
Sometimes, you may wish to bind-mount a particular path on the system to a path in your container. For example, you might want to map the system path `/tmp` to `/scratch` in the container. This is done by specifying the bind-mount as a command-line argument, in this case:

```
apptainer shell -B /tmp:/scratch container_image.sif
```
You should now have a directory called `/scratch` inside your container. When you write to this directory, the contents end up in the system directory `/tmp`.
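As a quick sanity check (the file name is just an example):

```
apptainer exec -B /tmp:/scratch container_image.sif touch /scratch/testfile
ls -l /tmp/testfile    # the file shows up in the host's /tmp
```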
The `$HOME` directory of the host system and the current working directory are mounted by default. A large number of paths are also bind-mounted by default, including `/cephyr`, `/mimer`, and `/apps`. To override this behaviour, use `--contain`:
```
apptainer shell --contain container_image.sif
```
Now you can verify that a directory that would otherwise be bind-mounted into the container, such as `/apps`, is no longer available:
```
Apptainer> ls /apps
ls: cannot access '/apps': No such file or directory
```
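If you use `--contain` but still need access to a specific path, you can combine it with an explicit bind mount, for example to get `/apps` back:

```
apptainer exec --contain -B /apps container_image.sif ls /apps
```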
## Online resources
- Apptainer documentation https://apptainer.org/docs/
- NVIDIA's NGC catalog, an up-to-date collection of HPC/AI/visualization container images verified by NVIDIA: https://catalog.ngc.nvidia.com
- Open Containers Image Specifications: https://github.com/opencontainers/image-spec
- Docker's library of container images: https://hub.docker.com/