# Building your own software
Modern compilers and development tools are available through the module system. It is highly recommended to always load a toolchain module, even if you are just using GCC, as the system compiler is very dated.
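To see which toolchains are available, you can query the module system (a minimal sketch; the exact names and versions depend on what is currently installed):

```bash
# List the available compiler toolchain modules
module avail intel foss
```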
## Intel compiler suite
The `intel` compiler toolchain includes:
- `icpc`/`icpx` - C++ compiler
- `icc`/`icx` - C compiler
- `ifort`/`ifx` - Fortran compiler
- `imkl` - Intel Math Kernel Library (BLAS, LAPACK, FFT, etc.)
- `impi` - Intel MPI
Exactly how to instruct a build system to use these compilers varies from software to software.
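As a minimal sketch (the module version below is hypothetical; check `module avail intel` for what is actually installed), you might compile directly with the Intel compilers, or point a Makefile-based build at them:

```bash
module load intel/2022a   # hypothetical version

# Compile directly with the Intel compilers
icx -O2 -o hello hello.c
ifx -O2 -o solver solver.f90

# Or tell a Makefile-based build which compilers to use
make CC=icx CXX=icpx FC=ifx
```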
In addition, some tools are available:
- VTune - Visual profiling tool
- Advisor - Code optimisation tool
- Inspector - Memory and thread error detection tool
all of which you can find in the menu when logging in over remote graphics.
## GCC
The `foss` compiler toolchain includes:
- `g++` - C++ compiler
- `gcc` - C compiler
- `gfortran` - Fortran compiler
- OpenBLAS - Efficient open source BLAS and LAPACK library
- OpenMPI - Open source MPI library
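A minimal sketch of using the `foss` toolchain (the module version here is hypothetical; check `module avail foss` for what is actually installed):

```bash
module load foss/2022a   # hypothetical version

# Compile an MPI program with the GCC-based toolchain
mpicc -O2 -o mpi_hello mpi_hello.c

# Link a numerical code against OpenBLAS for BLAS/LAPACK routines
gfortran -O2 -o solver solver.f90 -lopenblas
```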
## Using a compiler with a build system
Loading a `buildenv` module sets a number of useful environment variables, such as `CC` and `CXX`, which are picked up by most build systems, along with flags that enable higher optimization levels, which may also be picked up. Without a `buildenv` module, there is a risk that build systems pick up the very old system compilers instead.
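You can inspect what a `buildenv` module sets up (a sketch; the exact variables depend on the module, but `CC`, `CXX`, and `FC` are typically among them):

```bash
module load buildenv/default-foss-2022a-CUDA-11.7.0

# Show which compilers and flags the build environment points to
echo $CC $CXX $FC
echo $CFLAGS
```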
For example:

```bash
module load buildenv/default-foss-2022a-CUDA-11.7.0
module load CMake/3.23.1-GCCcore-11.3.0
module load HDF5/1.13.1-gompi-2022a
cd my_software/
mkdir build
cd build
cmake ../
```
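If configuration succeeds, the build typically continues along these lines (the install prefix is purely illustrative):

```bash
# Pass an install location if you lack write access to the defaults
cmake -DCMAKE_INSTALL_PREFIX=$HOME/my_software ../

# Build in parallel and install
make -j4
make install
```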
However, some software unfortunately relies on custom-made build tools and instructions, which makes things more difficult and may require custom solutions.
## Additional libraries
We install many libraries which can greatly simplify building your own software. Loading their modules will set the `CPATH` and `LIBRARY_PATH` environment variables, which are usually picked up by popular build systems.
However, many build systems fail to respect these variables and may require some tweaking to build correctly.
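For example, `gcc` itself honours both variables, while a build system that ignores them can usually be given the paths explicitly (a sketch; `EBROOTHDF5` is the root variable typically set by the HDF5 module, but verify with `module show`):

```bash
module load HDF5/1.13.1-gompi-2022a

# gcc picks up CPATH and LIBRARY_PATH, so headers and libs are found directly
gcc -O2 -o h5demo h5demo.c -lhdf5

# If a build system ignores them, pass the locations explicitly instead
gcc -O2 -I"$EBROOTHDF5/include" -L"$EBROOTHDF5/lib" -o h5demo h5demo.c -lhdf5
```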
Not every library is installed for every toolchain version. If you are missing a dependency for your software, you can request an installation, or install it locally.
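A local install often follows the classic pattern (a sketch; the prefix and variable exports are illustrative):

```bash
# Build and install a missing library under your home directory
./configure --prefix=$HOME/local
make -j4
make install

# Make it visible to subsequent compilations
export CPATH=$HOME/local/include:$CPATH
export LIBRARY_PATH=$HOME/local/lib:$LIBRARY_PATH
```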
## Building CUDA code
If you compile code for GPUs using, for example, `nvcc`, be aware that you need to build not only for the type of GPU on the system you are compiling on, but also for the GPUs on other parts of the resource. To do this, add the flag `-gencode=arch=compute_XX,code=sm_XX` to the `nvcc` command for each compute capability `XX` you want to support; the capabilities on this resource are listed below, followed by an example.
- P2000: 61 (only used on login nodes `vera1` and `vera2`)
- V100: 70
- T4: 75
- A100: 80
- A40: 86
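For instance, a build covering the V100, T4, A100, and A40 GPUs above might look like this (the source file name is just a placeholder):

```bash
nvcc -O2 \
    -gencode=arch=compute_70,code=sm_70 \
    -gencode=arch=compute_75,code=sm_75 \
    -gencode=arch=compute_80,code=sm_80 \
    -gencode=arch=compute_86,code=sm_86 \
    -o my_kernel my_kernel.cu
```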
See the CUDA best practices guide for more information.