ᛆᛚᚡᛁᛋ (Alvis)🔗
The Alvis cluster is a national NAISS resource dedicated to Artificial Intelligence and Machine Learning research. The system is built around GPU (Graphical Processing Unit) accelerator cards and consists of several types of compute nodes with multiple NVIDIA GPUs. The system was rolled out in phases, with Phase I going into production in the summer of 2020. Project applications opened in mid-August 2020 in SUPR; see Getting Access.
For more information on using Alvis, see documentation on this site, in particular the parts on Machine Learning, Data sets, Containers and HPC and AI software.
Alvis is also available from an Open OnDemand web portal at https://portal.c3se.chalmers.se. For more information see the Alvis OnDemand documentation.
Etymology: Alvis is an old Nordic name meaning "all-wise", written as ᛆᛚᚡᛁᛋ in medieval Viking runes.
Queue🔗
The current availability of resources in the queue on the login node is shown below (link):
Queue information is only accessible from within SUNET networks (a VPN is necessary if you are outside).
Hardware🔗
Overview🔗
The main `alvis` partition has:
| #nodes | CPU | #cores | RAM (GB) | TMPDIR (GB) | GPUs | Note |
|---|---|---|---|---|---|---|
| 12 | Skylake | 16 | 768 | 387 | 2xV100 | |
| 5 | Skylake | 32 | 768 | 387 | 4xV100 | |
| 19 | Skylake | 32 | 576 | 387 | 8xT4 | |
| 1 | Skylake | 32 | 1536 | 387 | 8xT4 | |
| 1 | Skylake | 32 | 768 | 1680 | NOGPU | |
| 83 | Icelake | 64 | 256 | 814 | 4xA40 | No IB |
| 54 | Icelake | 64 | 256 | 141 | 4xA100 | Fast Mimer |
| 20 | Icelake | 64 | 512 | 141 | 4xA100 | Fast Mimer |
| 8 | Icelake | 64 | 1024 | 141 | 4xA100fat | Fast Mimer |
- A100fat are A100 GPUs with 80GB VRAM.
- A100 nodes have a small `$TMPDIR`, which is compensated by the 100 Gbit InfiniBand connection to Mimer.
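A specific GPU type from the table above is selected when submitting a job. A minimal job-script sketch, assuming standard Slurm typed-GRES syntax; the project allocation and script name are placeholders:

```shell
#!/bin/bash
#SBATCH -A NAISS2024-X-YYY        # placeholder: your project allocation
#SBATCH -p alvis                  # the main alvis partition
#SBATCH -t 04:00:00               # wall-clock time limit
#SBATCH --gpus-per-node=A100:4    # request four A100 GPUs on one node
python train.py                   # placeholder: your training script
```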
Login nodes🔗
- Login node `alvis1.c3se.chalmers.se`:
    - 4 x NVIDIA Tesla T4 GPU with 16GB RAM
    - 2 x 16 core Intel(R) Xeon(R) Gold 6226R (Skylake) CPU @ 2.90GHz (total 32 cores)
    - 768GB DDR4 RAM
- Login/data transfer node `alvis2.c3se.chalmers.se`:
    - No GPUs
    - 2 x Intel(R) Xeon(R) Gold 6338 (Icelake) CPU @ 2.00GHz (total 64 cores)
    - 256GB DDR4 RAM
    - 2x100 GbitE internet connection via SUNET
    - 2x100 Gbit InfiniBand to Mimer
Phase Ia🔗
12 high-performance GPU compute nodes `alvis1-01` to `alvis1-12` with the node configuration:
- 2 x NVIDIA Tesla V100 SXM2 GPU with 32GB RAM, connected by nvlink
- 2 x 8 core Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz (total 16 cores)
- 768GB DDR4 RAM
- 387GB SSD scratch disk
5 high-performance GPU compute nodes `alvis1-13` to `alvis1-17` with the node configuration:
- 4 x NVIDIA Tesla V100 SXM2 GPU with 32GB RAM, connected by nvlink
- 2 x 16 core Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz (total 32 cores)
- 768GB DDR4 RAM
- 387GB SSD scratch disk
Phase Ib🔗
20 capacity GPU compute nodes `alvis2-01` to `alvis2-20` with the node configuration:
- 8 x NVIDIA Tesla T4 GPU with 16GB RAM
- 2 x 16 core Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz (total 32 cores)
- 576GB DDR4 RAM (1 node with 1536GB)
- 387GB SSD scratch disk
Phase Ic🔗
1 compute node without GPUs, `alvis-cpu1`, with the node configuration:
- 2 x 16 core Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz (total 32 cores)
- 768GB DDR4 RAM
- 3.4TB SSD scratch disk
Note that there is only one node of this type. It is suitable for heavier pre- and post-processing steps that do not require a GPU. To use the node, specify the constraint `-C NOGPU` to SLURM.
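Sketched as a job script, the constraint above selects the CPU-only node; the project allocation and script name are placeholders:

```shell
#!/bin/bash
#SBATCH -A NAISS2024-X-YYY   # placeholder: your project allocation
#SBATCH -C NOGPU             # constraint selecting the CPU-only node
#SBATCH -c 8                 # CPU cores for the pre/post-processing step
#SBATCH -t 02:00:00          # wall-clock time limit
python postprocess.py        # placeholder: your processing script
```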
Phase II🔗
Phase II became available in early 2022 and added:
A data transfer node:
- 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz (total 64 cores)
- 256GiB RAM
85 nodes optimised for inference and smaller training jobs
- 4 x NVIDIA Tesla A40 GPU with 48GB RAM
- 2 x 32 core Intel(R) Xeon(R) Gold 6338 CPU @ 2GHz (total 64 cores)
- 256GiB DDR4 RAM
56 nodes optimised for training jobs
- 4 x NVIDIA Tesla A100 HGX GPU with 40GB RAM
- 2 x 32 core Intel(R) Xeon(R) Gold 6338 CPU @ 2GHz (total 64 cores)
- 256GiB DDR4 RAM
20 nodes optimised for training jobs with somewhat larger memory needs
- 4 x NVIDIA Tesla A100 HGX GPU with 40GB RAM
- 2 x 32 core Intel(R) Xeon(R) Gold 6338 CPU @ 2GHz (total 64 cores)
- 512GiB DDR4 RAM
8 nodes optimised for heavy training jobs
- 4 x NVIDIA Tesla A100 HGX GPU with 80GB RAM
- 2 x 32 core Intel(R) Xeon(R) Gold 6338 CPU @ 2GHz (total 64 cores)
- 1024GiB DDR4 RAM
4 nodes without GPUs, privately owned by CHAIR
- 2 x 32 core Intel(R) Xeon(R) Gold 8358 CPU @ 2.6GHz (total 64 cores)
- 512GiB DDR4 RAM
Dedicated storage🔗
In addition to the compute nodes listed above, a fast ~0.6PB dedicated all-flash storage solution was installed in Alvis together with Phase II. The solution will be backed by ~7PB of bulk storage.
More details on the storage solution can be found on this page.
GPU cost on Alvis🔗
Depending on which GPU type you choose for your job, an hour on the GPU has a different cost according to the following table:
| Type | VRAM | System memory per GPU | CPU cores per GPU | Cost |
|---|---|---|---|---|
| T4 | 16GB | 72 or 192 GB | 4 | 0.35 |
| A40 | 48GB | 64 GB | 16 | 1 |
| V100 | 32GB | 96 or 192 GB | 8 | 1.31 |
| A100 | 40GB | 64 or 128 GB | 16 | 1.84 |
| A100fat | 80GB | 256 GB | 16 | 2.2 |
| NOGPU | N/A | N/A | N/A | 0.05 |
- Example: using 2xT4 GPUs for 10 hours costs 7 "GPU hours" (2 x 0.35 x 10).
- The cost reflects the actual price of the hardware (normalised against an A40 node/GPU).
- The cost for the NOGPU nodes is per core-hour.
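The accounting rule above (cost factor x number of GPUs x wall-clock hours) can be checked from the command line; a minimal sketch using awk for the floating-point arithmetic:

```shell
# GPU-hour cost = cost factor x number of GPUs x wall-clock hours.
# Factors are from the table above; note NOGPU's 0.05 is charged per core-hour.
awk 'BEGIN { printf "%.2f GPU hours\n", 0.35 * 2 * 10 }'   # 2xT4 for 10 hours -> 7.00
```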
More info🔗
To get started look through the introduction slides for Alvis, and the general user documentation on this site.