Vera¶

Queue¶
The current availability of resources in the queue can be viewed on the login node (link):
Queue information is only accessible from within SUNET networks (a VPN is necessary if you are outside).
Hardware¶
The Vera cluster contains several hardware models, and this page is updated to reflect the current state.
- Intel(R) Xeon(R) Gold 6338 and Platinum 8358 (code-named "Icelake") CPUs.
- AMD EPYC 9354 Zen4 (code-named "Genoa") CPUs.
All nodes have dual CPU sockets and fast Ethernet and InfiniBand networks. Some nodes have A40, A100, or H100 NVIDIA GPUs.
The main vera partition has:
| #nodes | CPU | #cores | RAM (GB) | TMPDIR (GB) | GPUs | Nodes |
|---|---|---|---|---|---|---|
| 95 | Zen4 | 64 | 768 | 845 | | vera-r01-[16-24],vera-r[02-04]-[01-24],vera-r05-[04,06],vera-r06-[01-12] |
| 1 | Zen4 | 64 | 1536 | 845 | | vera-r05-05 |
| 2 | Zen4 | 64 | 1536 | 845 | 4xH100 | vera-r05-[01-02] |
| 4 | Icelake | 64 | 512 | 837 | 4xA40 | vera-r07-[01-04] |
| 3 | Icelake | 64 | 512 | 837 | 4xA100 | vera-r07-[05-06],vera-r08-01 |
| 5 | Icelake | 64 | 512 | 837 | (expandable) | vera-r08-[02-06] |
| 52 | Icelake | 64 | 512 | 852 | | vera-r08-[07-14],vera-r09-[01-20],vera-r10-[01-24] |
| 6 | Icelake | 64 | 1024 | 407 | | vera-r07-[07-15] |
Login nodes are AMD Zen4 machines with 1536GB of RAM and are equipped with NVIDIA L40s for remote graphics and 100G Ethernet.
Several local research groups have also purchased private partitions with additional nodes. You can view specific node information from Slurm.
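As a sketch, assuming Slurm's standard client tools are on your `PATH`, node details can be queried with `scontrol` and `sinfo` (the node name `vera-r05-05` below is just an example taken from the table above):

```shell
# Show full details (CPUs, memory, GRES/GPUs, state) for a single node
scontrol show node vera-r05-05

# Summarise all nodes in the vera partition: name, CPUs, memory, GRES, state
sinfo -p vera -N -o "%N %c %m %G %T"
```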
The Icelake expansion has 25G Ethernet network for filesystem access and 100 Gbps Infiniband high-speed/low-latency network for parallel computations.
The Zen4 expansion has 25G Ethernet network for filesystem access and 200 Gbps Infiniband high-speed/low-latency network for parallel computations.
GPU cost on Vera¶
Jobs "cost" based on the number of physical cores they allocate, plus an additional charge for any GPUs they allocate:
| Type | VRAM | GPU cost | Eqv. CPU cost | FP16 (TFLOP/s) | FP32 (TFLOP/s) | FP64 (TFLOP/s) |
|---|---|---|---|---|---|---|
| A40 | 45GiB | 1 | 16 | 37.4 | 37.4 | 0.58 |
| A100 | 40GiB | 2 | 48 | 77.9 | 19.5 | 9.7 |
| H100 | 94GiB | 5.5 | 160 | 248 | 62 | 30 |
- On the main `vera` partition everything is expressed in core hours, and the A40 costs the equivalent of 16 additional CPU cores.
- The A40 is the baseline cost of 1 GPU hour per hour, and the other GPUs cost proportionally more.
- Performance numbers are theoretical, and real-world performance may differ greatly. The performance of the newer Ampere and Hopper GPUs depends on using "Tensor Cores" and reduced precision for best results.
| #GPUs | GPUs | Compute capability | CPU |
|---|---|---|---|
| 16 | A40 | 8.6 | Icelake |
| 12 | A100 | 8.0 | Icelake |
| 8 | H100 | 9.0 | Zen4 |
- Example: a job using a full node with 4 A40 GPUs for 10 hours costs (64 + 16*4) * 10 = 1280 core hours.
- Note: 16-, 32-, and 64-bit floating point performance differs greatly between these specialized GPUs. Pick the one most efficient for your application.
- Additional running cost is based on the price compared to a CPU node.
- You don't pay any extra for selecting a node with more memory, but you are typically competing for less available hardware.
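The accounting rule above can be sketched as a small helper. The CPU-equivalent GPU costs come from the table on this page; the function and variable names are illustrative, not part of any C3SE tooling:

```python
# Core-hour cost per GPU hour, from the cost table above (the A40 is the baseline).
GPU_CORE_EQUIV = {"A40": 16, "A100": 48, "H100": 160}

def job_cost_core_hours(cores, hours, gpu_type=None, n_gpus=0):
    """Core hours charged: physical cores plus the CPU-equivalent cost of any GPUs."""
    gpu_cost = GPU_CORE_EQUIV.get(gpu_type, 0) * n_gpus if gpu_type else 0
    return (cores + gpu_cost) * hours

# Example from the text: a full node (64 cores) with 4 A40s for 10 hours.
print(job_cost_core_hours(64, 10, "A40", 4))  # 1280
```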
Support¶
If you need some kind of support (trouble logging in, how to run your software, etc.) please first
- Contact the PI of your project and see if they can help
- Talk with your fellow students/colleagues
- Contact C3SE support
Vera allocations¶
Chalmers local allocations¶
All departments at Chalmers have the right to allocations on Vera. For those departments that do not yet have any allocations, you should speak to your head of department (prefekt), who has been sent information.
Many departments have had their projects allocated already, and manage their members themselves via their selected Principal Investigators (PIs) in SUPR. Researchers should speak to their respective supervisor for access. Some departments prefer to have a single large project; all employees at such a department should apply for the joint project listed below.
Below is a list of PIs with current allocations (C3SE 20YY/1-XX projects):
- Architecture and Civil Engineering (ACE)
- Holger Wallbaum C3SE 2026/1-13
- Computer Science and Engineering (CSE)
- Miquel Pericas C3SE 2026/1-12
- Electrical Engineering (E2)
- Thomas Rylander C3SE 2026/1-20
- Physics
- Henrik Grönbeck C3SE 2026/1-19
- Industrial and Material Science (IMS)
- Martin Fagerström C3SE 2026/1-24
- Chemistry and Chemical Engineering
- Ronnie Andersson C3SE 2026/1-8
- Ergang Wang C3SE 2026/1-11
- Joakim Halldin Stenlid C3SE 2026/1-6
- Martin Rahm C3SE 2026/1-5
- Itai Panas (not applied for project yet)
- Alexander Giovannitti C3SE 2026/1-7
- Jia Wei Chew C3SE 2026/1-9
- Life Sciences
- Annikka Polster (not applied for project yet)
- Johan Bengtsson-Palme (not applied for project yet)
- Aleksej Zelezniak (not applied for project yet)
- Jens Nielsen (not applied for project yet)
- Eduard Kerkhoven (not applied for project yet)
- ChemBio - Johan Bengtsson-Palme (not applied for project yet)
- SysBio - Johan Bengtsson-Palme (not applied for project yet)
- FNS group - Clemens Wittenbecher (not applied for project yet)
- Mathematical Sciences
- Tobias Gebäck C3SE 2026/1-16
- Mechanics and Maritime Sciences (M2)
- Niklas Andersson & Rickard Bensow C3SE 2026/1-9
- Microtechnology and Nanoscience (MC2)
- Elsebeth Schröder C3SE 2026/1-21
- Space, Earth and Environment (SEE)
- Wouter Vlemmings C3SE 2026/1-15
- Niclas Mattsson C3SE 2026/1-18
- Technology Management and Economics
- No PI assigned yet
- Communication and Learning in Science
- No PI assigned yet
GU and others¶
The following people have bought time on Vera:
- Department of Physics
- Mats Granath C3SE 408/25-1
- Department of Chemistry & Molecular Biology
- Richard Neutze C3SE 408/22-1
Note
For CSE students and researchers seeking access to the Minerva cluster, please contact Matti Karppa at karppa@chalmers.se.
The public webpage for the Minerva cluster, intended for students and other users, is available here.