Following the expansion in 2022, the Vera cluster contains several hardware models. It runs Intel Xeon Gold 6130 (code-named "Skylake") CPUs as well as newer Intel Xeon Gold 6338 and Platinum 8358 (code-named "Icelake") CPUs. All nodes have dual CPU sockets. The cluster has NVIDIA T4, A40, V100, and A100 GPUs and an InfiniBand network.
The vera partition has:
| #nodes | CPU | #cores | RAM (GB) | TMPDIR (GB) | GPUs |
|--------|-----|--------|----------|-------------|------|
Login nodes are Skylake machines with 192 GB of RAM and are equipped with NVIDIA P2000 cards for remote graphics.
Several local research groups have also purchased private partitions with additional nodes and GPUs. You can get specific node information from Slurm with:
sinfo -N -p vera -o "%n,%m,%G,%b"
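Once you know which GPU types are available, a minimal job script requesting one of them might look like the sketch below. The account name, time limit, and `--gpus-per-node` value are assumptions; check them against the cluster's own scheduling documentation:

```shell
#!/bin/bash
#SBATCH -A C3SE2022-1-23       # hypothetical project/account name -- use your own
#SBATCH -p vera                # the vera partition
#SBATCH -t 10:00:00            # 10 hours of wall time
#SBATCH --gpus-per-node=T4:1   # assumed syntax for requesting one T4 GPU

./my_program                   # placeholder for your own application
```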
The Skylake systems have a 25 Gbps Ethernet network used for logins and a 56 Gbps InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The servers are built by Supermicro and the compute node hardware by Intel; the system was delivered by Southpole. There are also 3 system servers used for accessing and managing the cluster.
The Icelake expansion has a 25 Gbps Ethernet network for filesystem access and a 100 Gbps InfiniBand high-speed/low-latency network for parallel computations.
GPU cost on Vera
Jobs "cost" based on the number of physical cores they allocate, plus
| Type | VRAM | Additional cost | FP16 TFLOP/s | FP32 TFLOP/s | FP64 TFLOP/s |
|------|------|-----------------|--------------|--------------|--------------|
- Example: A job using a full node with a single T4 for 10 hours:
  (32 + 6) * 10 = 380 core-hours
- Note: 16-, 32-, and 64-bit floating point performance differs greatly between these specialized GPUs. Pick the one most efficient for your application.
- The additional running cost is based on the GPU's price compared to a CPU node.
- You don't pay any extra for selecting a node with more memory, but you are typically competing for less available hardware.
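The charging formula above can be checked with a quick shell calculation; the values below are taken from the T4 example (32 cores, additional cost 6, 10 hours) and are illustrative only:

```shell
# (cores allocated + GPU additional cost) * wall-time hours = core-hours charged
cores=32     # all physical cores of a full node
gpu_extra=6  # additional cost for one T4, as in the example above
hours=10
echo "$(( (cores + gpu_extra) * hours )) core-hours"   # prints "380 core-hours"
```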
If you need some kind of support (trouble logging in, how to run your software, etc.), please first:
- Contact the PI of your project and see if they can help
- Talk with your fellow students/colleagues
- Contact C3SE support