The Hebbe cluster is built on Intel 2650v3 (code-named "Haswell") CPUs. The system consists of:
- 315 compute nodes in total (6300 cores), with 26 TiB of RAM and 6 GPUs. More specifically:
    - 260 compute nodes with 20 cores and 64 GB of RAM (249 of these available for SNIC users)
    - 38 compute nodes with 20 cores and 128 GB of RAM (30 of these available for SNIC users)
    - 7 compute nodes with 20 cores and 256 GB of RAM (not available for SNIC users)
    - 3 compute nodes with 20 cores and 512 GB of RAM (1 of these available for SNIC users)
    - 1 compute node with 20 cores and 1024 GB of RAM
    - 4 compute nodes with 20 cores, 64 GB of RAM and 1 NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
    - 2 compute nodes with 20 cores, 256 GB of RAM and an NVIDIA Quadro K4200 for remote graphics
There are also 3 system servers used for accessing and managing the cluster.
There is a 10 Gigabit Ethernet network used for logins, a dedicated management network, and an InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs.
The server and compute node hardware is built by HP and delivered by GoVirtual.
MStud is a private partition and login node (hebbe-mstud.c3se.chalmers.se) on Hebbe, owned by the M-program at Chalmers. As of November 2019, there is also a private partition on Vera.
Teachers at the M-program interested in obtaining access for courses or projects should contact Mikael Enelund for more details on how to apply.
The Hebbe partition consists of 4 compute nodes and a dedicated login node. All five nodes have 20 cores (Intel 2650v3, code-named "Haswell"), 64 GB of RAM, and an InfiniBand interconnect.
The Vera partition consists of 4 compute nodes equipped with Intel Xeon Gold (code-named "Skylake") CPUs and 96 GB of RAM. The login node is shared with all other Vera users.
Storage areas, internal networks, and software are shared with Hebbe and Vera, respectively.
All software installed on Hebbe and Vera is also made available on MStud. The login node can also be accessed graphically (with hardware-accelerated graphics) via ThinLinc, thanks to its NVIDIA Quadro P2000 graphics card, and is suitable for interactive pre-/post-processing work. Some of the graphical software available there includes ABAQUS CAE, ANSYS Workbench, COMSOL, GNU Octave, Mathematica, MATLAB, ParaView, Simpack, and STAR-View+.
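This software is provided through the environment module system. A sketch of a typical session on the login node (the module name below is an example; the exact names and versions available may differ):

```shell
# List the software modules installed (names/versions depend on the
# current installation)
module avail

# Load an application module and start it (example module name)
module load MATLAB
matlab
```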
The teacher who will manage the course needs to apply for a project in SUPR using the link to the MStud round provided by Mikael Enelund. This round is unlisted and cannot be accessed without the link.
New users need to go through the steps listed in Getting access.
Note that there are several manual steps in this process.
When submitting jobs, in addition to specifying your project, you must also specify the mstud partition in your job script, e.g.:

#SBATCH -p mstud
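Putting this together, a minimal job script might look like the sketch below (the project ID and resource requests are placeholders; adjust them to your own project and needs):

```shell
#!/bin/bash
# Minimal sketch of an MStud job script.
# The -A value is a placeholder; use your own SUPR project ID.
#SBATCH -A YOUR-PROJECT-ID
#SBATCH -p mstud          # the MStud private partition
#SBATCH -n 20             # one full node (20 cores)
#SBATCH -t 01:00:00       # wall-time limit (hh:mm:ss)

echo "Job running on $(hostname)"
```

Submit it with `sbatch jobscript.sh` and monitor it with `squeue -u $USER`.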
If you need support (trouble logging in, how to run your software, etc.), please first:
- Contact the PI of your project and see if they can help
- Talk with your fellow students/colleagues
- Contact C3SE support