Storage resources at C3SE
The storage hierarchy
Storage is available for different usage patterns and with different levels of availability.
Looking at it from the bottom up, we have:
- The node local disk:
- available only to the running job
- automatically purged after the job finishes
- accessible to the job using $TMPDIR (the environment variable $TMPDIR is automatically set to contain the correct directory path)
- the available size is different for different clusters (and can even differ between node types in a cluster)
- Cluster(wide) storage:
- available from all machines in a specific cluster
- not available at any C3SE-clusters today (see Centre storage below)
- Centre(wide) storage:
- available from all resources at the centre
- your cluster home directory is located here
- Nationally accessible storage:
- requires a separate storage allocation through SNIC
- file based (as opposed to block based for the levels above; cf. an FTP server)
- available through using dedicated tools
C3SE Centre storage (Cephyr)
The centre storage is available on the resources at the centre.
You can see the storage areas you have access to via C3SE_quota, which shows you your current usage and limits.
It contains two parts: one with backup, available at $SNIC_BACKUP, and one without backup, available for storage projects only. Users' home directories (which are backed up) are:
For example, if your UID/CID is ada:
- $SNIC_BACKUP is /cephyr/users/ada
- $HOME is /cephyr/users/ada/Vera on Vera
- $HOME is /cephyr/users/ada/Hebbe on Hebbe
How to check your usage and quota limits
You can check your quota limits by issuing the command C3SE_quota.
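For example, on a login node:

```bash
# Show current usage and limits for all storage areas you have access to
C3SE_quota
```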
The default quota is:
| Storage location | Space quota | Files quota |
| --- | --- | --- |
| /cephyr/users/<CID> (home directory) | 30GiB | 60000 |
Note! Your home directories on Vera and Hebbe are on the same file system and accessible from both systems. They also share the same quota, so please move or access files directly instead of copying!
"Can I exceed my quota? What will happen?"
No, the quota limits are hard limits, and your code will most likely crash or stop if you try to allocate beyond your limits.
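As an illustration (the path and sizes here are hypothetical), a write that would push you past the space quota fails with "Disk quota exceeded":

```bash
# Hypothetical: try to write a 40GiB file into a home directory with a 30GiB quota
dd if=/dev/zero of=~/bigfile bs=1M count=40960
# dd: error writing '/cephyr/users/ada/bigfile': Disk quota exceeded
```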
Quota limit warnings
To obtain a warning for a custom limit, you can put the following in your shell startup file (e.g. ~/.bashrc):
```bash
if [[ $- == *i* ]]; then
    timeout 10 C3SE_quota -w --warn_limit=0.75
fi
```
and it will show up as a warning when you log in if you are close to the limit, e.g.:
```
Path: /cephyr/users/ada
Space used: 24.3GiB   Quota: 30GiB - WARNING! 81% full!
Files used: 1073   Quota: 60000
```
The timeout command prevents the check from blocking your login if it takes too long.
Copying files into and out of the system
Use tools that can communicate using the SSH/SFTP/SCP protocols to transfer files, for example FileZilla, WinSCP and rsync (or scp/sftp directly!); a short rsync sketch follows the list below.
If you're on the Chalmers network, i.e. have a 129.16.* IP address, you can connect directly to the login node and copy your files.
If you're not on the Chalmers network you must either:
- connect using the Chalmers VPN
- initiate the transfers from within the C3SE clusters.
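A minimal rsync sketch (the login node address, username and paths are illustrative; use the ones for your cluster and account):

```bash
# Copy a local directory to your home directory on the cluster over SSH;
# -a preserves permissions and timestamps, -v prints each file as it is transferred
rsync -av my_results/ ada@vera1.c3se.chalmers.se:~/my_results/
```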
Finding where quota is used
On /cephyr, the recursive size is shown on directories when listing files, so you can easily and quickly locate where your quota is being used up. E.g.:
```
$ ls -lh ~
drwxrwxr-x 1 emilia emilia 239M Jan 12 02:25 Mathematica
```
indicates that 239MiB of data is stored under the directory Mathematica.
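Since directory sizes on /cephyr reflect recursive usage, a quick sketch for finding the largest consumers is to sort a listing by size:

```bash
# List your home directory sorted by size, largest first;
# on /cephyr the directory sizes are recursive, so big consumers float to the top
ls -lhS ~ | head
```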
Similarly, one can show the number of inodes (files and directories) used under a directory; the output looks like:
```
entries: 5
files: 3
subdirs: 2
rentries: 223
rfiles: 218
rsubdirs: 5
rbytes: 226350145
rctime: 1578793680.240728864
```
rentries is the total number of files and directories under the directory (the r-prefixed fields are recursive totals).
If you are unsure, please contact the support.
If you need more resources than are available to you as a user (see above), you or your supervisor/PI needs to apply for a storage project on Cephyr. This is done through the SUPR portal.
If you are a member of a storage project, it will show up when using C3SE_quota.
If you have just joined a storage project, you must start a new login session for group memberships to update.
File sharing with groups and other users
You can also share files with other users by manipulating the group ownership and associated permissions of directories or files.
Every computational project has its own group, named "c3-project-name", e.g.:
```
[emilia@hebbe ~]$ groups
emilia c3-gaussian c3-snic2017-1-10
```
Here emilia is a member of project SNIC2017-1-10.
If she wants to share files (read only), she could do:
```
[emilia@hebbe ~]$ chgrp -R c3-snic2017-1-10 shared_directory
[emilia@hebbe ~]$ chmod -R g+rx shared_directory
[emilia@hebbe ~]$ chmod o+x ~/ ~/..
```
The first two lines change the group, and the group permissions, recursively (applies to all files under shared_directory). The last line gives the execute permission required to access directories under your home directory ~ and the directory above it.
Remember! If you give out write permissions in a sub-directory, all files created in there, also by other users, will still count towards your quota.
Access Control Lists (ACLs)
To give more fine-grained control of file sharing, you can use ACLs. This allows you to give out different read, write, and execute permissions to individual users or groups.
If emilia wants to have a shared file storage with robert, and give out read rights to sara, she could do:
```
[emilia@hebbe ~]$ setfacl -R -m user:robert:rwx,user:sara:rx shared_data
[emilia@hebbe ~]$ chmod o+x ~/ ~/..
```
and she can check the current rights using the corresponding get-command:
```
[emilia@hebbe ~]$ getfacl shared_data
# file: /cephyr/users/emilia/Hebbe/shared_data
# owner: emilia
# group: emilia
user::rwx
user:robert:rwx
user:sara:rx
group::rwx
mask::rwx
other::r-x
```
You can find many examples of using setfacl and getfacl online, e.g. https://linux.die.net/man/1/setfacl.
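To revoke the ACLs again, a sketch (assuming the same shared_data directory as above):

```bash
# Remove all extended ACL entries recursively;
# -b strips the ACLs but keeps the base owner/group/other permissions
setfacl -R -b shared_data
```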
Using node local disk ($TMPDIR)
It is crucial that you use the node local disk for jobs that perform a lot of intense file IO. The globally accessible file system is a shared resource with limited capacity and performance.
It is also crucial that you retrieve and save any important data that was produced and saved to the node local file system. The node local file systems are always wiped clean immediately after your job has ended!
To use $TMPDIR, copy the files there, change to the directory, run your simulation, and copy the results back:
```bash
#!/bin/bash
# ... various SLURM commands
cp file1 file2 $TMPDIR
cd $TMPDIR

# ... run your code ...

cp results $SLURM_SUBMIT_DIR
```
Be certain that you retrieve and save any important data that was produced and saved on the node local file system.
The typical size of $TMPDIR is 1600GB on Hebbe and 380GB on Vera.
When running on a shared node, you will be allocated space on $TMPDIR proportional to the number of cores you have been allocated on the node.
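To see how much local scratch space your job actually received, you can check from inside the job (a generic sketch):

```bash
# Show the size and remaining free space of the job's local scratch area
df -h $TMPDIR
```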
Note! By default each node has a private $TMPDIR: the nodes share the same path, but it points to different (node local) storage areas. You have to make sure to distribute files to, and collect files from, all nodes if you use more than one node! Also see below for a shared, parallel $TMPDIR.
Distributing files to multiple nodes
ptmpdir is usually a much simpler option, see below!
The job script only executes on the main node (first node) in your job; therefore the job script must:
- distribute input files to all other nodes in the job
- collect output files from all other nodes
- copy the results back to the centre storage
To distribute files to the node local disks, use the command pdcp. When invoked from within a job script, pdcp automatically resolves which nodes are involved. For example:
```bash
pdcp file1 file2 $TMPDIR
```
copies file1 and file2 from the current directory to the (different) $TMPDIR areas on all nodes in the current job.
Collecting the data back from multiple nodes depends on the software used.
For example,

```bash
cp $TMPDIR/output_file.data $SLURM_SUBMIT_DIR
```

copies output_file.data from the head node only, whereas
```bash
rpdcp $TMPDIR/output.data $SLURM_SUBMIT_DIR
```

copies the file $TMPDIR/output.data from all compute nodes in the job and places the copies in $SLURM_SUBMIT_DIR (rpdcp appends the originating node's hostname to each file name, so the copies don't overwrite each other).
Both the pdcp and rpdcp commands take the flag -r for recursively copying file hierarchies.
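For example (a sketch; input_dir is a hypothetical directory in the current working directory):

```bash
# Recursively distribute a whole directory of input files to every node's $TMPDIR
pdcp -r input_dir $TMPDIR
```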
A shared, parallel $TMPDIR
The nodes' local disks can be set up to form a shared, parallel area when running a job on more than one node. This will give you:
- a common namespace (i.e. all the nodes in your job can see the same files)
- a larger total area aggregating all nodes
- faster file IO
To invoke a shared $TMPDIR, simply add the flag --gres=ptmpdir:1 to your job script:
```bash
#!/bin/bash
# ... various SLURM commands
#SBATCH --gres=ptmpdir:1
```
$TMPDIR will now use the local disks of all nodes in the job in parallel.
Copying files works as if it were one large drive. It is recommended to always use this option if you use $TMPDIR for multi-node jobs!
Saving files periodically
With a little bit of shell scripting, it is possible to periodically save files from $TMPDIR to the centre storage.
Please implement this with reason so that you don't put excessive load on the shared file system (if you are unsure, ask the support for advice).
A hypothetical example that syncs output data back once every second hour could look like:
```bash
#!/bin/bash
# ... various SBATCH flags

while sleep 2h; do
    # This will be executed once every second hour;
    # -u == --update, skip files that are newer on the receiver
    rsync -au $TMPDIR/output_data/ $SLURM_SUBMIT_DIR/
done &  # The &-sign after the done-keyword places the while-loop in a sub-shell in the background
LOOPPID=$!  # Save the PID of the subshell running the loop

# ... calculate stuff and retrieve data in a normal fashion ...

# All calculations are done, let's clean up and kill the background loop
kill $LOOPPID
```
This example creates a background loop that runs on the head compute node (the compute node in your allocation that runs the batch script).