Users with their normal home directory on Chalmers' central file server can access these files via /chalmers/users/[cid] on the Helios login machines (no, we will not add this to cluster nodes). Please note that this remotely mounted filesystem is much slower than the filesystems locally attached to the clusters.
On the C3SE clusters there is about 2 TB of storage for user files on Helios32/64 and about 1.6 TB on Hive (filesystem: /users/unicc/[user id]). This filesystem (like all other user-writeable filesystems on the C3SE clusters) is NOT backed up at all! Important files should therefore be copied to your other home directory (at your department) for backup.
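Since the Chalmers central file server is mounted on the Helios login machines (see above), one way to make such a backup copy from a login node is with a plain cp. Here 'yourcid' and 'important_results' are placeholders for your own CID and directory:

```shell
# Copy a results directory from your C3SE home to your
# Chalmers central home for safekeeping ("yourcid" and
# "important_results" are placeholders):
cp -rp "$HOME/important_results" /chalmers/users/yourcid/
```

Remember that this mount is slow, so keep such copies to the files you actually need to preserve.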
Also, this filesystem is served to the worker nodes from a single fileserver, so its performance is probably lower than that of the local disk in your personal workstation.
To refer to your C3SE home directory in, e.g., scripts, use $HOME.
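For example, a script can build paths from $HOME instead of hard-coding them (the 'my_project' directory name below is just a placeholder):

```shell
# Build paths from $HOME rather than hard-coding /users/unicc/...
# ("my_project" is a placeholder directory name):
PROJECT_DIR="$HOME/my_project"
echo "Project files live in $PROJECT_DIR"
```

This way the script keeps working even if the physical location of the home filesystems changes.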
Job submission directory
Let's say you prepared your files in the $HOME/my_project/subtask_XY directory; an easy way of telling this to your scripts is to use the $PBS_O_WORKDIR environment variable. The comment above regarding filesystem performance applies here as well.
So 'cd $PBS_O_WORKDIR' always puts you back where you submitted your job.
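In a job script this typically looks as follows (the resource line and './my_program input.dat' are placeholders for your own requirements and program):

```shell
#!/bin/bash
#PBS -l walltime=00:10:00
# $PBS_O_WORKDIR is set by the batch system to the directory
# in which you ran qsub; go there before reading input files.
cd "$PBS_O_WORKDIR" || exit 1
./my_program input.dat   # placeholder for your actual program
```

Without the cd, the job would start in your home directory and not find files given with relative paths.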
Node local disk
If your job uses a lot of disk space, or accesses files frequently, it is beneficial (both to your runtime and to overall system performance) to use the local disk in the worker node. The system prepares a job-unique directory on this local disk for you; this directory (remember: job-unique and node-local) can be found in the environment variable $TMPDIR.
Here is a small example of how this could be used in your job scripts:
- copy the files you need for your job to local disk
cp -p file1 file2 $TMPDIR/
- go there
cd $TMPDIR
- run your job (here './my_program' stands for whatever your job actually runs)
./my_program
- copy output files back to the job submission directory
cp -p file3 file4 $PBS_O_WORKDIR/
And you can probably think of more ways to use this!
Note! This directory and all the files in it are removed when the job ends!
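Putting the steps above together, a complete job script might look like this. The resource line, program name and file names are all placeholders for your own job:

```shell
#!/bin/bash
#PBS -l nodes=1,walltime=01:00:00
# Stage input from the submission directory to node-local disk
# ("file1", "file2", "file3" and "my_program" are placeholders):
cd "$PBS_O_WORKDIR"
cp -p file1 file2 "$TMPDIR"/
# Run the job on the fast local disk
cd "$TMPDIR"
./my_program file1 file2 > file3
# Save results before the job ends and $TMPDIR is wiped
cp -p file3 "$PBS_O_WORKDIR"/
```

The final copy is essential: anything left only in $TMPDIR is gone once the job finishes.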