Vienna Scientific Cluster
High Performance Computing available to staff. The Austrian HPC effort is part of EuroCC.
We have the privilege to be part of the VSC and have private nodes at VSC-5 (since 2022), VSC-4 (since 2020) and VSC-3 (since 2014), which was retired in 2022.
Access is primarily via SSH:
**ssh to VSC**
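A minimal sketch of the connection, assuming your personal VSC username and the standard VSC login hostnames:

```bash
# log in to VSC-5 (replace <username> with your VSC user name)
ssh <username>@vsc5.vsc.ac.at
# log in to VSC-4
ssh <username>@vsc4.vsc.ac.at
```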
Please follow the connection instructions on the wiki, which are similar to those for all other servers (e.g. SRVX1). The VSC is only reachable from within UNINET (use the VPN otherwise). Authentication requires a mobile phone.
We have private nodes at our disposal; in order to use them you need to specify the correct account in the jobs you submit to the queueing system (SLURM). The correct account information will be given to you in the registration email.
IMGW customizations in the shell
If you want, you can use shared shell scripts that provide information about the VSC system.
**Load IMGW environment settings**
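A sketch of what loading the settings could look like; the file name is an assumption, check the shared IMGW folder for the actual script:

```bash
# source the shared IMGW shell extensions (adjust the file name to what you find there)
source /gpfs/data/fs71386/imgw/shared/imgw-env.sh
```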
The following commands are available:
- `imgw-quota`: shows the current quota on VSC for both HOME and DATA
- `imgw-container`: singularity/apptainer container run script, see below
- `imgw-transfersh`: Transfer-sh service on wolke, easily share small files
- `imgw-cpuinfo`: show CPU information
There is a shared folder at `/gpfs/data/fs71386/imgw/shared`; please add data there that needs to be used by multiple people, and make sure it is removed again as soon as possible. Thanks.
Node Information VSC-5
There are usually two sockets per Node, which means 2 CPUs per Node.
**VSC-5 Compute Node**
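A sketch for checking the hardware yourself; the figures in the comments describe the standard VSC-5 AMD Milan nodes and should be verified on the system:

```bash
# typical VSC-5 compute node: 2x AMD EPYC 7713 (Milan),
# 64 cores per socket (128 physical cores per node), 512 GB of memory
lscpu | grep -E "Model name|Socket|Core"
free -g
```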
We have access to 11 private nodes of that kind. We also have access to one GPU node with NVIDIA A100 accelerators. Find the partition information with:
**VSC-5 Quality of Service**
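A sketch using standard SLURM tooling to list the QOS and their limits; the QOS names for our private nodes are given in the registration email:

```bash
# quality-of-service entries with their resource and wall-time limits
sacctmgr show qos format=Name%25,GrpTRES%30,MaxWall
```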
Storage on VSC-5
The HOME and DATA partitions are the same as on VSC-4.
Since fall 2023 there has been a major update: JET and VSC-5 are holding hands now, and your files on JET are accessible from VSC-5, e.g.:
**JET and VSC-5**
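A sketch, assuming the JET naming scheme for the mounted paths; verify what is actually available:

```bash
# JET file systems mounted on VSC-5
ls /jetfs/home/<username>
ls /jetfs/scratch/<username>
```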
JETFS on VSC
JETFS can only be accessed from VSC-5, not the other way around. You can also write directly to these directories, although performance is higher on the VSC-5 storage. This does not work on VSC-4.
Node Information VSC-4
**VSC-4 Compute Node**
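Again a sketch for checking the hardware; the figures in the comments describe the standard VSC-4 Intel Skylake nodes and should be verified:

```bash
# typical VSC-4 compute node: 2x Intel Xeon Platinum 8174 (Skylake),
# 24 cores per socket (48 physical cores per node), 96 GB of memory
lscpu | grep -E "Model name|Socket|Core"
free -g
```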
We have access to 5 private nodes of that kind. We also have access to the JupyterHub on VSC. Check with:
**VSC-4 Quality of Service**
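One way to see which accounts and QOS your user may submit with (a sketch using standard SLURM commands):

```bash
# accounts and QOS associated with your user
sacctmgr show associations user=$USER format=Account%20,User%15,QOS%40
```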
Storage on VSC-4
All quotas are shared between all IMGW/Project users:
- `$HOME` (up to 100 GB, all home directories)
- `$DATA` (up to 10 TB, backed up)
- `$BINFL` (up to 1 TB, fast scratch), will be retired
- `$BINFS` (up to 2 GB, SSD fast), will be retired
- `$TMPDIR` (50% of main memory, deleted after the job finishes)
- `/local` (compute nodes, 480 GB SSD, deleted after the job finishes)
Check the quotas by running the following commands yourself (including your PROJECTID), or use the `imgw-quota` command from the IMGW shell extensions.
**Check VSC-4 IMGW quotas**
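A sketch assuming the GPFS quota tools and the fs71386 fileset names; replace the project id with your PROJECTID if it differs:

```bash
# HOME quota of the project
mmlsquota --block-size auto -j home_fs71386 home
# DATA quota of the project
mmlsquota --block-size auto -j data_fs71386 data
```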
Other Storage
We have access to the Earth Observation Data Center EODC, where one can find primarily the following data sets:
- Sentinel-1, 2, 3
- Wegener Center GPS RO
These datasets can be found directly under `/eodc/products/`.
We are given a private data storage location (`/eodc/private/uniwien`), where we can store up to 22 TB on VSC-4. However, that might change in the future.
Run time limits and queues
VSC-5 queues and limits:
**VSC-5 Queues**
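A sketch of how to query the limits yourself with standard SLURM commands:

```bash
# wall-time limits per QOS
sacctmgr show qos format=Name%25,MaxWall
# partitions with their time limits, node counts, cores and memory per node
sinfo -o "%.25P %.12l %.6D %.5c %.8m"
```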
The department has access to these partitions:
**VSC5 available partitions with QOS**
|
VSC-4 queues and limits:
**VSC-4 Queues**
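The same queries work on VSC-4, for example:

```bash
sinfo -o "%.20P %.12l %.6D"
```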
The department also has access to a set of VSC-4 partitions with matching QOS; the exact partition and QOS names are given in the registration email.
**Single/few-core jobs are allocated to nodes n4901-0[01-72] and n4902-0[01-72]**
SLURM allows setting a run time limit below the default run time limit of the QOS. After the specified time has elapsed, the job is killed:
**slurm time limit**
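For example (the value is a placeholder):

```bash
#SBATCH --time=08:00:00    # request 8 hours instead of the default QOS limit
```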
Acceptable time formats include `minutes`, `minutes:seconds`, `hours:minutes:seconds`, `days-hours`, `days-hours:minutes` and `days-hours:minutes:seconds`.
Example Job
Example Job on VSC
We have to use the following keywords to make sure that the correct partitions are used:
- `--partition=mem_xxxx` (per email)
- `--qos=xxxxxx` (see below)
- `--account=xxxxxx` (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
Put this in the job file (e.g. for VSC-5 nodes):
**VSC slurm example job**
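A sketch of such a job file; partition, QOS, account and e-mail address are placeholders that must be replaced with the values from your registration email:

```bash
#!/bin/bash
#SBATCH -J myjob                     # job name
#SBATCH -N 2                         # number of nodes
#SBATCH --ntasks-per-node=128        # MPI processes per node (a VSC-5 node has 128 cores)
#SBATCH --ntasks-per-core=1          # tasks per physical core
#SBATCH --partition=mem_xxxx         # placeholder (per email)
#SBATCH --qos=xxxxxx                 # placeholder
#SBATCH --account=xxxxxx             # placeholder
#SBATCH --mail-type=BEGIN,END,FAIL   # notify about these job events
#SBATCH --mail-user=<your.address>@univie.ac.at

# load the modules your program was built with, e.g.
# module load intel intel-mpi

# srun replaces mpirun and picks up the SLURM settings above; -l labels all output lines
srun -l ./my_program
```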
- `-J` job name
- `-N` number of nodes requested (a VSC-5 node has 128 physical cores, a VSC-4 node 48)
- `-n`, `--ntasks=` specifies the number of tasks to run
- `--ntasks-per-node` number of processes run in parallel on a single node
- `--ntasks-per-core` number of tasks a single core should work on
- `srun` is an alternative command to `mpirun`. It provides direct access to SLURM-inherent variables and settings.
- `-l` adds task-specific labels to the beginning of all output lines.
- `--mail-type` sends an email at specific events. The SLURM documentation lists the following valid mail-type values: "BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL and REQUEUE), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), and TIME_LIMIT_50 (reached 50 percent of time limit). Multiple type values may be specified in a comma separated list." (cited from the SLURM documentation)
- `--mail-user` sends an email to this address
**slurm basic commands**
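The usual workflow, as a sketch:

```bash
sbatch job.sh        # submit the job file to the queue
squeue -u $USER      # check the status of your jobs
scancel <jobid>      # cancel a job
sacct -j <jobid>     # accounting information after the job has finished
```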
Example of multiple simulations inside one job
Sample job for running multiple MPI jobs on a VSC-4 node.
Note: `mem_per_task` should be set such that
mem_per_task * mytasks < mem_per_node - 2 GB
The reduction of the available memory by about 2 GB accounts for the operating system residing in memory. For a standard node with 96 GB of memory this would be, e.g.:
23 GB * 4 = 92 GB < 94 GB
**VSC-4 example concurrent job**
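A sketch of such a job file, running four MPI programs side by side on one 48-core VSC-4 node; task counts, memory and program names are placeholders following the rule above:

```bash
#!/bin/bash
#SBATCH -J multi_mpi           # job name
#SBATCH -N 1                   # all runs share a single node
#SBATCH --ntasks=48            # 4 runs x 12 tasks on a 48-core VSC-4 node
#SBATCH --partition=mem_0096   # placeholder (per email)
#SBATCH --qos=xxxxxx           # placeholder
#SBATCH --account=xxxxxx       # placeholder

# 4 concurrent runs with 23 GB each: 4 * 23 GB = 92 GB < 94 GB usable memory
for i in 1 2 3 4; do
    # --exact (or --exclusive on older SLURM) keeps the steps on separate cores
    srun --ntasks=12 --mem=23G --exact ./my_program input_${i} &
done
wait   # keep the job alive until all background steps have finished
```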
Software
The VSC uses the same software system as JET and provides environment modules to the user:
- VSC Wiki Software
- VSC-4 has `miniconda3` modules for GNU and Intel ;)
**VSC modules**
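A sketch; the exact module names and versions change over time, so check `module avail` first:

```bash
module avail           # list all available modules
module load intel      # load the Intel compiler suite (name/version may differ)
module list            # show what is currently loaded
```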
This will load the Intel compiler suite and add variables to your environment. Please do not forget to add the `module load` statements to your jobs.
For more on how to use environment modules, go to Using Environment Modules.
Import user-site packages
It is possible to install user-site packages into your `.local/lib/python3.*` directory:
**installing python packages in your HOME**
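For example (the package name is a placeholder):

```bash
# installs into ~/.local/lib/python3.*/site-packages
python -m pip install --user <package>
```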
Please remember that all HOME and DATA quotas are shared. Installing a lot of packages creates a lot of files!
**Python importing user site packages**
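A sketch of how to make the user site visible, e.g. when an interpreter (such as one inside a container) does not pick it up automatically:

```python
import site
import sys

# add ~/.local/lib/python3.*/site-packages to the module search path
sys.path.append(site.getusersitepackages())
```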
Then you will be able to load all packages that are located in the user site.
Containers
Complex software can be packaged in apptainer containers and executed on VSC. Please consider using one of the following containers:
- `PyMagic_202506.sif`
- `JyMagic_202506.sif`
- `JyMet_202506.sif`

located in the `$DATA` directory of IMGW: `/gpfs/data/fs71386/imgw`
The Jupyter containers can also be run locally to open a JupyterLab environment, accessible via a port (8888) on localhost.
If you want to build your own container, you can use the script: micromamba2container.sh as described here. You can convert an existing environment into a container image or create a new one. Advanced: Take a look at the Apptainer.recipe file to understand how to modify it to your needs.
If you are interested in deploying a customized Jupyter kernel on the VSC Jupyterhub, have a look at this introduction.
How to use?
There are multiple ways of running these containers:
- using the container itself
- using a runscript
using the command line
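A sketch, assuming apptainer is provided as a module and the containers sit in the IMGW `$DATA` directory; module name, bind paths and container path may need adjusting:

```bash
# load the container runtime (the module may be called singularity on older setups)
module load apptainer

# make the non-standard VSC paths visible inside the container
export APPTAINER_BIND="/gpfs/data/fs71386,$HOME"

# run a script with the Python container
apptainer exec /gpfs/data/fs71386/imgw/PyMagic_202506.sif python myscript.py

# or open an interactive shell inside the container
apptainer shell /gpfs/data/fs71386/imgw/PyMagic_202506.sif
```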
using the script
The script includes the automated `module load` command for running the container with singularity/apptainer and sets the bind paths to map all the necessary directories inside the container. Check the `$SINGULARITY_BIND` variable; it can also be set manually or in your `.bashrc`.
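A sketch of how the run script might be invoked; the exact call signature of `imgw-container` is an assumption, run it without arguments to see its usage:

```bash
# run a python script inside the PyMagic container via the wrapper
imgw-container PyMagic_202506.sif python myscript.py

# inspect what the wrapper binds into the container
echo $SINGULARITY_BIND
```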
Understanding the container
In principle, a run script needs to do only 3 things:
- load the `apptainer` or `singularity` module
- set the `APPTAINER_BIND` or `SINGULARITY_BIND` environment variable
- execute the container with your arguments
It is necessary to set `SINGULARITY_BIND` because `$HOME`, `$DATA` and `$BINFS` are not standard Linux paths; the Linux inside the container does not know about them, so files there cannot be accessed from within the container otherwise. If you have problems accessing other paths, adding them to `SINGULARITY_BIND` might fix the issue.
What is inside the container?
In principle you can check what is inside by using:
**Execute commands inside a container**
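A few hedged examples of looking around inside a container (the available tools depend on the image):

```bash
module load apptainer

# which python does the container ship, and which version?
apptainer exec PyMagic_202506.sif which python
apptainer exec PyMagic_202506.sif python --version

# list the installed python packages
apptainer exec PyMagic_202506.sif python -m pip list

# poke around interactively
apptainer shell PyMagic_202506.sif
```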
Debugging on VSC-4
Currently (June 2021) there is no development queue on VSC-4, and the support team suggested doing the following:
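A sketch of an interactive session along those lines; partition, QOS and account are placeholders:

```bash
# allocate one node interactively for an hour
salloc -N 1 --partition=mem_0096 --qos=xxxxxx --account=xxxxxx --time=01:00:00

# once the allocation is granted, run commands on the allocated node
srun -n 4 ./my_program

# or open a shell on the node
srun --pty bash
```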
Otherwise you can access one of the `*_devel` queues/partitions and submit short test jobs to check your setup.