Vienna Scientific Cluster
The Vienna Scientific Cluster (VSC) is the Austrian HPC effort, part of EuroCC, and provides High Performance Computing resources to staff.
We have the privilege of being part of the VSC and have private nodes on VSC-5 (since 2022) and VSC-4 (since 2020), as well as on VSC-3 (since 2014), which was retired in 2022.
Access is primarily via SSH:

ssh to VSC
```bash
ssh user@vsc5.vsc.ac.at
ssh user@vsc4.vsc.ac.at
```
Please follow the connection instructions on the wiki, which are similar to those for the other servers (e.g. SRVX1).
The VSC is only reachable from within UNINET (university network or VPN). Authentication requires a mobile phone.
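For convenience you can add host entries to your ~/.ssh/config (a minimal sketch; `user` is a placeholder for your VSC username), so that `ssh vsc5` is enough afterwards:

```
Host vsc5
    HostName vsc5.vsc.ac.at
    User user
Host vsc4
    HostName vsc4.vsc.ac.at
    User user
```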
We have private nodes at our disposal; to use them you need to specify the correct account in the jobs you submit to the queueing system (SLURM). The correct account information is given to you in the registration email.
IMGW customizations in the shell
Optionally, you can use some shared shell scripts that provide information about the VSC system for users.
Load IMGW environment settings
```bash
# run the install script, which just appends to your PATH variable
/gpfs/data/fs71386/imgw/install_imgw.sh
```
The following commands are then available:
- imgw-quota: shows the current quotas on VSC for both HOME and DATA
- imgw-container: Singularity/Apptainer container run script, see below
- imgw-transfersh: Transfer.sh service on wolke, to easily share small files
- imgw-cpuinfo: shows CPU information
There is a shared folder at /gpfs/data/fs71386/imgw/shared for data that needs to be used by multiple people. Please make sure that files are removed again as soon as possible. Thanks.
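For example (a sketch; `run2024` is just a placeholder directory name), copy data there and make it readable for the group:

```bash
# copy data to the shared folder and make it group-readable
cp -r run2024 /gpfs/data/fs71386/imgw/shared/
chmod -R g+rX /gpfs/data/fs71386/imgw/shared/run2024
```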
There are usually two sockets per Node, which means 2 CPUs per Node.
VSC-5 Compute Node
```
CPU model: AMD EPYC 7713 64-Core Processor
2 CPUs, 64 physical cores per CPU, 256 logical CPU units in total
512 GB memory
```
We have access to 11 private nodes of this kind, and to 1 GPU node with Nvidia A100 accelerators. Find the partition information with:
VSC-5 Quality of Service
```
$ sqos
        qos name  type  total res  used res  free res     walltime  priority  total n*  used n*  free n*
=========================================================================================================
     p71386_0512   cpu       2816      2816         0  10-00:00:00    100000        11       11        0
 p71386_a100dual   gpu          2         0         2  10-00:00:00    100000         1        0        1

* node values do not always align with resource values since nodes can be partially allocated
```
Storage on VSC-5
The HOME and DATA partitions are the same as on VSC-4.
Since fall 2023, after a major update, JET and VSC-5 are closely coupled: your files on JET are now accessible from VSC-5, e.g.

JET and VSC-5
```
a directory on JET
    /jetfs/home/[username]
can be found on VSC-5 under
    /gpfs/jetfs/home/[username]
```

You can also write directly to these directories; however, performance is higher on the native VSC-5 storage. This does not work on VSC-4.
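For example (a sketch; the file and directory names are placeholders), a job running on VSC-5 can read input directly from a JET home directory and write results to the faster VSC-5 storage:

```bash
# read from the JET home mounted on VSC-5, write to VSC-5 DATA storage
cp /gpfs/jetfs/home/[username]/input.nc $DATA/run01/
```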
VSC-4 Compute Node
```
CPU model: Intel(R) Xeon(R) Platinum 8174 CPU @ 3.10GHz
2 CPUs, 24 physical cores per CPU, 96 logical CPU units in total
378 GB memory
```
We have access to 5 private nodes of this kind, as well as to the JupyterHub on VSC. Check the available QOS with:
VSC-4 Quality of Service
```
$ sqos
            qos name  type  total res  used res  free res     walltime  priority  total n*  used n*  free n*
=============================================================================================================
         p71386_0384   cpu        480       288       192  10-00:00:00    100000         5        3        2
skylake_0096_jupyter   cpu        288        12       276   3-00:00:00      1000         3        1        2

* node values do not always align with resource values since nodes can be partially allocated
```
Storage on VSC-4
All quotas are shared between all IMGW/Project users:
- $HOME (up to 100 GB, all home directories)
- $DATA (up to 10 TB, backed up)
- $BINFL (up to 1 TB, fast scratch; will be retired)
- $BINFS (up to 2 GB, SSD, fast; will be retired)
- $TMPDIR (50% of main memory, deleted after the job finishes)
- /local (node-local on compute nodes, 480 GB SSD, deleted after the job finishes; see the staging example below)
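A common pattern (a sketch, not an official recipe; `my_model` and the file names are placeholders) is to stage I/O-heavy work on the fast scratch ($TMPDIR or /local) and copy the results back before the job ends:

```bash
# inside a job script: stage data on the fast scratch, run, copy results back
cp $DATA/input.nc $TMPDIR/
cd $TMPDIR
./my_model input.nc output.nc    # hypothetical executable
cp output.nc $DATA/results/
```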
Check the quotas by running the following commands yourself (inserting your PROJECTID), or use the imgw-quota command from the IMGW shell extensions:
Check VSC-4 IMGW quotas
```
$ mmlsquota --block-size auto -j data_fs71386 data
                         Block Limits                                    |     File Limits
Filesystem type         blocks      quota      limit   in_doubt   grace |    files   quota    limit in_doubt  grace  Remarks
data       FILESET      66.35T     117.2T     117.2T     20.45G    none |  4597941 5000000  5000000     1632   none  vsc-storage.vsc4.opa

$ mmlsquota --block-size auto -j home_fs71386 home
                         Block Limits                                    |     File Limits
Filesystem type         blocks      quota      limit   in_doubt   grace |    files   quota    limit in_doubt  grace  Remarks
home       FILESET      182.7G       200G       200G     921.6M    none |  1915938 2000000  2000000     1269   none  vsc-storage.vsc4.opa
```
Other Storage
We have access to the Earth Observation Data Center (EODC), where one can find primarily the following data sets:
- Sentinel-1, 2, 3
- Wegener Center GPS RO
These datasets can be found directly under /eodc/products/.
We are also given a private data storage location (/eodc/private/uniwien), where we can store up to 22 TB on VSC-4. However, that might change in the future.
Run time limits and queues
VSC-5 queues and limits:
VSC-5 Queues
```
$ sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
                Name   Priority GrpNodes     MaxWall                                    Descr
-------------------- ---------- -------- ----------- ----------------------------------------
              normal          0           1-00:00:00 Normal QOS default
         p71386_0384     100000          10-00:00:00 private nodes haimberger
    zen3_0512_a100x2       1000           3-00:00:00 public qos for a100 gpu nodes
           zen3_0512       1000           3-00:00:00 vsc-5 regular cpu nodes with 512 gb of +
     zen3_0512_devel    5000000             00:10:00 fast short qos for dev jobs
           zen3_1024       1000           3-00:00:00 vsc-5 regular cpu nodes with 1024 gb of+
           zen3_2048       1000           3-00:00:00 vsc-5 regular cpu nodes with 2048 gb of+
           idle_0512          1           1-00:00:00 vsc-5 idle nodes
           idle_1024          1           1-00:00:00 vsc-5 idle nodes
           idle_2048          1           1-00:00:00 vsc-5 idle nodes
```
The department has access to these partitions:
VSC-5 available partitions with QOS
```
       partition  QOS
------------------------------------------------
cascadelake_0384  cascadelake_0384
 zen2_0256_a40x2  zen2_0256_a40x2
zen3_0512_a100x2  zen3_0512_a100x2
       zen3_0512  zen3_0512,zen3_0512_devel
       zen3_1024  zen3_1024
       zen3_2048  zen3_2048
```
VSC-4 queues and limits:
VSC-4 Queues
```
$ sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
                Name   Priority GrpNodes     MaxWall                                    Descr
-------------------- ---------- -------- ----------- ----------------------------------------
         p71386_0384     100000          10-00:00:00 private nodes haimberger
                long       1000          10-00:00:00 long running jobs on vsc-4
           fast_vsc4    1000000           3-00:00:00 high priority access
            mem_0096       1000           3-00:00:00 vsc-4 regular nodes with 96 gb of memory
            mem_0384       1000           3-00:00:00 vsc-4 regular nodes with 384 gb of memo+
            mem_0768       1000           3-00:00:00 vsc-4 regular nodes with 768 gb of memo+
```
The department has access to these partitions:
VSC-4 available partitions with QOS
```
   partition  QOS
--------------------------
skylake_0096  skylake_0096,skylake_0096_devel
skylake_0384  skylake_0384
skylake_0768  skylake_0768
```
**Single/few-core jobs are allocated to nodes n4901-0[01-72] and n4902-0[01-72].**
SLURM allows setting a run time limit below the default run time limit of the QOS. After the specified time has elapsed, the job is killed.
Acceptable time formats include minutes, minutes:seconds, hours:minutes:seconds, days-hours, days-hours:minutes and days-hours:minutes:seconds.
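For example, each of the following directives (use only one per job) sets a valid run time limit:

```bash
#SBATCH --time=30             # 30 minutes
#SBATCH --time=2:00:00        # 2 hours
#SBATCH --time=1-12:00:00     # 1 day and 12 hours
```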
Example Job on VSC
We have to use the following keywords to make sure that the correct partitions and accounts are used:
- --partition=mem_xxxx (as given per email)
- --qos=xxxxxx (see below)
- --account=xxxxxx (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
Put this in the job file (e.g. for VSC-5 nodes):
VSC slurm example job
```bash
#!/bin/bash
#
#SBATCH -J TEST_JOB
#SBATCH -N 2
#SBATCH --ntasks-per-node=16
#SBATCH --ntasks-per-core=1
#SBATCH --mail-type=BEGIN                # first have to state the type of event to occur
#SBATCH --mail-user=<email@address.at>   # and then your email address
#SBATCH --partition=zen3_0512
#SBATCH --qos=p71386_0512
#SBATCH --account=p71386
#SBATCH --time=<time>

# when srun is used, you need to set (different from Jet):
srun -l -N2 -n32 a.out
# or
mpirun -np 32 a.out
```
- -J job name
- -N number of nodes requested (the VSC-5 CPU nodes described above provide 2 x 64 physical cores each)
- -n, --ntasks= specifies the number of tasks to run
- --ntasks-per-node number of processes run in parallel on a single node
- --ntasks-per-core number of tasks a single core should work on
- srun is an alternative command to mpirun; it provides direct access to SLURM-inherent variables and settings
- -l adds task-specific labels to the beginning of all output lines
- --mail-type sends an email at specific events. The SLURM documentation lists the following valid mail-type values: "BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL and REQUEUE), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), and TIME_LIMIT_50 (reached 50 percent of time limit). Multiple type values may be specified in a comma separated list." (cited from the SLURM documentation)
- --mail-user sends an email to this address
slurm basic commands
```bash
sbatch check.slrm     # submit the job
squeue -u `whoami`    # check the status of your own jobs
scancel JOBID         # premature removal, where JOBID
                      # is obtained from the previous command
```
Example of multiple simulations inside one job
Sample job for running multiple MPI tasks concurrently on one VSC-4 node.
Note: mem_per_task should be set such that
mem_per_task * mytasks < mem_per_node - 2 GB
The roughly 2 GB reduction in available memory accounts for the operating system residing in memory. For a standard node with 96 GB of memory this would be, e.g.:
23 GB * 4 = 92 GB < 94 GB
VSC-4 example concurrent job
```bash
#!/bin/bash
#SBATCH -J many
#SBATCH -N 1
# ... other slurm directives

# disable resource consumption by subsequent srun calls
export SLURM_STEP_GRES=none

mytasks=4
cmd="stress -c 24"
mem_per_task=10G

for i in `seq 1 $mytasks`
do
    srun --mem=$mem_per_task --cpus-per-task=2 --ntasks=1 $cmd &
done
wait
```
Software
The VSC uses the same software system as Jet and provides environment modules to the user:
VSC modules
```bash
module avail          # lists the available application software,
                      # compilers, parallel environments, and libraries
module list           # shows the currently loaded packages of your session
module unload <xyz>   # unload a particular package <xyz> from your session
module load <xyz>     # load a particular package <xyz> into your session
```
Loading a module (e.g. an Intel compiler suite) adds the corresponding variables to your environment.
Please do not forget to add the module load statements to your job scripts.
For details on how to use environment modules, see Using Environment Modules.
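For example, a job script might load its modules right before running the executable (a sketch; the module names are placeholders, check `module avail` for the names actually installed on VSC):

```bash
# inside a job script: load required modules, then run the program
module purge                         # start from a clean environment
module load intel openmpi netcdf     # placeholder module names
mpirun -np 32 ./a.out
```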
Import user-site packages
It is possible to install user-site packages into your .local/lib/python3.* directory:
installing python packages in your HOME
```bash
# installing a user site package
pip install --user [package]
```
Please remember that all HOME and DATA quotas are shared: installing a lot of packages creates a lot of files!
Python importing user site packages
```python
import sys, site
# this will add the correct path
sys.path.append(site.getusersitepackages())
```
Then you will be able to import all packages that are located in the user site.
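If HOME quota becomes a concern, one alternative (a sketch, not an official IMGW recipe) is to install packages into a directory on $DATA and point PYTHONPATH there:

```bash
# install into a self-chosen directory on $DATA instead of ~/.local
pip install --target=$DATA/python-libs xarray       # xarray is just an example package
# make Python aware of it (add this to your job scripts or ~/.bashrc)
export PYTHONPATH=$DATA/python-libs:$PYTHONPATH
```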
Containers
We can use complex software packaged in Singularity containers (doc), which can be executed on VSC-4. Please consider using one of the following containers:
- py3centos7anaconda3-2020-07-dev, located in the IMGW $DATA directory: /gpfs/data/fs71386/imgw
How to use?
Currently there is only one container with a run script.
```bash
# the run script in the container directory
/gpfs/data/fs71386/imgw/run.sh [arguments]
# executing the python inside
/gpfs/data/fs71386/imgw/run.sh python
# or ipython
/gpfs/data/fs71386/imgw/run.sh ipython
# with other arguments
/gpfs/data/fs71386/imgw/run.sh python analysis.py
```
Understanding the container
In principle, a run script needs to do only 3 things:
- load the singularity module
- set the SINGULARITY_BIND environment variable
- execute the container with your arguments
Setting SINGULARITY_BIND is necessary because the $HOME and $DATA or $BINFS paths are not standard Linux paths; the Linux inside the container does not know about them, so accessing files there from within the container would otherwise not be possible. If you have problems accessing other paths in the future, adding them to SINGULARITY_BIND might fix the issue. A minimal run-script sketch is shown below.
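A minimal sketch of such a run script (the bind paths are an assumption here; use the actual values of $HOME and $DATA on VSC):

```bash
#!/bin/bash
# 1) load the singularity module
module load singularity
# 2) bind the non-standard $HOME and $DATA paths so they are visible inside the container
export SINGULARITY_BIND="$HOME,$DATA"
# 3) execute the container, passing all command-line arguments through
/gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif "$@"
```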
In principle, you can also execute the container directly like this:
```bash
# check if the module is loaded
$ module load singularity
# just run the container, invoking its built-in runscript (running ipython):
$ /gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif
Python 3.8.3 (default, Jul  2 2020, 16:21:59)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]:

In [2]: %env DATA
Out[2]: '/gpfs/data/fs71386/USER'

In [3]: ls /gpfs/data/fs71386/USER
ls: cannot access /gpfs/data/fs71386/USER: No such file or directory

# Please note that the path is not available, because we did not set SINGULARITY_BIND
```
What is inside the container?
In principle, you can check what is inside by using:
Inspect a Singularity/Apptainer container
```bash
$ module load singularity
$ singularity inspect py3centos7anaconda3-2020-07-dev.sif
author: M.Blaschek
dist: anaconda2020.07
glibc: 2.17
org.label-schema.build-arch: amd64
org.label-schema.build-date: Thursday_7_October_2021_14:37:23_CEST
org.label-schema.schema-version: 1.0
org.label-schema.usage.singularity.deffile.bootstrap: docker
org.label-schema.usage.singularity.deffile.from: centos:7
org.label-schema.usage.singularity.deffile.stage: final
org.label-schema.usage.singularity.version: 3.8.1-1.el8
os: centos7
python: 3.8
```
which shows some information about the container, e.g. that CentOS 7, Python 3.8 and glibc 2.17 are installed.
You can also check the applications inside:
Execute commands inside a container
```bash
# List all executables inside the container
$ py3centos7anaconda3-2020-07-dev.sif ls /opt/view/bin
# or using conda for the environment
$ py3centos7anaconda3-2020-07-dev.sif conda info
# for the package list
$ py3centos7anaconda3-2020-07-dev.sif conda list
```
which shows something like this:

anaconda environment list
```text
# packages in environment at /opt/software/linux-centos7-haswell/gcc-4.8.5/anaconda3-2020.07-xl53rxqkccbjdufemaupvtuhs3wsj5d2:
#
# Name Version Build Channel
_anaconda_depends 2020.07 py38_0
_ipyw_jlab_nb_ext_conf 0.1.0 py38_0
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_gnu conda-forge
alabaster 0.7.12 py_0
alsa-lib 1.2.3 h516909a_0 conda-forge
anaconda custom py38_1
anaconda-client 1.7.2 py38_0
anaconda-navigator 1.9.12 py38_0
anaconda-project 0.8.4 py_0
appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
argh 0.26.2 py38_0
asciitree 0.3.3 py_2 conda-forge
asn1crypto 1.3.0 py38_0
astroid 2.4.2 py38_0
astropy 4.0.1.post1 py38h7b6447c_1
atomicwrites 1.4.0 py_0
attrs 19.3.0 py_0
autopep8 1.5.3 py_0
babel 2.8.0 py_0
backcall 0.2.0 py_0
backports 1.0 py_2
backports.functools_lru_cache 1.6.1 py_0
backports.shutil_get_terminal_size 1.0.0 py38_2
backports.tempfile 1.0 py_1
backports.weakref 1.0.post1 py_1
beautifulsoup4 4.9.1 py38_0
bitarray 1.4.0 py38h7b6447c_0
bkcharts 0.2 py38_0
blas 1.0 mkl
bleach 3.1.5 py_0
blosc 1.21.0 h9c3ff4c_0 conda-forge
bokeh 2.1.1 py38_0
boto 2.49.0 py38_0
bottleneck 1.3.2 py38heb32a55_1
brotlipy 0.7.0 py38h7b6447c_1000
bzip2 1.0.8 h7b6447c_0
c-ares 1.17.2 h7f98852_0 conda-forge
ca-certificates 2021.7.5 h06a4308_1
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
cairo 1.16.0 h6cf1ce9_1008 conda-forge
cartopy 0.20.0 py38hf9a4893_2 conda-forge
cdo 1.9.10 h25e7f74_6 conda-forge
certifi 2021.5.30 py38h578d9bd_0 conda-forge
cffi 1.14.0 py38he30daa8_1
cftime 1.5.1 py38h6c62de6_0 conda-forge
chardet 3.0.4 py38_1003
click 7.1.2 py_0
cloudpickle 1.5.0 py_0
clyent 1.2.2 py38_1
colorama 0.4.3 py_0
conda 4.10.3 py38h578d9bd_2 conda-forge
conda-build 3.18.11 py38_0
conda-env 2.6.0 1
conda-package-handling 1.6.1 py38h7b6447c_0
conda-verify 3.4.2 py_1
contextlib2 0.6.0.post1 py_0
cryptography 2.9.2 py38h1ba5d50_0
curl 7.79.1 h2574ce0_1 conda-forge
cycler 0.10.0 py38_0
cython 0.29.21 py38he6710b0_0
cytoolz 0.10.1 py38h7b6447c_0
dask 2.20.0 py_0
dask-core 2.20.0 py_0
dbus 1.13.16 hb2f20db_0
decorator 4.4.2 py_0
defusedxml 0.6.0 py_0
diff-match-patch 20200713 py_0
distributed 2.20.0 py38_0
docutils 0.16 py38_1
eccodes 2.23.0 h11d1a29_2 conda-forge
entrypoints 0.3 py38_0
et_xmlfile 1.0.1 py_1001
expat 2.4.1 h9c3ff4c_0 conda-forge
fastcache 1.1.0 py38h7b6447c_0
fasteners 0.16.3 pyhd3eb1b0_0
fftw 3.3.10 nompi_hcdd671c_101 conda-forge
filelock 3.0.12 py_0
flake8 3.8.3 py_0
flask 1.1.2 py_0
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.13.1 hba837de_1005 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
freeglut 3.2.1 h9c3ff4c_2 conda-forge
freetype 2.10.4 h0708190_1 conda-forge
fribidi 1.0.10 h516909a_0 conda-forge
fsspec 0.7.4 py_0
future 0.18.2 py38_1
geos 3.9.1 h9c3ff4c_2 conda-forge
get_terminal_size 1.0.0 haa9412d_0
gettext 0.21.0 hf68c758_0
gevent 20.6.2 py38h7b6447c_0
glib 2.68.4 h9c3ff4c_0 conda-forge
glib-tools 2.68.4 h9c3ff4c_0 conda-forge
glob2 0.7 py_0
gmp 6.1.2 h6c8ec71_1
gmpy2 2.0.8 py38hd5f6e3b_3
graphite2 1.3.14 h23475e2_0
greenlet 0.4.16 py38h7b6447c_0
gst-plugins-base 1.18.5 hf529b03_0 conda-forge
gstreamer 1.18.5 h76c114f_0 conda-forge
h5netcdf 0.11.0 pyhd8ed1ab_0 conda-forge
h5py 3.4.0 nompi_py38hfbb2109_101 conda-forge
harfbuzz 3.0.0 h83ec7ef_1 conda-forge
hdf4 4.2.15 h10796ff_3 conda-forge
hdf5 1.12.1 nompi_h2750804_101 conda-forge
heapdict 1.0.1 py_0
html5lib 1.1 py_0
icu 68.1 h58526e2_0 conda-forge
idna 2.10 py_0
imageio 2.9.0 py_0
imagesize 1.2.0 py_0
importlib-metadata 1.7.0 py38_0
importlib_metadata 1.7.0 0
importlib_resources 5.2.2 pyhd8ed1ab_0 conda-forge
intel-openmp 2020.1 217
intervaltree 3.0.2 py_1
ipykernel 5.3.2 py38h5ca1d4c_0
ipython 7.16.1 py38h5ca1d4c_0
ipython_genutils 0.2.0 py38_0
ipywidgets 7.5.1 py_0
isort 4.3.21 py38_0
itsdangerous 1.1.0 py_0
jasper 2.0.14 ha77e612_2 conda-forge
jbig 2.1 hdba287a_0
jdcal 1.4.1 py_0
jedi 0.17.1 py38_0
jeepney 0.4.3 py_0
jinja2 2.11.2 py_0
joblib 0.16.0 py_0
jpeg 9d h516909a_0 conda-forge
json5 0.9.5 py_0
jsonschema 3.2.0 py38_0
jupyter 1.0.0 py38_7
jupyter_client 6.1.6 py_0
jupyter_console 6.1.0 py_0
jupyter_core 4.6.3 py38_0
jupyterlab 2.1.5 py_0
jupyterlab_server 1.2.0 py_0
keyring 21.2.1 py38_0
kiwisolver 1.2.0 py38hfd86e86_0
krb5 1.19.2 hcc1bbae_0 conda-forge
lazy-object-proxy 1.4.3 py38h7b6447c_0
lcms2 2.11 h396b838_0
ld_impl_linux-64 2.33.1 h53a641e_7
lerc 2.2.1 h9c3ff4c_0 conda-forge
libaec 1.0.6 h9c3ff4c_0 conda-forge
libarchive 3.5.2 hccf745f_1 conda-forge
libblas 3.9.0 1_h86c2bf4_netlib conda-forge
libcblas 3.9.0 5_h92ddd45_netlib conda-forge
libclang 11.1.0 default_ha53f305_1 conda-forge
libcurl 7.79.1 h2574ce0_1 conda-forge
libdeflate 1.7 h7f98852_5 conda-forge
libedit 3.1.20191231 h14c3975_1
libev 4.33 h516909a_1 conda-forge
libevent 2.1.10 h9b69904_4 conda-forge
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1d223b6_9 conda-forge
libgfortran-ng 11.2.0 h69a702a_9 conda-forge
libgfortran5 11.2.0 h5c6108e_9 conda-forge
libglib 2.68.4 h3e27bee_0 conda-forge
libglu 9.0.0 he1b5a44_1001 conda-forge
libgomp 11.2.0 h1d223b6_9 conda-forge
libiconv 1.16 h516909a_0 conda-forge
liblapack 3.9.0 5_h92ddd45_netlib conda-forge
liblief 0.10.1 he6710b0_0
libllvm11 11.1.0 hf817b99_2 conda-forge
libllvm9 9.0.1 h4a3c616_1
libnetcdf 4.8.1 nompi_hb3fd0d9_101 conda-forge
libnghttp2 1.43.0 h812cca2_1 conda-forge
libogg 1.3.5 h27cfd23_1
libopus 1.3.1 h7f98852_1 conda-forge
libpng 1.6.37 hbc83047_0
libpq 13.3 hd57d9b9_0 conda-forge
libsodium 1.0.18 h7b6447c_0
libsolv 0.7.16 h8b12597_0 conda-forge
libspatialindex 1.9.3 he6710b0_0
libssh2 1.10.0 ha56f1ee_2 conda-forge
libstdcxx-ng 11.2.0 he4da1e4_9 conda-forge
libtiff 4.3.0 hf544144_1 conda-forge
libtool 2.4.6 h7b6447c_5
libuuid 2.32.1 h14c3975_1000 conda-forge
libvorbis 1.3.7 he1b5a44_0 conda-forge
libwebp-base 1.2.1 h7f98852_0 conda-forge
libxcb 1.14 h7b6447c_0
libxkbcommon 1.0.3 he3ba5ed_0 conda-forge
libxml2 2.9.12 h72842e0_0 conda-forge
libxslt 1.1.33 h15afd5d_2 conda-forge
libzip 1.8.0 h4de3113_1 conda-forge
libzlib 1.2.11 h36c2ea0_1013 conda-forge
llvmlite 0.33.0 py38hc6ec683_1
locket 0.2.0 py38_1
lxml 4.6.3 py38hf1fe3a4_0 conda-forge
lz4-c 1.9.3 h9c3ff4c_1 conda-forge
lzo 2.10 h7b6447c_2
magics 4.9.1 hb6e17df_1 conda-forge
magics-python 1.5.6 pyhd8ed1ab_0 conda-forge
mamba 0.5.1 py38h6fd9b40_0 conda-forge
markupsafe 1.1.1 py38h7b6447c_0
matplotlib 3.4.3 py38h578d9bd_1 conda-forge
matplotlib-base 3.4.3 py38hf4fb855_0 conda-forge
mccabe 0.6.1 py38_1
metpy 1.1.0 pyhd8ed1ab_0 conda-forge
mistune 0.8.4 py38h7b6447c_1000
mkl 2020.1 217
mkl-service 2.3.0 py38he904b0f_0
mkl_fft 1.1.0 py38h23d657b_0
mkl_random 1.1.1 py38h0573a6f_0
mock 4.0.2 py_0
more-itertools 8.4.0 py_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.1.0 py38_0
msgpack-python 1.0.0 py38hfd86e86_1
multipledispatch 0.6.0 py38_0
mysql-common 8.0.25 ha770c72_2 conda-forge
mysql-libs 8.0.25 hfa10184_2 conda-forge
navigator-updater 0.2.1 py38_0
nbconvert 5.6.1 py38_0
nbformat 5.0.7 py_0
ncurses 6.2 he6710b0_1
netcdf4 1.5.7 nompi_py38h2823cc8_103 conda-forge
networkx 2.4 py_1
nltk 3.5 py_0
nose 1.3.7 py38_2
notebook 6.0.3 py38_0
nspr 4.30 h9c3ff4c_0 conda-forge
nss 3.69 hb5efdd6_1 conda-forge
numba 0.50.1 py38h0573a6f_1
numcodecs 0.9.1 py38h709712a_0 conda-forge
numexpr 2.7.1 py38h423224d_0
numpy 1.19.2 py38h54aff64_0
numpy-base 1.19.2 py38hfa32c7d_0
numpydoc 1.1.0 py_0
olefile 0.46 py_0
openpyxl 3.0.4 py_0
openssl 1.1.1l h7f98852_0 conda-forge
ossuuid 1.6.2 hf484d3e_1000 conda-forge
packaging 20.4 py_0
pandas 1.0.5 py38h0573a6f_0
pandoc 2.10 0
pandocfilters 1.4.2 py38_1
pango 1.48.10 h54213e6_2 conda-forge
parso 0.7.0 py_0
partd 1.1.0 py_0
patchelf 0.11 he6710b0_0
path 13.1.0 py38_0
path.py 12.4.0 0
pathlib2 2.3.5 py38_0
pathtools 0.1.2 py_1
patsy 0.5.1 py38_0
pcre 8.45 h9c3ff4c_0 conda-forge
pep8 1.7.1 py38_0
pexpect 4.8.0 py38_0
pickleshare 0.7.5 py38_1000
pillow 7.2.0 py38hb39fc2d_0
pint 0.17 pyhd8ed1ab_1 conda-forge
pip 20.1.1 py38_1
pixman 0.40.0 h7b6447c_0
pkginfo 1.5.0.1 py38_0
pluggy 0.13.1 py38_0
ply 3.11 py38_0
pooch 1.5.1 pyhd8ed1ab_0 conda-forge
proj 8.1.1 h277dcde_2 conda-forge
prometheus_client 0.8.0 py_0
prompt-toolkit 3.0.5 py_0
prompt_toolkit 3.0.5 0
psutil 5.7.0 py38h7b6447c_0
ptyprocess 0.6.0 py38_0
py 1.9.0 py_0
py-lief 0.10.1 py38h403a769_0
pycodestyle 2.6.0 py_0
pycosat 0.6.3 py38h7b6447c_1
pycparser 2.20 py_2
pycurl 7.43.0.5 py38h1ba5d50_0
pydocstyle 5.0.2 py_0
pyflakes 2.2.0 py_0
pygments 2.6.1 py_0
pylint 2.5.3 py38_0
pyodbc 4.0.30 py38he6710b0_0
pyopenssl 19.1.0 py_1
pyparsing 2.4.7 py_0
pyproj 3.2.1 py38h80797bf_2 conda-forge
pyqt 5.12.3 py38h578d9bd_7 conda-forge
pyqt-impl 5.12.3 py38h7400c14_7 conda-forge
pyqt5-sip 4.19.18 py38h709712a_7 conda-forge
pyqtchart 5.12 py38h7400c14_7 conda-forge
pyqtwebengine 5.12.1 py38h7400c14_7 conda-forge
pyrsistent 0.16.0 py38h7b6447c_0
pyshp 2.1.3 pyh44b312d_0 conda-forge
pysocks 1.7.1 py38_0
pytables 3.6.1 py38hdb04529_4 conda-forge
pytest 5.4.3 py38_0
python 3.8.3 hcff3b4d_2
python-dateutil 2.8.1 py_0
python-jsonrpc-server 0.3.4 py_1
python-language-server 0.34.1 py38_0
python-libarchive-c 2.9 py_0
python_abi 3.8 2_cp38 conda-forge
pytz 2020.1 py_0
pywavelets 1.1.1 py38h7b6447c_0
pyxdg 0.26 py_0
pyyaml 5.3.1 py38h7b6447c_1
pyzmq 19.0.1 py38he6710b0_1
qdarkstyle 2.8.1 py_0
qt 5.12.9 hda022c4_4 conda-forge
qtawesome 0.7.2 py_0
qtconsole 4.7.5 py_0
qtpy 1.9.0 py_0
readline 8.1 h46c0cb4_0 conda-forge
regex 2020.6.8 py38h7b6447c_0
requests 2.24.0 py_0
ripgrep 11.0.2 he32d670_0
rope 0.17.0 py_0
rtree 0.9.4 py38_1
ruamel_yaml 0.15.87 py38h7b6447c_1
scikit-image 0.16.2 py38h0573a6f_0
scikit-learn 0.23.1 py38h423224d_0
scipy 1.7.1 py38h56a6a73_0 conda-forge
seaborn 0.10.1 py_0
secretstorage 3.1.2 py38_0
send2trash 1.5.0 py38_0
setuptools 49.2.0 py38_0
shapely 1.7.1 py38hb7fe4a8_5 conda-forge
simplegeneric 0.8.1 py38_2
simplejson 3.17.5 py38h497a2fe_0 conda-forge
singledispatch 3.4.0.3 py38_0
sip 4.19.13 py38he6710b0_0
six 1.15.0 py_0
snappy 1.1.8 he6710b0_0
snowballstemmer 2.0.0 py_0
sortedcollections 1.2.1 py_0
sortedcontainers 2.2.2 py_0
soupsieve 2.0.1 py_0
sphinx 3.1.2 py_0
sphinxcontrib 1.0 py38_1
sphinxcontrib-applehelp 1.0.2 py_0
sphinxcontrib-devhelp 1.0.2 py_0
sphinxcontrib-htmlhelp 1.0.3 py_0
sphinxcontrib-jsmath 1.0.1 py_0
sphinxcontrib-qthelp 1.0.3 py_0
sphinxcontrib-serializinghtml 1.1.4 py_0
sphinxcontrib-websupport 1.2.3 py_0
spyder 4.1.4 py38_0
spyder-kernels 1.9.2 py38_0
sqlalchemy 1.3.18 py38h7b6447c_0
sqlite 3.36.0 h9cd32fc_2 conda-forge
statsmodels 0.11.1 py38h7b6447c_0
sympy 1.6.1 py38_0
tbb 2020.0 hfd86e86_0
tblib 1.6.0 py_0
terminado 0.8.3 py38_0
testpath 0.4.4 py_0
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 hbc83047_0
toml 0.10.1 py_0
toolz 0.10.0 py_0
tornado 6.0.4 py38h7b6447c_1
tqdm 4.47.0 py_0
traitlets 4.3.3 py38_0 conda-forge
typing_extensions 3.7.4.2 py_0
udunits2 2.2.27.27 hc3e0081_2 conda-forge
ujson 1.35 py38h7b6447c_0
unicodecsv 0.14.1 py38_0
unixodbc 2.3.7 h14c3975_0
urllib3 1.25.9 py_0
watchdog 0.10.3 py38_0
wcwidth 0.2.5 py_0
webencodings 0.5.1 py38_1
werkzeug 1.0.1 py_0
wheel 0.34.2 py38_0
widgetsnbextension 3.5.1 py38_0
wrapt 1.11.2 py38h7b6447c_0
wurlitzer 2.0.1 py38_0
xarray 0.19.0 pyhd8ed1ab_1 conda-forge
xlrd 1.2.0 py_0
xlsxwriter 1.2.9 py_0
xlwt 1.3.0 py38_0
xmltodict 0.12.0 py_0
xorg-fixesproto 5.0 h14c3975_1002 conda-forge
xorg-inputproto 2.3.2 h14c3975_1002 conda-forge
xorg-kbproto 1.0.7 h14c3975_1002 conda-forge
xorg-libice 1.0.10 h516909a_0 conda-forge
xorg-libsm 1.2.3 hd9c2040_1000 conda-forge
xorg-libx11 1.7.2 h7f98852_0 conda-forge
xorg-libxau 1.0.9 h14c3975_0 conda-forge
xorg-libxext 1.3.4 h7f98852_1 conda-forge
xorg-libxfixes 5.0.3 h7f98852_1004 conda-forge
xorg-libxi 1.7.10 h7f98852_0 conda-forge
xorg-libxrender 0.9.10 h7f98852_1003 conda-forge
xorg-renderproto 0.11.1 h14c3975_1002 conda-forge
xorg-xextproto 7.3.0 h14c3975_1002 conda-forge
xorg-xproto 7.0.31 h14c3975_1007 conda-forge
xz 5.2.5 h7b6447c_0
yaml 0.2.5 h7b6447c_0
yapf 0.30.0 py_0
zarr 2.10.1 pyhd8ed1ab_0 conda-forge
zeromq 4.3.2 he6710b0_2
zict 2.0.0 py_0
zipp 3.1.0 py_0
zlib 1.2.11 h36c2ea0_1013 conda-forge
zope 1.0 py38_1
zope.event 4.4 py38_0
zope.interface 4.7.1 py38h7b6447c_0
zstd 1.5.0 ha95c52a_0 conda-forge
```
Debugging on VSC-4
Currently (as of June 2021) there is no development queue on VSC-4, and the VSC support suggested the following workflow:
Debugging on VSC-4
```bash
# request resources from slurm (-N 1, a full node)
$ salloc -N 1 -p mem_0384 --qos p71386_0384 --no-shell
# once the node is assigned / the job is running,
# check with
$ squeue -u $USER
# connect to the node with ssh
$ ssh [Node]
# test and debug the model there
```
Otherwise, you can use one of the *_devel QOS/partitions and submit short test jobs to check your setup.
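For example, a short test job on the VSC-5 devel QOS could be submitted like this (a sketch; `test.slrm` and the requested resources are placeholders):

```bash
sbatch --partition=zen3_0512 --qos=zen3_0512_devel --account=p71386 -N 1 --time=00:10:00 test.slrm
```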
Last update: June 4, 2024
Created: January 26, 2023