
WRF


The present manual contains basic information about running the Weather Research and Forecasting Model on workstations and clusters of the Department of Meteorology and Geophysics in Vienna.

Besides this basic guide, more detailed information is available too:

  • Basic WRF usage, including compilation instructions.
  • Advanced WRF usage.
  • Data assimilation with WRF.
  • Workflows for common WRF tasks.
  • Configuration files on different servers.

What is WRF

WRF is a community-driven numerical weather prediction model, originally developed in the US in a collaboration between the research community (the National Center for Atmospheric Research, NCAR, part of the University Corporation for Atmospheric Research, UCAR) and the National Weather Service (the National Centers for Environmental Prediction, NCEP, part of the National Oceanic and Atmospheric Administration, NOAA).

Over the years, WRF evolved into two distinct models. ARW-WRF (Advanced Research WRF) is maintained by NCAR and is used by the research community. WRF-NMM is used operationally by the National Weather Service. We use ARW-WRF.

Most of the information about the ARW-WRF is accessible from the WRF users page. The formulation of the model (background theory, numerical aspects, dynamical core, parameterizations) is described in depth in a Technical description, which is periodically updated. The practical use of the model is described in a User guide. If you want to acknowledge the use of WRF in a manuscript or thesis and prefer not to refer to grey literature, you can cite the article by Skamarock and Klemp (2008).

NCAR periodically organizes WRF tutorials (one-week workshops for beginners). The teaching material from the WRF tutorials is available online and is a great source of information. There is also an online tutorial that covers the basics of installing and running WRF.

There is also a users' forum, which can be a source of information on solutions to common problems. However, most of the forum posts are about problems, and very few offer useful solutions. Browsing the forum in search of solutions is rarely worthwhile, but landing on a forum thread from a web search might be useful.

WRF and related programs run as executables on Linux machines and clusters. Running WRF requires access to a Linux terminal. If you work on Linux or macOS, this is trivial: just open a terminal window. If you work on Windows, consider using a Linux terminal emulator that supports X11 forwarding (a protocol that enables running interactive graphical applications on a remote server via ssh). There are several alternatives; one option that has proved to work well is MobaXterm.
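
For example, once you have a terminal, you connect to a remote server with ssh; the -X flag enables X11 forwarding. The user and host names below are placeholders, to be replaced with your own:

Bash
ssh -X USERNAME@SERVERNAME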

Getting WRF

Option 1: Get the WRF executables

If someone else has already compiled WRF on the computer you will be working on, you just need to:

  • Copy their WRF run directory. For instance on srvx1, some WRF run directories are available in /users/staff/serafin/RUN. You can get one with, for instance:
Bash
cp -rL /users/staff/serafin/RUN/WRFv4.4.2_scm/ .
  • Replicate exactly their compilation and runtime environment (see the chapter on "Setting up your environment" below).

Option 2: Get the source code

The WRF source code is available on Github, and there are several ways to get it.

  • Recommended: download one of the official releases: scroll down to the "Assets" section and choose one of the v*.tar.gz or v*.zip files (not the "Source code" ones; these are incomplete).

To download while working on the terminal on a remote server, use wget or curl:

Bash
wget "https://github.com/wrf-model/WRF/releases/download/v4.4.2/v4.4.2.tar.gz"
curl -OL "https://github.com/wrf-model/WRF/archive/refs/tags/v4.4.2.zip"

To uncompress the source code, use either of the following (depending on the format):

Bash
tar xzvf v4.4.2.tar.gz
unzip v4.4.2.zip
  • Clone the repository in a local directory:
Bash
git clone --recurse-submodules https://github.com/wrf-model/WRF.git
  • You can also import WRF from Github into a new empty Gitlab project on Phaidra. To get access to Gitlab, look at the ZID guidelines and send a request via email to support.phaidra@univie.ac.at. Once you have access credentials, click on "New Project", then "Import Project", then "Repository by URL". This method might be advisable if you want to use Gitlab for tracking your own changes to the WRF code, but do not want to rely on the official repository.

Quick start with WRF

Setting up your environment

You need to make the operating system aware of the software libraries required to compile and run WRF, both at compile time and at run time.

This is done by loading environment modules, with module load on srvx1/jet/VSC4, and with spack load on VSC5.

It is useful to save the information about a specific environment in a simple bash shell script (for instance: modules_srvx1.sh). Then, before compiling or running WRF, type source modules_srvx1.sh. See here for a few examples.
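
A minimal sketch of such a script is shown below, using one of the srvx1 module collections listed later on this page; the NETCDF path is a placeholder that must be adapted to your system (see "Make the prerequisite libraries available" below):

Bash
#!/bin/bash
# modules_srvx1.sh -- load the WRF compile/run environment (example module names)
module purge
module load intel-oneapi-compilers/2021.4.0 \
    intel-oneapi-mpi/2021.7.1-intel-2021.4.0 \
    hdf5/1.12.2-intel-2021.4.0 \
    netcdf-c/4.7.4-intel-2021.4.0 \
    netcdf-fortran/4.5.3-intel-2021.4.0
# Point NETCDF to the root directory of the netcdf-fortran installation
export NETCDF=/path/to/netcdf-fortran    # placeholder; find the real path with "module show"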

Compiling

If you already have a compiled version of WRF, go to the next step.

Compiling WRF for an idealized simulation (LES), assuming that you have a properly set software environment:

Bash
./configure
./compile em_les > compile.log 2>&1 &

The process is similar for other idealized test cases. Just change the compile target (e.g., em_hill2d_x).
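
If you are unsure which compile targets exist, running the compile script without arguments should print a usage message listing the available test cases (this is how recent WRF versions behave):

Bash
./compile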

Compiling WRF for a real-case simulation, assuming that you have a properly set software environment:

Bash
./configure
./compile em_real > compile.log 2>&1 &

Running

Running WRF for an idealized simulation (LES), assuming that you have a properly set software environment:

Bash
cd ./test/em_les
./ideal.exe
./wrf.exe

For other test cases, compilation might create a run_me_first.csh script in the same directory as the executables. If there is one, run it only once, before any other program. It links the lookup tables needed for the simulation (land use, parameterizations, etc.).
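
For example, for a case compiled as em_quarter_ss, the sequence might look like this (only if compilation actually created run_me_first.csh in that directory):

Bash
cd ./test/em_quarter_ss
./run_me_first.csh    # run once: links lookup tables into the current directory
./ideal.exe
./wrf.exe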

Running WRF for a real-case simulation, assuming that you have a properly set software environment:

Bash
cd test/em_real
ln -s $WPS_PATH/met_em* .
./real.exe
./wrf.exe

The met_em* files linked in this snippet are the outcome of the WRF preprocessing (interpolation of initial and boundary conditions from another model, or from reanalyses, onto the WRF grid); they are expected to reside in a directory pointed to by the environment variable $WPS_PATH.
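
For instance, you could define $WPS_PATH in your shell and check that the files are there before linking them (the path below is a placeholder for your own WPS run directory):

Bash
export WPS_PATH=/path/to/your/WPS
ls $WPS_PATH/met_em.d0*.nc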

The WRF preprocessing system (WPS) is a separate set of executables that can be built only after WRF is successfully compiled. To run WPS for a real-case simulation, getting initial and boundary conditions from ECMWF-IFS data on model levels, you could use a script such as the following. However, the script depends on namelists, variable tables and other settings files being correctly specified. See the detailed info pages for details.

Example: wrf-run-script.sh
#!/bin/bash
set -eu

# Set paths
date=20190726.0000
gribdir=/users/staff/serafin/data/GRIB_IC_for_LAM/ECMWF/TEAMx_convection/

# Run WPS
./geogrid.exe
./link_grib.csh ${gribdir}/${date}/*
./ungrib.exe
./calc_ecmwf_p.exe
./avg_tsfc.exe
mpirun -np 32 ./metgrid.exe

# Archive results and clean up
archive=./archive/TEAMxConv_${date}
mkdir -p ${archive}
mv geo_em.d0?.nc met_em*nc ${archive}
cp namelist.wps geogrid/GEOGRID.TBL.HIRES ${archive}
rm -fr FILE* PRES* TAVGSFC GRIBFILE* metgrid.log.*

Basic usage

Organization of the source code

After download and unpacking, the WRF source code looks like this:

Bash
[WRF-4.4.2]$ ls
total 236K
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 arch
drwxr-xr-x.  3 serafin users 8,0K 19 dic 18.37 chem
-rwxr-xr-x.  1 serafin users 4,0K 19 dic 18.37 clean
-rwxr-xr-x.  1 serafin users  17K 19 dic 18.37 compile
-rwxr-xr-x.  1 serafin users  37K 19 dic 18.37 configure
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 doc
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 dyn_em
drwxr-xr-x. 17 serafin users 4,0K 19 dic 18.37 external
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 frame
drwxr-xr-x. 16 serafin users 4,0K 19 dic 18.37 hydro
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 inc
-rw-r--r--.  1 serafin users 1,1K 19 dic 18.37 LICENSE.txt
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 main
-rw-r--r--.  1 serafin users  57K 19 dic 18.37 Makefile
drwxr-xr-x.  3 serafin users 8,0K 19 dic 18.37 phys
-rw-r--r--.  1 serafin users  18K 19 dic 18.37 README
-rw-r--r--.  1 serafin users 1,2K 19 dic 18.37 README.md
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 Registry
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 run
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 share
drwxr-xr-x. 17 serafin users 4,0K 19 dic 18.37 test
drwxr-xr-x.  4 serafin users 4,0K 19 dic 18.37 tools
drwxr-xr-x. 14 serafin users 4,0K 19 dic 18.37 var
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 wrftladj

Knowing in detail the structure of the source code is not necessary for the average user. However, the directories where most of the practical work is done are:

  • run: this is where the compiled executables and lookup tables will reside after compilation.
  • test: this contains several subdirectories, each of which refers to a specific compilation mode. For instance, compiling WRF for large-eddy simulation will link some executables in em_les, while compiling WRF for real-case simulations will link some other executables and lookup tables in em_real. Most of the test subdirectories refer to simple idealized simulations, some of which are two-dimensional. These test cases are used to validate the model's dynamical core (e.g., to check whether it correctly reproduces analytical solutions of the Euler or Navier-Stokes equations).

In some cases, editing the model source code is necessary. This mostly happens in these directories:

  • dyn_em: this contains the source code of the dynamical core of the model ("model dynamics") and of part of the initialization programmes.
  • phys: this contains the source code of parameterization schemes ("model physics").
  • Registry: large chunks of the WRF source code are generated automatically at compile time, based on the information contained in a text file called Registry. This file specifies for instance what model variables are saved in the output, and how.

Compiling the model

WRF is written in compiled languages (mostly Fortran and C), so it needs to be compiled before execution. It relies on external software libraries at compilation and runtime, so these libraries have to be available on the system where WRF runs.

In general, compiled WRF versions are already available on all of our servers (SRVX1, JET, VSC4, VSC5) from the expert users. So, the easiest way of getting started is to copy a compiled version of the code from them (see below).

However, we describe the typical workflow of the compilation for anyone who wishes to try it out. There are three steps: (i) make libraries available, (ii) configure, (iii) compile.

Make the prerequisite libraries available

In most cases, precompiled libraries can be made available to the operating system using environment modules. Environment modules modify the Linux shell environment so that the operating system knows where to find specific executables, include files, software libraries, and documentation. Each server has its own set of available modules. As of 1.3.2023, WRF is known to compile and run with the following module collections.

SRVX1:

Bash
module load intel-parallel-studio/composer.2020.4-intel-20.0.4 \
    openmpi/3.1.6-intel-20.0.4 \
    hdf5/1.10.7-intel-20.0.4-MPI3.1.6 \
    netcdf-c/4.6.3-intel-20.0.4-MPI3.1.6 \
    netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6

SRVX1 (modules changed; 25.04.2023):

Bash
module load intel-oneapi-compilers/2021.4.0 \
    intel-oneapi-mpi/2021.7.1-intel-2021.4.0 \
    hdf5/1.12.2-intel-2021.4.0 \
    netcdf-c/4.7.4-intel-2021.4.0 \
    netcdf-fortran/4.5.3-intel-2021.4.0

JET (GNU Fortran compiler):

Bash
module load openmpi/4.0.5-gcc-8.5.0-ryfwodt \
    hdf5/1.10.7-gcc-8.5.0-t247okg \
    parallel-netcdf/1.12.2-gcc-8.5.0-zwftkwr \
    netcdf-c/4.7.4-gcc-8.5.0-o7ahi5o \
    netcdf-fortran/4.5.3-gcc-8.5.0-3bqsedn \
    gcc/8.5.0-gcc-8.5rhel8-7ka2e42

JET (Intel Fortran compiler):

Bash
module load intel-parallel-studio/composer.2020.2-intel-20.0.2-zuot22y zlib/1.2.11-intel-20.0.2-3h374ov \
    openmpi/4.0.5-intel-20.0.2-4wfaaz4 \
    hdf5/1.12.0-intel-20.0.2-ezeotzr \
    parallel-netcdf/1.12.1-intel-20.0.2-sgz3yqs \
    netcdf-c/4.7.4-intel-20.0.2-337uqtc \
    netcdf-fortran/4.5.3-intel-20.0.2-irdm5gq

JET (alternative setup with Intel Fortran compiler):

Bash
module load intel-oneapi-mpi/2021.4.0-intel-2021.4.0-eoone6i \
    hdf5/1.10.7-intel-2021.4.0-n7frjgz \
    parallel-netcdf/1.12.2-intel-2021.4.0-bykumdv \
    netcdf-c/4.7.4-intel-2021.4.0-vvk6sk5 \
    netcdf-fortran/4.5.3-intel-2021.4.0-pii33is \
    intel-oneapi-compilers/2021.4.0-gcc-9.1.0-x5kx6di

JET (modules changed; 11.04.2023):

Bash
module load intel-oneapi-compilers/2022.2.1-zkofgc5 \
    hdf5/1.12.2-intel-2021.7.1-w5sw2dq \
    netcdf-fortran/4.5.3-intel-2021.7.1-27ldrnt \
    netcdf-c/4.7.4-intel-2021.7.1-lnfs5zz \
    intel-oneapi-mpi/2021.7.1-intel-2021.7.1-pt3unoz

VSC4:

Bash
module load pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 \
    intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 \
    intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 \
    zlib/1.2.12-intel-2021.5.0-pctnhmb \
    hdf5/1.12.2-intel-2021.5.0-loke5pd \
    netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 \
    netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy

Load modules with module load LIST-OF-MODULE-NAMES, unload them one by one with module unload LIST-OF-MODULE-NAMES, unload all of them at once with module purge, and get information about a specific module with module show MODULE_NAME. Modules may depend on each other. If the system is set up properly, a request to load one module will automatically load any prerequisite ones.
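
For example, using one of the VSC4 module names listed above:

Bash
module show netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy    # inspect what the module sets
module load netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy    # load it (and, ideally, its prerequisites)
module list                                                # check what is currently loaded
module purge                                               # unload everything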

After loading modules, it is also recommended to set the NETCDF environment variable to the root directory of the netCDF installation. On srvx1, jet and VSC4, use module show to see which directory is correct. For instance:

Bash
(skylake) [serafins@l46 TEAMx_real]$ module list
Currently Loaded Modulefiles:
1) pkgconf/1.8.0-intel-2021.5.0-bkuyrr7                4) zlib/1.2.12-intel-2021.5.0-pctnhmb      7) netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
2) intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7   5) hdf5/1.12.2-intel-2021.5.0-loke5pd
3) intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32    6) netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2
(skylake) [serafins@l46 TEAMx_real]$ module show netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
-------------------------------------------------------------------
/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-skylake/netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy:

module-whatis   {NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. This is the Fortran distribution.}
prepend-path    PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/bin
prepend-path    LIBRARY_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib
prepend-path    LD_LIBRARY_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib
prepend-path    CPATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/include
prepend-path    MANPATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/share/man
prepend-path    PKG_CONFIG_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib/pkgconfig
prepend-path    CMAKE_PREFIX_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/
-------------------------------------------------------------------
(skylake) [serafins@l46 TEAMx_real]$ export NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj
(skylake) [serafins@l46 TEAMx_real]$ env|grep NETCDF
NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj

On VSC5, do not use module but spack:

Bash
spack load intel-oneapi-compilers
spack load netcdf-fortran@4.4.5%intel

To check the library paths of loaded modules:

Bash
(zen3) [serafins@l51 ~]$ spack find --loaded --paths
==> In environment zen3
...
==> 8 loaded packages
-- linux-almalinux8-zen2 / intel@2021.5.0 -----------------------
hdf5@1.10.5                /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/hdf5-1.10.5-tty2baooecmvy5vhfhyt5uc3bj46cwpl
intel-oneapi-mpi@2021.4.0  /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/intel-oneapi-mpi-2021.4.0-jjcwtufcblofydeg2s3vm7fjb3qsezpf
netcdf-c@4.7.0             /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-c-4.7.0-spzlhyrfnqcl53ji25zop2adp222ftq4
netcdf-fortran@4.4.5       /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
numactl@2.0.14             /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/numactl-2.0.14-beunpggnwwluwk7svx6zkjohv2ueayei
pkgconf@1.8.0              /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/pkgconf-1.8.0-ig5i4nqzqldjasgmkowp5ttfevdb4bnr
zlib@1.2.11                /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/zlib-1.2.11-6lzwo7c5o3db2q7hcznhzr6k3klh7wok

-- linux-almalinux8-zen3 / gcc@11.2.0 ---------------------------
intel-oneapi-compilers@2022.0.2  /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-compilers-2022.0.2-yzi4tsud2tqh4s6ykg2ulr7pp7guyiej
(zen3) [serafins@l51 ~]$ export NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
(zen3) [serafins@l51 ~]$ env|grep NETCDF
NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4

Important note: The environment must be consistent between compilation and runtime. If you compile WRF with a set of modules loaded, you must run it with the same set of modules.
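
A simple way to keep track of this (just a suggestion, not an official procedure) is to record the loaded modules at compile time and compare them before running:

Bash
# At compile time (module list prints to stderr on most systems):
module list 2> modules_at_compile_time.txt
# Later, before running wrf.exe:
module list 2> modules_now.txt
diff modules_at_compile_time.txt modules_now.txt && echo "Environment matches the compile-time setup"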

Configure WRF for compilation

This will test the system to check that all libraries can be properly linked. Type ./configure, pick a generic dmpar INTEL (ifort/icc) configuration (usually 15), answer 1 when asked if you want to compile for nesting, then hit enter. "dmpar" means "distributed memory parallelization" and enables running WRF in parallel computing mode. For test compilations or for a toy setup, you might also choose a "serial" configuration.

If all goes well, the configuration will end with a message like this:

Text Only
*****************************************************************************
This build of WRF will use NETCDF4 with HDF5 compression
*****************************************************************************

But the configuration could also end with a message like this (it happens for instance on srvx1):

Bash
************************** W A R N I N G ************************************
NETCDF4 IO features are requested, but this installation of NetCDF
  /home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23
DOES NOT support these IO features.

Please make sure NETCDF version is 4.1.3 or later and was built with
 --enable-netcdf4

OR set NETCDF_classic variable
  bash/ksh : export NETCDF_classic=1
       csh : setenv NETCDF_classic 1

Then re-run this configure script

!!! configure.wrf has been REMOVED !!!

*****************************************************************************

This is actually a misleading error message. The problem has nothing to do with NETCDF4 not being available, but with the operating system not detecting correctly all the dependencies of the NETCDF libraries. Solving this problem requires manually editing the configuration files (see below).

The configure script stores the model configuration in a file called configure.wrf. This file is specific to the source code version, to the server where the code is compiled, and to the software environment. If you have a working configure.wrf file for a given source code/server/environment, back it up.
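
For example:

Bash
cp configure.wrf configure.wrf.srvx1_intel_dmpar    # any descriptive suffix will do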

To solve the NETCDF4 error on srvx1: first, run configure and interrupt the process (Ctrl+C) before it raises the NetCDF warning, so that configure.wrf is not deleted. Then, make the following changes to the automatically generated configure.wrf:

Bash
(base) [serafin@srvx1 WRF]$ diff configure.wrf configure.wrf.dmpar
98c98
< DEP_LIB_PATH    = -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-c-4.6.3-5netrylc3im76bqg4vlo2ck4qd3jmrdt/lib
---
> DEP_LIB_PATH    = -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-c-4.6.3-5netrylc3im76bqg4vlo2ck4qd3jmrdt/lib -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/hdf5-1.10.7-nj3ahzinbfiwe5tnteupbfmx4empgh6l/lib
122c122
<                       -L$(WRF_SRC_ROOT_DIR)/external/io_netcdf -lwrfio_nf -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23/lib -lnetcdff
---
>                       -L$(WRF_SRC_ROOT_DIR)/external/io_netcdf -lwrfio_nf -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23/lib -lhdf5 -lnetcdff -lnetcdf

The first file, configure.wrf, is the result of the (wrong) automatic configuration. The second file, configure.wrf.dmpar, is the manually fixed one. In the latter, additional library link directives (-lhdf5 and -lnetcdf) are added to the variable LIB_EXTERNAL, and the paths to these extra libraries are added to the variable DEP_LIB_PATH.

Compile WRF

You always compile WRF for a specific model configuration. The ones we use most commonly are em_les (for large-eddy simulation), em_quarter_ss (for idealized mesoscale simulations), and em_real (for real-case forecasts). So type one of the following, depending on what you want to get:

Bash
./compile em_les > compile.log 2>&1 &
./compile em_quarter_ss > compile.log 2>&1 &
./compile em_real > compile.log 2>&1 &

The > compile.log tells the shell to redirect the output stream from the terminal to a file called compile.log. The 2>&1 tells the shell to merge the standard and error output streams, so compile.log will contain both regular output and error messages. The final & tells the shell to run the job in the background and return to the terminal prompt.

The compiled code will be created in the run directory, and some of the compiled programs will be linked in one of the test/em_les, test/em_quarter_ss or test/em_real directories. Executable WRF files typically have names ending with .exe (this is just a convention; it is not actually necessary for them to run).

Compilation may take half an hour or so. A successful compilation ends with:

Bash
==========================================================================
build started:   mer 19 ott 2022, 16.17.36, CEST
build completed: mer 19 ott 2022, 16.51.46, CEST

--->                  Executables successfully built                  <---

-rwxr-xr-x 1 serafin users 51042008 19 ott 16.51 main/ideal.exe
-rwxr-xr-x 1 serafin users 57078208 19 ott 16.51 main/wrf.exe

==========================================================================

If instead you get this:

Bash
==========================================================================
build started:   Thu Feb  2 16:30:55 CET 2023
build completed: Thu Feb 2 17:07:04 CET 2023

---> Problems building executables, look for errors in the build log  <---

==========================================================================

then you have a problem, and there is no unique solution. Take a closer look at compile.log and you might be able to diagnose it.
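
Since compile.log can be several thousand lines long, a quick way to locate the first failure is to search it for common error strings, for instance:

Bash
grep -in -E "error|fatal|undefined reference" compile.log | head -n 20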

Copying compiled WRF code

Running WRF in a software container

Running an idealized simulation

Running a real-case simulation

Output and restart files

incl. how to modify output paths

Suggested workflow

Analysing model output

Things to remember:

  • staggered grid (Arakawa-C)
  • mass-based vertical coordinate (level height AGL is time-dependent)
  • terrain-following coordinate system (curvilinear)
  • in the model output, some variables are split into base state + perturbation

Python interface to WRF

Example of a very basic Python class to create an object from a WRF run, initialized with only some basic information:

Python
import netCDF4
import numpy as np

class wrfrun:
    def __init__(self, filename):
        self.filename = filename
        self.nc = netCDF4.Dataset(filename)
        self.dx = self.nc.DX
        self.dy = self.nc.DY
        self.nx = self.nc.dimensions['west_east'].size
        self.ny = self.nc.dimensions['south_north'].size
        self.x = np.arange(0,self.nx*self.dx,self.dx)
        self.y = np.arange(0,self.ny*self.dy,self.dy)
        self.valid_times = self.nc['XTIME'][:]*60   # XTIME is in minutes; convert to seconds
        self.current_time = 0

    def set_time(self,step):
        self.current_time = step

    def add_wind(self):
        # Read the staggered wind components at the current time step and destagger them
        udum = self.nc['U'][self.current_time,:,:,:]
        vdum = self.nc['V'][self.current_time,:,:,:]
        wdum = self.nc['W'][self.current_time,:,:,:]
        self.u = 0.5*(udum[:,:,:-1]+udum[:,:,1:])
        self.v = 0.5*(vdum[:,:-1,:]+vdum[:,1:,:])
        self.w = 0.5*(wdum[:-1,:,:]+wdum[1:,:,:])
        del udum,vdum,wdum

The last method adds the destaggered 3D wind components at the currently selected time step.

The wrfrun class is then used as follows:

Python
wrf = wrfrun("./wrfout_d01_0001-01-01_00:00:00")
wrf.set_time(36)
wrf.add_wind()

Variables are then accessible as wrf.u, wrf.v etc.

Important namelist settings

Advanced usage

Changing the source code

Conditional compilation

Most Fortran compilers allow passing the source code through the C preprocessor (CPP; sometimes also called the Fortran preprocessor, FPP) to allow for conditional compilation. The preprocessor provides directives (such as #ifdef and #ifndef) that make it possible to compile portions of the source code selectively.

In the WRF source code, Fortran files have an .F extension. cpp will parse these files and create corresponding .f90 files. The .f90 files will then be compiled by the Fortran compiler.

This means:

  1. When editing the source code, always work on the .F files, otherwise changes will be lost on the next compilation.
  2. In the .F files, it is possible to include #ifdef and #ifndef directives for conditional compilation.

For instance, in dyn_em/module_initialize_ideal.F, the following bits of code define the model orography for idealized large-eddy simulation runs. Four possibilities are given: MTN, EW_RIDGE, NS_RIDGE, and NS_VALLEY. A block can be selected at compile time by defining the corresponding macro in ARCHFLAGS (see below) or by commenting out its #ifdef and #endif lines with !. If none is selected, none of these code lines is compiled and grid%ht(i,j) (the model orography) is set to 0:

Fortran
#ifdef MTN
  DO j=max(ys,jds),min(ye,jde-1)
  DO i=max(xs,ids),min(xe,ide-1)
     grid%ht(i,j) = mtn_ht * 0.25 * &
               ( 1. + COS ( 2*pi/(xe-xs) * ( i-xs ) + pi ) ) * &
               ( 1. + COS ( 2*pi/(ye-ys) * ( j-ys ) + pi ) )
  ENDDO
  ENDDO
#endif
#ifdef EW_RIDGE
  DO j=max(ys,jds),min(ye,jde-1)
  DO i=ids,ide
     grid%ht(i,j) = mtn_ht * 0.50 * &
               ( 1. + COS ( 2*pi/(ye-ys) * ( j-ys ) + pi ) )
  ENDDO
  ENDDO
#endif
#ifdef NS_RIDGE
  DO j=jds,jde
  DO i=max(xs,ids),min(xe,ide-1)
     grid%ht(i,j) = mtn_ht * 0.50 * &
               ( 1. + COS ( 2*pi/(xe-xs) * ( i-xs ) + pi ) )
  ENDDO
  ENDDO
#endif
#ifdef NS_VALLEY
  DO i=ids,ide
  DO j=jds,jde
     grid%ht(i,j) = mtn_ht
  ENDDO
  ENDDO
  xs=ids   !-1
  xe=xs + 20000./config_flags%dx
  DO j=jds,jde
  DO i=max(xs,ids),min(xe,ide-1)
     grid%ht(i,j) = mtn_ht - mtn_ht * 0.50 * &
               ( 1. + COS ( 2*pi/(xe-xs) * ( i-xs ) + pi ) )
  ENDDO
  ENDDO
#endif

To control conditional compilation:

  1. Search for the variable ARCHFLAGS in configure.wrf
  2. Add the desired define statement at the bottom. For instance, to selectively compile the NS_VALLEY block above, do the following:
Makefile
ARCHFLAGS       =    $(COREDEFS) -DIWORDSIZE=$(IWORDSIZE) -DDWORDSIZE=$(DWORDSIZE) -DRWORDSIZE=$(RWORDSIZE) -DLWORDSIZE=$(LWORDSIZE) \
                     $(ARCH_LOCAL) \
                     $(DA_ARCHFLAGS) \
                      -DDM_PARALLEL \
...
                      -DNMM_NEST=$(WRF_NMM_NEST) \
                      -DNS_VALLEY

Customizing model output

Adding namelist variables

Running offline nested simulations

Running LES with online computation of resolved-fluxes turbulent fluxes

WRFlux

Data assimilation (DA)

Observation nudging

Variational DA

WRFDA

Ensemble DA

We cover this separately. See DART-WRF.

Specific tasks

Before running the model

Defining the vertical grid

Customizing model orography

Defining a new geographical database

Using ECMWF data as IC/BC

To make a long story short: you should link grib1 files and process them with ungrib.exe using Vtable.ECMWF_sigma.

In more detail: for several years now, ECMWF has been distributing a mixture of grib2 and grib1 files, namely:

  • grib1 files for surface and soil model levels.
  • grib2 files for atmospheric model levels.

The WPS has a predefined Vtable for grib1 files from ECMWF, so the easiest way to process ECMWF data is to:

  1. convert model-level grib2 files to grib1
  2. if necessary, for every time stamp, concatenate the model-level and surface grib1 files into a single file. This is only necessary if the grib1 and grib2 data were downloaded as separate sets of GRIB files.
  3. process the resulting files with ungrib after linking ungrib/Variable_Tables/Vtable.ECMWF_sigma as Vtable

In detail:

  1. Conversion to grib1 (needs the grib_set utility from eccodes):
convert to grib1
for i in det.CROSSINN.mlv.20190913.0000.f*.grib2;
do
  j=`basename $i .grib2`.grib1;   # name the output .grib1 so the concatenation step below finds it
  grib_set -s deletePV=1,edition=1 ${i} ${j};
done
  2. Concatenation of grib files (two sets of files, *mlv* and *sfc*, with names ending with "grib1" yield a new set of files with names ending with "grib"; everything is grib1):
concatenate grib files
6
for i in det.CROSSINN.mlv.20190913.0000.f*.grib1;
do
  j=`echo $i|sed 's/.mlv./.sfc./'`;
  k=`echo $i|sed 's/.mlv././'|sed 's/.grib1/.grib/'`;
  cat $i $j > $k;
done
  3. In the WPS main directory:
link grib files and convert
link_grib.csh /data/GRIB_IC_for_LAM/ECMWF/20190913_CROSSINN_IOP8/det.CROSSINN.20190913.0000.f*.grib
ln -s ungrib/Variable_Tables/Vtable.ECMWF_sigma Vtable
./ungrib.exe

An alternative procedure would be to convert everything to grib2 instead of grib1. Then one has to use a Vtable with grib2 information for the surface fields, for instance the one included below. However, data from the bottom soil level will not be read correctly with this Vtable, because the Level2 value for the bottom level is actually MISSING in the grib2 files (at the time of writing, 6 May 2022; this may be fixed in the future).

Text Only
GRIB1| Level| From |  To  | metgrid  | metgrid  | metgrid                                  |GRIB2|GRIB2|GRIB2|GRIB2|
Param| Type |Level1|Level2| Name     | Units    | Description                              |Discp|Catgy|Param|Level|
-----+------+------+------+----------+----------+------------------------------------------+-----------------------+
 130 | 109  |   *  |      | TT       | K        | Temperature                              |  0  |  0  |  0  | 105 |
 131 | 109  |   *  |      | UU       | m s-1    | U                                        |  0  |  2  |  2  | 105 |
 132 | 109  |   *  |      | VV       | m s-1    | V                                        |  0  |  2  |  3  | 105 |
 133 | 109  |   *  |      | SPECHUMD | kg kg-1  | Specific humidity                        |  0  |  1  |  0  | 105 |
 152 | 109  |   *  |      | LOGSFP   | Pa       | Log surface pressure                     |  0  |  3  |  25 | 105 |
 129 | 109  |   *  |      | SOILGEO  | m        | Surface geopotential                     |  0  |  3  |  4  |  1  |
     | 109  |   *  |      | SOILHGT  | m        | Terrain field of source analysis         |  0  |  3  |  5  |  1  |
 134 | 109  |   1  |      | PSFCH    | Pa       |                                          |  0  |  3  |  0  |  1  |
 157 | 109  |   *  |      | RH       | %        | Relative Humidity                        |  0  |  1  |  1  | 105 |
 165 |  1   |   0  |      | UU       | m s-1    | U                                        |  0  |  2  |  2  | 103 |
 166 |  1   |   0  |      | VV       | m s-1    | V                                        |  0  |  2  |  3  | 103 |
 167 |  1   |   0  |      | TT       | K        | Temperature                              |  0  |  0  |  0  | 103 |
 168 |  1   |   0  |      | DEWPT    | K        |                                          |  0  |  0  |  6  | 103 |
 172 |  1   |   0  |      | LANDSEA  | 0/1 Flag | Land/Sea flag                            |  2  |  0  |  0  |  1  |
 151 |  1   |   0  |      | PMSL     | Pa       | Sea-level Pressure                       |  0  |  3  |  0  | 101 |
 235 |  1   |   0  |      | SKINTEMP | K        | Sea-Surface Temperature                  |  0  |  0  | 17  |  1  |
  34 |  1   |   0  |      | SST      | K        | Sea-Surface Temperature                  |  10 |  3  |  0  |  1  |
 139 | 112  |     0|   700| ST000007 | K        | T of 0-7 cm ground layer                 | 192 | 128 | 139 | 106 |
 170 | 112  |   700|  2800| ST007028 | K        | T of 7-28 cm ground layer                | 192 | 128 | 170 | 106 |
 183 | 112  |  2800| 10000| ST028100 | K        | T of 28-100 cm ground layer              | 192 | 128 | 183 | 106 |
 236 | 112  | 10000|     0| ST100289 | K        | T of 100-289 cm ground layer             | 192 | 128 | 236 | 106 |
  39 | 112  |     0|   700| SM000007 | fraction | Soil moisture of 0-7 cm ground layer     | 192 | 128 |  39 | 106 |
  40 | 112  |   700|  2800| SM007028 | fraction | Soil moisture of 7-28 cm ground layer    | 192 | 128 |  40 | 106 |
  41 | 112  |  2800| 10000| SM028100 | fraction | Soil moisture of 28-100 cm ground layer  | 192 | 128 |  41 | 106 |
  42 | 112  | 10000|     0| SM100289 | fraction | Soil moisture of 100-289 cm ground layer | 192 | 128 |  42 | 106 |
-----+------+------+------+----------+----------+------------------------------------------+-----------------------+

Spinning up soil fields

After running the model

Converting model output to CF-compliant NetCDF

  1. To convert WRF output to CF-compliant NetCDF, use wrfout_to_cf.ncl (from https://sundowner.colorado.edu/wrfout_to_cf/overview.html):
Text Only
ncl 'file_in="wrfinput_d01"' 'file_out="wrfpost.nc"' wrfout_to_cf.ncl

Interpolating model output to a new grid

  1. First convert to CF-compliant NetCDF (see above)

  2. Then use cdo to interpolate the CF-compliant WRF output:

    Text Only
    cdo -remapnn,gridfile.lonlat.txt wrfpost.nc wrfpost_interpolated.nc
    
  3. In the code snippet above, -remapnn specifies the interpolation engine, in this case nearest-neighbour. See alternatives here: https://code.mpimet.mpg.de/projects/cdo/wiki/Tutorial#Horizontal-fields

  4. File gridfile.lonlat.txt contains the grid specifications, e.g.:
    Text Only
    gridtype  = lonlat
    gridsize  = 721801
    xsize     = 1201
    ysize     = 601
    xname     = lon
    xlongname = "longitude"
    xunits    = "degrees_east"
    yname     = lat
    ylongname = "latitude"
    yunits    = "degrees_north"
    xfirst    = 5.00
    xinc      = 0.01
    yfirst    = 43.00
    yinc      = 0.01
    

Subsetting model output

Further compression of model output (data packing)

3D visualization

For 3D visualization of WRF output, it is recommended to use either Paraview or Mayavi.

  • Both programs are based on the Visualization Toolkit (VTK) libraries, so the resulting visualizations are rather similar.

  • Both programs can be used interactively from a graphical user interface or in batch mode (i.e., writing the visualization directives in a Python script).

  • While Paraview requires converting model data into one of a few supported formats, Mayavi supports direct rendering of Numpy objects, so it is easier to integrate it into Python code.

  • It is recommended to run 3D visualization software on GPUs. Running on a CPU (e.g., your own laptop) is possible, but will be extremely slow. The CPU is not the only bottleneck, because visualization software also uses a lot of memory. Rendering 3D fields, in particular, is out of reach for normal laptops with 8 GB or 16 GB of RAM. Paraview is available on VSC5 and should be available soon on srvx8. Currently, Mayavi must be installed by individual users as a Python package.

Notes for readers/contributors: (1) Mayavi has not been tested yet. (2) It would be useful to add example batch scripts for both Paraview and Mayavi.

Paraview workflow
  1. Pre-requisite: download and install the Paraview application on your computer.

  2. Log in to VSC5 in a terminal window.

  3. On VSC5, convert the WRF output into a format that Paraview can ingest. One option is to use siso.

Bash
siso -f vts ~/serafins/TEAMx_LES/output/100m/wrfout_d01_0001-01-01_00\:00\:00 > siso.out 2>&1 &
siso -f vts --planar ~/serafins/TEAMx_LES/output/100m/wrfout_d01_0001-01-01_00\:00\:00 > siso_sfc.out 2>&1 &

The first and second statements handle 3D and 2D WRF output, respectively. They process the native WRF output in netCDF format and return collections of files in VTS format (the VTK format for structured grids). There will be two independent datasets (for 3D and 2D output).

  4. In the VSC5 terminal, request access to a GPU node. One of the private IMGW nodes has a GPU, and can be accessed with specific account/partition/quality of service directives.
    Bash
    (zen3) [sserafin4@l50 ~]$ salloc -N 1 --gres=gpu:2 --account=p71386 -p zen3_0512_a100x2 -q p71386_a100dual
    salloc: Pending job allocation 233600
    salloc: job 233600 queued and waiting for resources
    salloc: job 233600 has been allocated resources
    salloc: Granted job allocation 233600
    salloc: Waiting for resource configuration
    salloc: Nodes n3072-006 are ready for job
    
  5. Once the GPU node becomes available, open up a new terminal session on your local machine, and set up an ssh tunnel to the GPU node through the login node.
Bash
(mypy39) stefano@stefano-XPS-13-9370:~$ ssh -L 11111:n3072-006:11112 sserafin4@vsc5.vsc.ac.at

This will redirect TCP/IP traffic from port 11111 of your local machine to port 11112 of the VSC5 GPU node, through the VSC5 login node. Port numbers are arbitrary, but the remote port (11112) needs to match the Paraview server settings (see below).

  6. In the VSC5 terminal, log in to the GPU node:
Bash
(zen3) [sserafin4@l50 ~]$ ssh n3072-006
Warning: Permanently added 'n3072-006,10.191.72.6' (ECDSA) to the list of known hosts.
sserafin4@n3072-006's password:

(zen3) [sserafin4@n3072-006 ~]$
  7. In the VSC5 terminal on the GPU node, load the Paraview module and start the Paraview server:
Bash
(zen3) [sserafin4@n3072-006 ~]$ module load paraview
(zen3) [sserafin4@n3072-006 ~]$ pvserver --force-offscreen-rendering --server-port=11112
Waiting for client...
Connection URL: cs://n3072-006:11112
Accepting connection(s): n3072-006:11112
  8. On your local machine, open the Paraview client (graphical user interface, GUI). Then select File > Connect and enter the URL of the Paraview server (localhost:11111). Select the datasets you want to display and work on them in the GUI. Save the Paraview state to avoid repeating work at the next session. Paraview has extensive documentation, tutorials (one, two and three) and a wiki.
Mayavi workflow

Not tested yet.

Creating a video

Whether done with Paraview or with Mayavi, the visualization will result in a collection of png files, e.g., InnValley.%04d.png. There are several tools to convert individual frames into movies; among them are ffmpeg and apngasm. At the moment neither of them is available on IMGW servers (precompiled binaries are available through apt-get for Ubuntu).
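
On an Ubuntu or Debian machine where you have administrator rights, installing them should amount to something like:

Bash
sudo apt-get install ffmpeg apngasm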

The basic method to create an mp4 movie is:

Bash
ffmpeg -i InnValley.%04d.png -c:v libx264 -r 12 -pix_fmt yuv420p InnValley.mp4

The method above might return an error if frames have an odd number of pixels in one dimension:

Bash
[libx264 @ 0x5651e5f02980] height not divisible by 2 (1066x1083)

The fix is as follows:

Bash
ffmpeg -i InnValley.%04d.png -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -r 12 -pix_fmt yuv420p InnValley.mp4

It is possible to add movie repetitions (similar to a loop). In this case, 3 additional loops are appended after the first one:

Bash
ffmpeg -stream_loop 3 -framerate 12 -i InnValley.%04d.png -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -pix_fmt yuv420p InnValley.mp4

It is also possible to generate movies in other formats, better suited for the web:

  • webp (most efficient compression for loops):
Bash
ffmpeg -framerate 12 -i InnValley.%04d.png InnValley.webp
  • animated png (bigger in size):
Bash
apngasm InnValley.png InnValley.0*png
  • gif (much bigger in size):
Bash
ffmpeg -framerate 12 -i InnValley.%04d.png InnValley.gif
  • For the example dataset, the collection of raw png files takes 59 MB while the video file sizes range between 4.5 and 70 MB:
Bash
(mypy39) stefano@stefano-XPS-13-9370:~/Desktop/Paraview_animation/anim$ du -hcs InnValley.0*png
59M   total

(mypy39) stefano@stefano-XPS-13-9370:~/Desktop/Paraview_animation/anim$ du -hcs InnValley.[pgmw]*
70M   InnValley.gif
14M   InnValley.mp4
51M   InnValley.png
4,5M  InnValley.webp

Useful tools


Last update: February 1, 2024
Created: February 28, 2023