
WRF basic

Basic usage

Organization of the source code

After downloading and unpacking, the WRF source code looks like this:

Bash
(base) [serafin@srvx1 WRF-4.4.2]$ ls
total 236K
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 arch
drwxr-xr-x.  3 serafin users 8,0K 19 dic 18.37 chem
-rwxr-xr-x.  1 serafin users 4,0K 19 dic 18.37 clean
-rwxr-xr-x.  1 serafin users  17K 19 dic 18.37 compile
-rwxr-xr-x.  1 serafin users  37K 19 dic 18.37 configure
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 doc
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 dyn_em
drwxr-xr-x. 17 serafin users 4,0K 19 dic 18.37 external
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 frame
drwxr-xr-x. 16 serafin users 4,0K 19 dic 18.37 hydro
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 inc
-rw-r--r--.  1 serafin users 1,1K 19 dic 18.37 LICENSE.txt
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 main
-rw-r--r--.  1 serafin users  57K 19 dic 18.37 Makefile
drwxr-xr-x.  3 serafin users 8,0K 19 dic 18.37 phys
-rw-r--r--.  1 serafin users  18K 19 dic 18.37 README
-rw-r--r--.  1 serafin users 1,2K 19 dic 18.37 README.md
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 Registry
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 run
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 share
drwxr-xr-x. 17 serafin users 4,0K 19 dic 18.37 test
drwxr-xr-x.  4 serafin users 4,0K 19 dic 18.37 tools
drwxr-xr-x. 14 serafin users 4,0K 19 dic 18.37 var
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 wrftladj

The average user does not need to know the structure of the source code in detail. However, the directories where most of the practical work is done are:

  • run: this is where the compiled executables and lookup tables will reside after compilation.
  • test: this contains several subdirectories, each of which refers to a specific compilation mode. For instance, compiling WRF for large-eddy simulation will link some executables in em_les, while compiling WRF for real-case simulations will link some other executables and lookup tables in em_real. Most of the test subdirectories refer to simple idealized simulations, some of which are two-dimensional. These test cases are used to validate the model's dynamical core (e.g., to check whether it correctly reproduces analytical solutions of the Euler or Navier-Stokes equations).

In some cases, editing the model source code is necessary. This mostly happens in these directories:

  • dyn_em: this contains the source code of the dynamical core of the model ("model dynamics") and of part of the initialization programmes.
  • phys: this contains the source code of the parameterization schemes ("model physics").
  • Registry: large chunks of the WRF source code are generated automatically at compile time, based on the information contained in a text file called Registry. This file specifies, for instance, which model variables are saved in the output, and how.

Compiling the model

WRF is written in compiled languages (mostly Fortran and C), so it needs to be compiled before execution. It relies on external software libraries at compilation and runtime, so these libraries have to be available on the system where WRF runs.

In general, compiled WRF versions are already available on all of our servers (SRVX1, JET, VSC4, VSC5), provided by the expert users. So the easiest way of getting started is to copy a compiled version of the code from them (see below).

Nevertheless, we describe the typical compilation workflow here, for anyone who wishes to try it out. There are three steps: (i) make the prerequisite libraries available, (ii) configure, (iii) compile.

Make the prerequisite libraries available

In most cases, precompiled libraries can be made available to the operating system using environment modules. Environment modules modify the Linux shell environment so that the system knows where to find specific executables, include files, software libraries, and documentation. Each server has its own set of available modules. As of 1.3.2023, WRF is known to compile and run with the following module collections.

SRVX1:

Bash
module load intel-parallel-studio/composer.2020.4-intel-20.0.4 openmpi/3.1.6-intel-20.0.4 hdf5/1.10.7-intel-20.0.4-MPI3.1.6 netcdf-c/4.6.3-intel-20.0.4-MPI3.1.6 netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6

SRVX1 (modules changed; 11.04.2023):

Bash
module load netcdf-fortran/4.5.3-intel-2020.4 intel-parallel-studio/composer.2020.4 netcdf-c/4.7.4-intel-2020.4 hdf5/1.12.2-intel-2020.4 intel-oneapi-mpi/2021.7.1-intel-2020.4

JET (GNU Fortran compiler):

Bash
module load openmpi/4.0.5-gcc-8.5.0-ryfwodt hdf5/1.10.7-gcc-8.5.0-t247okg parallel-netcdf/1.12.2-gcc-8.5.0-zwftkwr netcdf-c/4.7.4-gcc-8.5.0-o7ahi5o netcdf-fortran/4.5.3-gcc-8.5.0-3bqsedn gcc/8.5.0-gcc-8.5rhel8-7ka2e42    

JET (Intel Fortran compiler):

Bash
module load intel-parallel-studio/composer.2020.2-intel-20.0.2-zuot22y zlib/1.2.11-intel-20.0.2-3h374ov openmpi/4.0.5-intel-20.0.2-4wfaaz4 hdf5/1.12.0-intel-20.0.2-ezeotzr parallel-netcdf/1.12.1-intel-20.0.2-sgz3yqs netcdf-c/4.7.4-intel-20.0.2-337uqtc netcdf-fortran/4.5.3-intel-20.0.2-irdm5gq

JET (alternative setup with Intel Fortran compiler):

Bash
module load intel-oneapi-mpi/2021.4.0-intel-2021.4.0-eoone6i hdf5/1.10.7-intel-2021.4.0-n7frjgz parallel-netcdf/1.12.2-intel-2021.4.0-bykumdv netcdf-c/4.7.4-intel-2021.4.0-vvk6sk5 netcdf-fortran/4.5.3-intel-2021.4.0-pii33is intel-oneapi-compilers/2021.4.0-gcc-9.1.0-x5kx6di

JET (modules changed; 11.04.2023):

Bash
module load intel-oneapi-compilers/2022.2.1-zkofgc5 hdf5/1.12.2-intel-2021.7.1-w5sw2dq netcdf-fortran/4.5.3-intel-2021.7.1-27ldrnt netcdf-c/4.7.4-intel-2021.7.1-lnfs5zz intel-oneapi-mpi/2021.7.1-intel-2021.7.1-pt3unoz

VSC4:

Bash
module load pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 zlib/1.2.12-intel-2021.5.0-pctnhmb hdf5/1.12.2-intel-2021.5.0-loke5pd netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy

Load modules with module load LIST-OF-MODULE-NAMES, unload them one by one with module unload LIST-OF-MODULE-NAMES, unload all of them at once with module purge, and get information about a specific module with module show MODULE_NAME. Modules may depend on each other; if the system is set up properly, a request to load one module will automatically load any prerequisite ones.
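
For instance, taking the netcdf-fortran module from the VSC4 list above as an example:

Bash
module load netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy    # load (pulls in prerequisites if the system is set up for it)
module show netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy    # inspect what the module changes in the environment
module unload netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy  # unload this module only
module purge                                               # unload all loaded modules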

After loading the modules, it is also recommended to set the NETCDF environment variable to the root directory of the netCDF installation. On SRVX1, JET and VSC4, use module show to find out which directory is correct. For instance:

Bash
(skylake) [serafins@l46 TEAMx_real]$ module list
Currently Loaded Modulefiles:
1) pkgconf/1.8.0-intel-2021.5.0-bkuyrr7                4) zlib/1.2.12-intel-2021.5.0-pctnhmb      7) netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy  
2) intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7   5) hdf5/1.12.2-intel-2021.5.0-loke5pd     
3) intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32    6) netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2  
(skylake) [serafins@l46 TEAMx_real]$ module show netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
-------------------------------------------------------------------
/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-skylake/netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy:

module-whatis   {NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. This is the Fortran distribution.}
prepend-path    PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/bin
prepend-path    LIBRARY_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib
prepend-path    LD_LIBRARY_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib
prepend-path    CPATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/include
prepend-path    MANPATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/share/man
prepend-path    PKG_CONFIG_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib/pkgconfig
prepend-path    CMAKE_PREFIX_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/
-------------------------------------------------------------------
(skylake) [serafins@l46 TEAMx_real]$ export NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj
(skylake) [serafins@l46 TEAMx_real]$ env|grep NETCDF
NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj

On VSC5, do not use module but spack:

Bash
spack load intel-oneapi-compilers
spack load netcdf-fortran@4.4.5%intel

To check the installation paths of the loaded packages:

Bash
(zen3) [serafins@l51 ~]$ spack find --loaded --paths
==> In environment zen3
...
==> 8 loaded packages
-- linux-almalinux8-zen2 / intel@2021.5.0 -----------------------
hdf5@1.10.5                /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/hdf5-1.10.5-tty2baooecmvy5vhfhyt5uc3bj46cwpl
intel-oneapi-mpi@2021.4.0  /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/intel-oneapi-mpi-2021.4.0-jjcwtufcblofydeg2s3vm7fjb3qsezpf
netcdf-c@4.7.0             /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-c-4.7.0-spzlhyrfnqcl53ji25zop2adp222ftq4
netcdf-fortran@4.4.5       /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
numactl@2.0.14             /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/numactl-2.0.14-beunpggnwwluwk7svx6zkjohv2ueayei
pkgconf@1.8.0              /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/pkgconf-1.8.0-ig5i4nqzqldjasgmkowp5ttfevdb4bnr
zlib@1.2.11                /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/zlib-1.2.11-6lzwo7c5o3db2q7hcznhzr6k3klh7wok

-- linux-almalinux8-zen3 / gcc@11.2.0 ---------------------------
intel-oneapi-compilers@2022.0.2  /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-compilers-2022.0.2-yzi4tsud2tqh4s6ykg2ulr7pp7guyiej
(zen3) [serafins@l51 ~]$ export NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
(zen3) [serafins@l51 ~]$ env|grep NETCDF
NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4

Important note: The environment must be consistent between compilation and runtime. If you compile WRF with a set of modules loaded, you must run it with the same set of modules.
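
A simple way to ensure this is to collect the module commands in a small shell script and source it both before compiling and in the job script that runs WRF. A sketch using the VSC4 modules and NETCDF path from above (the file name load_wrf_env.sh is just an example):

Bash
# load_wrf_env.sh (example name): single place where the WRF build/run environment is defined
module purge
module load pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 zlib/1.2.12-intel-2021.5.0-pctnhmb hdf5/1.12.2-intel-2021.5.0-loke5pd netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
export NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj

Sourcing the same script (source load_wrf_env.sh) in the compilation shell and in the job script guarantees that both see the same environment.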

Configure WRF for compilation

This step tests the system to check that all libraries can be properly linked. Type ./configure, pick a generic dmpar INTEL (ifort/icc) configuration (usually number 15), answer 1 when asked whether you want to compile for nesting, then hit Enter. "dmpar" means "distributed-memory parallelization" and enables running WRF in parallel. For test compilations or for a toy setup, you might also choose a "serial" configuration.
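
In practice, the interactive session looks roughly like this (the option number can differ between WRF versions; check the list printed by the script):

Bash
./configure
# At the first prompt, pick the dmpar "INTEL (ifort/icc)" option (usually 15).
# At the nesting prompt, answer 1 (basic) and press Enter.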

If all goes well, the configuration will end with a message like this:

Text Only
*****************************************************************************
This build of WRF will use NETCDF4 with HDF5 compression
*****************************************************************************

But the configuration could also end with a message like this (it happens, for instance, on SRVX1):

Bash
************************** W A R N I N G ************************************
NETCDF4 IO features are requested, but this installation of NetCDF           
  /home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23
DOES NOT support these IO features.                                          

Please make sure NETCDF version is 4.1.3 or later and was built with         
 --enable-netcdf4                                                             

OR set NETCDF_classic variable                                               
  bash/ksh : export NETCDF_classic=1                                        
       csh : setenv NETCDF_classic 1                                        

Then re-run this configure script                                            

!!! configure.wrf has been REMOVED !!!

*****************************************************************************

This is actually a misleading error message. The problem is not that NETCDF4 features are unavailable, but that the configure script does not correctly detect all the dependencies of the NetCDF libraries. Solving this problem requires manually editing the configuration file (see below).

The configure script stores the model configuration in a file called configure.wrf. This file is specific to the source code version, to the server where the source code is compiled, and to the software environment. If you have a working configure.wrf file for a given source code/server/environment combination, back it up.

To solve the NETCDF4 error on SRVX1: first, run configure and interrupt the process (Ctrl+C) before it raises the NetCDF warning, so that configure.wrf is not deleted. Then, make the following changes to the automatically generated configure.wrf:

Bash
(base) [serafin@srvx1 WRF]$ diff configure.wrf configure.wrf.dmpar
98c98
< DEP_LIB_PATH    = -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-c-4.6.3-5netrylc3im76bqg4vlo2ck4qd3jmrdt/lib
---
> DEP_LIB_PATH    = -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-c-4.6.3-5netrylc3im76bqg4vlo2ck4qd3jmrdt/lib -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/hdf5-1.10.7-nj3ahzinbfiwe5tnteupbfmx4empgh6l/lib
122c122
<                       -L$(WRF_SRC_ROOT_DIR)/external/io_netcdf -lwrfio_nf -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23/lib -lnetcdff      
---
>                       -L$(WRF_SRC_ROOT_DIR)/external/io_netcdf -lwrfio_nf -L/home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23/lib -lhdf5 -lnetcdff -lnetcdf

The first file, configure.wrf, is the result of the (wrong) automatic configuration. The second file, configure.wrf.dmpar, is the manually fixed one. In the latter, additional library link directives (-lhdf5 and -lnetcdf) are added to the link flags (variable LIB_EXTERNAL), and the path to the HDF5 library directory is added to the variable DEP_LIB_PATH.

Compile WRF

You always compile WRF for a specific model configuration. The ones we use most commonly are em_les (for large-eddy simulations), em_quarter_ss (for idealized mesoscale simulations) and em_real (for real-case forecasts). So type one of the following, depending on what you want to get:

Bash
./compile em_les > compile.log 2>&1 &
./compile em_quarter_ss > compile.log 2>&1 &
./compile em_real > compile.log 2>&1 &

The > compile.log tells the shell to redirect the output stream from the terminal to a file called compile.log. The 2>&1 tells the shell to merge the standard output and standard error streams, so compile.log will contain both regular output and error messages. The final & runs the job in the background and returns control to the terminal prompt.
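
Because the compilation runs in the background, you can follow its progress with, for instance:

Bash
tail -f compile.log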

The compiled code will be created in the run directory, and some of the compiled programs will be linked in one of the test/em_les, test/em_quarter_ss or test/em_real directories, depending on the compilation target. Executable WRF files typically have names ending in .exe (this is just a convention; it is not actually necessary for them to run).
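
As a quick sanity check after compiling, for instance, the em_les target (the exact list of files depends on the compilation target):

Bash
ls -l main/*.exe test/em_les/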

Compilation may take half an hour or so. A successful compilation ends with:

Bash
==========================================================================
build started:   mer 19 ott 2022, 16.17.36, CEST
build completed: mer 19 ott 2022, 16.51.46, CEST

--->                  Executables successfully built                  <---

-rwxr-xr-x 1 serafin users 51042008 19 ott 16.51 main/ideal.exe
-rwxr-xr-x 1 serafin users 57078208 19 ott 16.51 main/wrf.exe

==========================================================================

If instead you get this:

Bash
==========================================================================
build started:   Thu Feb  2 16:30:55 CET 2023
build completed: Thu Feb 2 17:07:04 CET 2023

---> Problems building executables, look for errors in the build log  <---

==========================================================================

then you have a problem, and there is no single solution. Take a closer look at compile.log and you might be able to diagnose it.
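
A quick way to locate error messages in the log is, for instance:

Bash
grep -i -B 2 -A 2 "error" compile.log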

Copying compiled WRF code

Running WRF in a software container

Running an idealized simulation

Running a real-case simulation

Output and restart files

incl. how to modify output paths

Suggested workflow

Analysing model output

Things to remember:

  • staggered grid (Arakawa-C)
  • mass-based vertical coordinate (level height AGL is time-dependent)
  • terrain-following coordinate system (curvilinear)
  • in the model output, some variables are split into base state + perturbation

Python interface to WRF

Example of a very basic Python class to create an object from a WRF run, initialized with only some basic information:

Python
import netCDF4
import numpy as np

class wrfrun:
    def __init__(self, filename):
        self.filename = filename
        self.nc = netCDF4.Dataset(filename)             # open the wrfout file
        self.dx = self.nc.DX                            # horizontal grid spacing (m), from global attributes
        self.dy = self.nc.DY
        self.nx = self.nc.dimensions['west_east'].size
        self.ny = self.nc.dimensions['south_north'].size
        self.x = np.arange(0,self.nx*self.dx,self.dx)   # Cartesian grid-point coordinates along x and y
        self.y = np.arange(0,self.ny*self.dy,self.dy)
        self.valid_times = self.nc['XTIME'][:]*60       # XTIME is in minutes; convert to seconds
        self.current_time = 0                           # index of the currently selected output time

    def set_time(self,step):
        self.current_time = step

    def add_wind(self):
        # read the staggered wind components at the current time...
        udum = self.nc['U'][self.current_time,:,:,:]
        vdum = self.nc['V'][self.current_time,:,:,:]
        wdum = self.nc['W'][self.current_time,:,:,:]
        # ...and destagger them to the mass grid points by averaging adjacent faces
        self.u = 0.5*(udum[:,:,:-1]+udum[:,:,1:])
        self.v = 0.5*(vdum[:,:-1,:]+vdum[:,1:,:])
        self.w = 0.5*(wdum[:-1,:,:]+wdum[1:,:,:])
        del udum,vdum,wdum

The last method adds the 3D wind components at the currently selected output time, after destaggering them to the mass grid points.
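
Following the same pattern, further methods can reconstruct full fields from the base state + perturbation split mentioned above. A minimal sketch for geopotential height (PH and PHB are the perturbation and base-state geopotential in the WRF output; the method name add_height is just an example):

Python
    def add_height(self):
        # full geopotential = perturbation (PH) + base state (PHB), staggered on w-levels
        phdum = self.nc['PH'][self.current_time,:,:,:] + self.nc['PHB'][self.current_time,:,:,:]
        # destagger to mass levels and convert geopotential (m2 s-2) to height (m)
        self.z = 0.5*(phdum[:-1,:,:]+phdum[1:,:,:])/9.81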

The wrfrun class is then used as follows:

Python
wrf = wrfrun("./wrfout_d01_0001-01-01_00:00:00")
wrf.set_time(36)
wrf.add_wind()

Variables are then accessible as wrf.u, wrf.v etc.
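
From there on, standard plotting tools can be used. For instance, a horizontal cross-section of vertical velocity at one model level (a sketch assuming matplotlib is available; the level index 10 is arbitrary):

Python
import matplotlib.pyplot as plt

plt.pcolormesh(wrf.x, wrf.y, wrf.w[10], cmap='RdBu_r')  # w at model level 10, at the selected time
plt.colorbar(label='w (m/s)')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
plt.show()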

Important namelist settings


Last update: May 5, 2023
Created: May 5, 2023