521.wrf_r
The Weather Research and Forecasting Model (WRF) is maintained by a collaborative partnership, principally among the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (the National Centers for Environmental Prediction (NCEP) and the Forecast Systems Laboratory (FSL)), the Air Force Weather Agency (AFWA), the Naval Research Laboratory, the University of Oklahoma, and the Federal Aviation Administration (FAA). The list of current development teams can be found at http://www.wrf-model.org/development/development.php.
Weather Research and Forecasting.
521.wrf_r is based on Version 3.6.1 of the Weather Research and Forecasting Model (WRF) available from http://www.wrf-model.org/index.php. From the WRF Home page:
The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.
The input dataset to WRF covers the January 2000 North American Blizzard, beginning at midnight GMT on 24 January 2000 and running for 1 simulated day. The single, non-nested WRF domain is a grid of 74 by 61 cells over the Eastern United States and areas of the Atlantic along the eastern seaboard, at a horizontal resolution of 30 km (the horizontal dimension of a grid cell). There are 28 vertical levels. The time step is 60 seconds. The model generates history output at the beginning of the run and then every 3 simulated hours; a sketch of the corresponding namelist settings appears below.
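For orientation, run parameters like these typically appear in WRF's namelist.input roughly as sketched below. This is an illustrative excerpt, not a copy of the benchmark's actual file; in particular, the staggered-grid dimensions e_we and e_sn in a real WRF namelist are one larger than the corresponding cell counts.

    &time_control
      run_days         = 1,       ! 1 simulated day
      history_interval = 180,     ! history output every 3 simulated hours (value in minutes)
    /
    &domains
      time_step        = 60,      ! model time step, in seconds
      max_dom          = 1,       ! single, non-nested domain
      e_we             = 75,      ! west-east staggered dimension (74 cells + 1)
      e_sn             = 62,      ! south-north staggered dimension (61 cells + 1)
      e_vert           = 28,      ! 28 vertical levels
      dx               = 30000,   ! 30 km grid-cell size, in meters
      dy               = 30000,
    /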
Inputs for the throughput (rate) and speed tests are the same; the only difference is that the throughput test runs for 10 timesteps, while the speed test runs for 60 timesteps. For the speed test, OpenMP may be used to distribute the increased work over multiple threads of execution (a generic illustration follows).
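As an illustration of this kind of OpenMP work-sharing (a simplified stand-in, not WRF's actual tiling code), a loop over horizontal grid cells can be parallelized like this:

    subroutine step_field(field, nx, ny)
      implicit none
      integer, intent(in) :: nx, ny
      real, intent(inout) :: field(nx, ny)
      integer :: i, j
      ! Distribute rows of the horizontal grid across OpenMP threads
      !$omp parallel do private(i, j)
      do j = 1, ny
         do i = 1, nx
            field(i, j) = field(i, j) + 1.0   ! placeholder for a per-cell physics update
         end do
      end do
      !$omp end parallel do
    end subroutine step_field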
To validate the forecast, SPEC uses the WRF 'diffwrf' utility, which is included in the src/ directory for the benchmark and which is built at the same time that the main executable is built. diffwrf compares the fields computed by the benchmark against the expected reference values.
If a field does not match what diffwrf expects, it writes a line such as this one:
    Field  Ndifs  Tol  RMS (1)            RMS (2)            DIGITS  RMSE        pntwise max
    V10    4380   2    0.6537097127E+01   0.6447425267E+01   1       0.3738E+00  0.1498E+00
In the above, diffwrf reports that of all the V10 values computed by the benchmark, 4380 were not an exact match for the expected value. Diffwrf computes the root-mean-square (RMS) of the expected values (column 4, "RMS (1)") and of the benchmark-computed values (column 5, "RMS (2)"). Column 3 ("Tol") is the allowed tolerance when comparing these RMS values, and column 6 ("DIGITS") is the tolerance that would have been needed for this field to be considered as passing. These tolerances can be thought of, roughly, as the number of digits that are expected to match. More precisely, they are computed as log10(1.0 / (abs(rms1 - rms2) / rms2)). Column 7 ("RMSE") is the root-mean-square error between the expected and the benchmark-computed values, and column 8 ("pntwise max") indicates the maximum pointwise error seen.
In short: the V10 field is validated loosely, with only about 2 digits expected to match for its RMS; and in this example, the benchmark matched only about 1 digit. The worked example below reproduces the DIGITS computation for these values.
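As a worked check of that formula, here is a minimal sketch (the program is illustrative, not part of diffwrf) using the two RMS values from the example line above:

    program digits_check
      implicit none
      real(8) :: rms1, rms2, ndigits
      rms1 = 0.6537097127d+01    ! expected RMS (column 4)
      rms2 = 0.6447425267d+01    ! benchmark-computed RMS (column 5)
      ! Digits of agreement, as defined in the text above
      ndigits = log10( 1.0d0 / (abs(rms1 - rms2) / rms2) )
      print '(a,f5.2)', 'digits of agreement: ', ndigits
    end program digits_check

This prints a value of about 1.86, consistent with the reported DIGITS of 1 falling short of the required tolerance of 2.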
521.wrf_r uses both Fortran90 and C source.
No known issues.
Some calculations generate 'subnormal' (denormal) floating-point numbers, which may cause slower operation than normal numbers on some hardware platforms. On such platforms, performance may be improved if "flush to zero on underflow" (FTZ) is enabled; a small illustration follows. During SPEC's testing, the output validated correctly whether or not FTZ was enabled.
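As a minimal sketch of how a subnormal arises (not code from the benchmark): dividing the smallest normal single-precision value by two underflows into the subnormal range, and under FTZ the result becomes exactly zero instead.

    program subnormal_demo
      implicit none
      real(4) :: smallest_normal, x
      smallest_normal = tiny(1.0_4)   ! smallest positive normal single-precision value
      x = smallest_normal / 2.0_4     ! subnormal result, or 0.0 when FTZ is in effect
      print *, 'result:       ', x
      print *, 'flushed to 0?', x == 0.0_4
    end program subnormal_demo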
Portability flags and a debug suggestion: Approved portability flags are included with the Example config files in $SPEC/config (or, on Windows, %SPEC%\config) and with published results at www.spec.org/cpu2017/results. If you are developing for a new platform, you can use these as a reference. You may also find it useful to adjust (temporarily, in a work directory) the debug_level setting in namelist.input; an illustrative excerpt appears below. For example, setting debug_level=1 supplements an error code (such as ierr=-1021) by adding its message text to the log (in this case: NetCDF error: Invalid dimension id or name).
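In WRF 3.x, debug_level is typically set in the &time_control group of namelist.input; a minimal sketch of the change (surrounding entries omitted):

    &time_control
      debug_level = 1,   ! default is 0; raising it adds message text for error codes to the log
    /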
The benchmark was contributed directly to SPEC by UCAR; source code references to other terms under which the program may be available are therefore not relevant for the SPEC CPU version. It uses NetCDF; for details, see SPEC CPU2017 Licenses.
Last updated: 15 May 2017