Regression testing

AMR-Wind comes with a suite of regression tests to ensure that the relevant parts of the code work correctly and that code modifications do not introduce errors. These tests are run nightly using an automated system, and the results are published on a public dashboard. The testing covers multiple operating systems (currently Linux and macOS), different compilers (GCC, Intel, LLVM Clang, and NVIDIA CUDA), as well as two different versions of the AMReX library: a stable monthly snapshot and the version-of-the-day (VOTD) development branch. In addition to the nightly testing, continuous integration (CI) is used to test pull requests, ensuring that they pass all tests before they are merged into the mainline codebase.

Enabling and running regression tests

AMR-Wind uses CTest, through its integration with CMake, for the nightly regression test suite. Developers can enable and run the tests locally on their machine to assist development of new features and to ensure that their modifications do not break existing capabilities. Regression testing is not enabled by default in a CMake build; the user must explicitly enable it by setting the AMR_WIND_ENABLE_TESTS flag during the CMake configure phase, as shown below:

# Switch to the AMR-Wind source code
cd ${HOME}/exawind/source/amr-wind
# Create a build directory
mkdir build-test
cd build-test
# Run configure with testing flags
cmake -DAMR_WIND_ENABLE_TESTS=ON ../
# Build code
make -j 4

Tip

The flag AMR_WIND_ENABLE_TESTS activates the CTest facility so that the regression tests can be run with ctest. Running ctest in this mode executes the tests, but the results are not compared against any reference (i.e., gold) files. A successful test in this mode indicates only that the test completed without error; it does not imply correctness. For that, the reader is referred to the section Testing against gold files.

Upon a successful build, you will notice that CTest has been enabled.

# List available tests
ctest -N

# Run available tests (using 8 parallel processes)
# show output if a test fails otherwise suppress output
ctest -j 8 --output-on-failure

# Run only the tests that match a particular regex
ctest -R abl_godunov --output-on-failure -j 8

# Rerun only tests that failed previously
ctest --rerun-failed --output-on-failure -j 8

Warning

Regression testing is only available through the CMake build process. The legacy GNUmakefile process does not support regression testing via CTest; the user must manually run the cases with amr_wind and then run fcompare to check for differences.
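
A minimal sketch of that manual workflow is shown below. The input file, plotfile names, and executable locations are illustrative, and the reference plotfile is assumed to have been generated earlier with a trusted build.

# Run the case manually (input file name is illustrative)
./amr_wind abl_godunov.i

# Compare the new plotfile against a previously generated
# reference plotfile using the AMReX fcompare utility
./fcompare reference/plt00010 plt00010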

Testing against gold files

To use fcompare to determine whether the code has been built properly and generates correct results, gold solutions must be provided for the tests. Currently, the gold files are not released with the code, so a local copy must be generated by the user. The recommended process is to download a trusted reference version of AMR-Wind, build it, run the test suite once to generate the gold files, copy them into the appropriate directory, and use them during the development process. This is not an ideal setup and will be rectified in the future. We also recommend running the unit tests to verify that the build process produced a correct executable.
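
For example, assuming the unit tests were enabled in the build and are registered with CTest under a name containing "unit" (an assumption; the exact test name may differ), they can be run with:

# Run only the unit tests; the name pattern is an assumption
ctest -R unit --output-on-failure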

To enable regression testing with gold files, the user must set additional flags during the CMake configure phase: AMR_WIND_ENABLE_TESTS, AMR_WIND_TEST_WITH_FCOMPARE, AMR_WIND_REFERENCE_GOLDS_DIRECTORY, AMR_WIND_SAVE_GOLDS, and AMR_WIND_SAVED_GOLDS_DIRECTORY, as shown below:

# Switch to the AMR-Wind source code
cd ${HOME}/exawind/source/amr-wind
# Create a build directory
mkdir build-test
cd build-test
# Create the directory for the new gold files
mkdir -p golds/tmp
# Run configure with testing flags
# AMR_WIND_REFERENCE_GOLDS_DIRECTORY is the directory where the reference gold files are stored
# AMR_WIND_SAVE_GOLDS indicates that the gold files should be saved
# AMR_WIND_SAVED_GOLDS_DIRECTORY is the directory where the gold files are saved; it must already exist
cmake -DAMR_WIND_ENABLE_TESTS=ON \
      -DAMR_WIND_TEST_WITH_FCOMPARE=ON \
      -DAMR_WIND_REFERENCE_GOLDS_DIRECTORY=$(pwd)/golds/current \
      -DAMR_WIND_SAVE_GOLDS:BOOL=ON \
      -DAMR_WIND_SAVED_GOLDS_DIRECTORY=$(pwd)/golds/tmp \
      ../
# Build code
make -j 4

The flag AMR_WIND_TEST_WITH_FCOMPARE uses the AMReX fcompare utility to compare the plotfile outputs against the gold files and ensure that the results haven't changed. Upon a successful build, you will notice that a new executable, fcompare, has been built. The gold files directory is printed during the configure phase, as shown below:

-- AMR-Wind Information:
-- CMAKE_SYSTEM_NAME = Darwin
-- CMAKE_CXX_COMPILER_ID = AppleClang
-- CMAKE_CXX_COMPILER_VERSION = 15.0.0.15000100
-- CMAKE_BUILD_TYPE = Release
-- Test golds directory for fcompare: ${HOME}/exawind/source/amr-wind/build-test/golds/current/Darwin/AppleClang/15.0.0.15000100
-- Gold files will be saved to: ${HOME}/exawind/source/amr-wind/build-test/golds/tmp/Darwin/AppleClang/15.0.0.15000100
-- Configuring done (1.3s)
-- Generating done (0.6s)
-- Build files have been written to: ${HOME}/exawind/source/amr-wind/build-test

The gold files directory is organized as ${OS}/${COMPILER}/${COMPILER_VERSION}. The reference gold files must first be created with a reference branch of AMR-Wind and then saved in the reference gold directory:

# Ensure that you are in the build directory
# Run CTest for the first time (all tests will fail as there are no golds to compare with)
# The tests will fail with "amrex::Error::0::Couldn't open file:"
ctest -j 8

# Create the reference version of Golds (following the directory convention used above)
cp -R golds/tmp/ golds/current/

# Because of test dependencies, this needs to be done twice
ctest -j 8
rsync -a --delete golds/tmp/ golds/current/

Tip

A test failing with “amrex::Error::0::Couldn’t open file:” means that the test simulation completed successfully, but that no comparison was made with fcompare because the reference gold files are missing.
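
As an illustration, on a Linux machine with GCC 11.4.0 (the operating system, compiler, and version here are hypothetical, as are the test name and plotfile number), the reference gold files might be laid out roughly as:

golds/current/Linux/GNU/11.4.0/abl_godunov/plt00010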

Once that is done (and it should only need to be done once), the test suite can be run with the following:

# Rerun CTest again and all tests should pass
ctest -j 8

Example output for a failed test

The following shows an example of a failed test and the typical output generated by fcompare that can be used for diagnostics.

❯ ctest -R abl_godunov$ --output-on-failure
Test project ~/exawind/source/amr-wind/build-test
    Start 7: abl_godunov
1/1 Test #7: abl_godunov ......................***Failed    9.73 sec

            variable name            absolute error            relative error
                                        (||A - B||)         (||A - B||/||A||)
 ----------------------------------------------------------------------------
 level = 0
 velx                               0.0009695495942           0.0001370997978
 vely                               0.0009397088188           0.0001544075933
 velz                               0.0001684407299             0.00408613285
 gpx                                5.837947396e-05            0.003916799182
 gpy                                5.947263951e-05            0.003794860517
 gpz                                5.148686593e-05           0.0001801671463
 density                                          0                         0
 tracer0                            1.591615728e-12           5.155552515e-15
 vort                               0.0001047506059            0.002168282324


0% tests passed, 1 tests failed out of 1

Label Time Summary:
regression    =  38.90 sec*proc (1 test)

Total Test time (real) =   9.76 sec

The following tests FAILED:
        7 - abl_godunov (Failed)
Errors while running CTest

During testing, fcompare computes the differences for each field in the plotfile against the gold files. Currently, any difference is flagged as an error and causes the test to fail, as seen in the example above. A test can also fail if the grids don't match (e.g., due to different regridding driven by the refinement criteria) or if certain fields are missing from the plotfile.

Test file organization

The regression tests are organized in the directory amr-wind/test/test_files and are arranged in directories corresponding to the name of the test. Each directory contains the input file (<test-name>.i) and any other files necessary to run the test case. The test definitions are added to amr-wind/test/CTestList.cmake.
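
For example, a hypothetical test named abl_godunov would be laid out as follows (only the input file is required; auxiliary files are test-specific):

amr-wind/test/CTestList.cmake                        # test definitions
amr-wind/test/test_files/abl_godunov/abl_godunov.i   # input file for the test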

Creating new regression tests

New tests can be added using the following steps:

  • Create a new directory with the desired test name within the test/test_files directory.

  • Add the input file and any other files necessary to run the test.

  • Add a new entry to CTestList.cmake.

  • Rerun the CMake configure step so that CMake detects the new test.

  • Build, test, and refine the feature.

  • Commit the new test directory along with the relevant source code updates to Git.

For example, consider creating a new test called abl_godunov (an ABL simulation using the Godunov numerical scheme). The entry in the test list file is shown below:

add_test_re(abl_godunov 4)

The second argument to add_test_re specifies the number of parallel processes used to run the test. Currently, it is recommended that tests be run using 4 MPI ranks.
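
After adding the entry, the new test can be exercised in isolation. For example, from the build directory:

# Reconfigure so that CMake detects the new test, then build and run it
cmake ../
make -j 4
ctest -R abl_godunov --output-on-failure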

Test outputs and troubleshooting

During development, it is likely that some tests will fail, and it will be necessary to examine the outputs and plot files for troubleshooting. CTest stores all outputs in the same directory structure as the test files, but within the build directory. For example, if the build directory is build-test, then the outputs for the test abl_godunov are stored in build-test/test/test_files/abl_godunov/. At least two outputs are always generated: a log file (e.g., abl_godunov.log) that contains the output usually printed to the console during amr_wind execution, and the plotfile output.
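
When diagnosing a failure, it is often helpful to inspect these outputs directly and to rerun the comparison by hand. A sketch, assuming the commands are issued from the build directory; the plotfile name, gold path, and location of the fcompare executable within the build tree are illustrative:

# Inspect the console output captured for the test
less test/test_files/abl_godunov/abl_godunov.log

# Rerun the comparison by hand against the reference gold files
# (plotfile name and gold path are illustrative)
./fcompare golds/current/Linux/GNU/11.4.0/abl_godunov/plt00010 \
    test/test_files/abl_godunov/plt00010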