Introduction
ISSM relies on regression tests to check that no bug is introduced during code development. All the tests are located in $ISSM_DIR/test/NightlyRun, and all the tests whose numbers are less than 1000 are run after each change pushed to the svn repository (through Jenkins: https://ross.ics.uci.edu/jenkins/). See Jenkins for more information.
If a new capability is being added to ISSM, it is *critical* that a new test is created.
These nightly runs are not meant to be realistic: they rely on very coarse meshes, and we only want to check that the numerics have not changed, so the results themselves are meaningless. Each test should take less than 10 seconds to run (ideally 1 second).
Running a test
Tests are organized as follows:
- 1-999: "Nightly" series:
- 100s: Square ice shelf constrained (no ice front)
- 200s: Square ice shelf with ice front
- 300s: Square grounded ice constrained (no ice front)
- 400s: Square grounded ice with ice front (some floating ice)
- 500s: Pine Island Glacier (West Antarctica)
- 600s: 79 North (Greenland)
- 700s: flow band models
- 800s: valley glaciers
- 1000-1999 "validation" series:
- 1000s: ISMIP
- 1200s: EISMINT
- 1300s: Thermal validation
- 1400s: Mesh
- 1500s: Transient forcings
- 1600s: Thermal validation
- 2000-2999: "sea level" series
- 3000-3999: "ADOLC" series
Go to test/NightlyRun. To run a test without checking whether it worked or not, just type testxxx in MATLAB, where xxx is the test number. To check whether the results of that test are still consistent with the archive:
>> runme('id',XXX)
For tests that are >1000, you will need to specify that you want to use all the benchmarks:
>> runme('id',XXX,'benchmark','all')
If you would like to test a specific feature, you can use the function IdFromString. For example, if you would like to run all the tests that involve Dakota:
>> runme('id',IdFromString('Dakota'))
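Note that runme accepts a vector of test ids (IdFromString returns one), so you can also run a whole series at once. For example, to run the 100s series (this assumes ids without a matching test file are simply skipped; if your version of runme complains, pass only the ids that exist):
>> runme('id',101:199)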
You should see a list of SUCCESS messages. In very rare cases, you may get an ERROR that requires updating a tolerance. Check with the developers before changing any tolerance, though.
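Updating a tolerance means editing the corresponding entry of field_tolerances in the test file (see the next section). A purely hypothetical example, loosening the tolerance of the third checked field only:
field_tolerances={1e-13,1e-13,1e-11,1e-13}; %third field loosened after agreement with the developers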
Adding a new test
Files to change/create
You must create a new test file in test/NightlyRun. Make sure its number is consistent with the series above.
Create a testXXX.m file containing the simulation to test. The very first line should be a comment with the test name (which must be unique):
%Test Name: TEST_NAME_HERE
For example, test328.m:
%Test Name: SquareSheetConstrainedSmbGradients2d
At the end of the file, make sure to include field_names, field_tolerances and field_values: the names (no spaces) of the fields you want to check for your test, the associated tolerances (needed because every computer gives slightly different results), and the computed values themselves.
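As an illustration, here is a minimal sketch of what such a file might look like. The mesh resolution, Exp/Par files and checked fields below are placeholders borrowed from the existing square ice shelf tests, not a prescription; adapt them to whatever your new capability exercises:
%Test Name: MyNewCapabilityTest
md=triangle(model(),'../Exp/Square.exp',150000.); %very coarse mesh to keep the test fast
md=setmask(md,'all','');                          %all floating (ice shelf)
md=parameterize(md,'../Par/SquareShelfConstrained.par');
md=setflowequation(md,'SSA','all');
md=solve(md,'Stressbalance');

%Fields and tolerances to track changes
field_names     ={'Vx','Vy','Vel','Pressure'};
field_tolerances={1e-13,1e-13,1e-13,1e-13};
field_values    ={md.results.StressbalanceSolution.Vx,...
	md.results.StressbalanceSolution.Vy,...
	md.results.StressbalanceSolution.Vel,...
	md.results.StressbalanceSolution.Pressure};
The test driver compares each entry of field_values against the archived reference values within the associated tolerance.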
Run the following command to create the archive (the archive gives the reference values for all the fields that are checked):
>> runme('id',XXX,'procedure','update')
NOTE: The default benchmark used by the test driver is 'nightly', which includes only tests in the range 1-999. If you are manually running a test outside of this range, make sure to pass in the appropriate benchmark option.
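For instance, for a hypothetical new test numbered 1505 (transient forcings series), the archive would be created with both options combined (assuming your version of runme accepts them together):
>> runme('id',1505,'procedure','update','benchmark','all')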
This archive is located in $ISSM_DIR/test/Archives in binary format (.arch), and the same archive is used for the Python test if you create one as well. Now if you run
>> runme('id',XXX)
you should see SUCCESS everywhere. If not, you may need to adjust the tolerances slightly.
Note: do the same thing for testXXX.py, without creating the archive this time (the Python test uses the same archive).
Checking in the new test
In the case of a brand new test, run:
svn add test/NightlyRun/testXXX.m test/NightlyRun/testXXX.py test/Archives/ArchiveXXX.arch
then,
svn commit -m "NEW: Added test to check ..."
with an appropriate message. Otherwise you can simply update the archive and commit the changes.