Debugging Tips

Introduction

The following is a collection of tips for debugging various types of unexpected behavior when building and/or running ISSM.

NOTE: Some of this information may overlap with the public-facing ISSM Web site, and in some cases it may be best to simply host it there.

Valgrind

Information on using Valgrind can be found on the ISSM Web site "Debugging" page.

Jenkins Jobs

When debugging a given Jenkins build, one way to eliminate variables is by feeding its corresponding configuration file into the Jenkins driver script on a local machine,

cd $ISSM_DIR
./jenkins/jenkins.sh ./jenkins/<configuration_file>

As a bonus, you do not have to remember to run source $ISSM_DIR/etc/environment.sh after each external package installation. However, you may find yourself repeatedly removing more than one external package while searching for a combination that works well together. Deleting each installation by hand every time is tedious; instead, run,

for i in {<pkg1>,<pkg2>,...,<pkgn>}; do rm -rf $ISSM_DIR/externalpackages/$i/install; done

where <pkg*> is the name of an external package that you wish to remove.
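
For example, to force a rebuild of just two packages (the names petsc and m1qn3 are illustrative; use the directory names that actually appear under $ISSM_DIR/externalpackages),

for i in {petsc,m1qn3}; do rm -rf $ISSM_DIR/externalpackages/$i/install; done

Rerunning the Jenkins driver script should then rebuild only the packages whose install directories are missing.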

BinRead.py

Sometimes you may observe results from ISSM that exceed a given tolerance, differ depending on operating system and/or configuration, or seem to be incorrect altogether. In such cases it may be helpful to use BinRead.py, a script that renders the model settings marshalled into a binary ISSM input file in human-readable form (see 'Marshalling Binary Input Files' below).

Running Tests

Individual regression tests (designed for our Jenkins testing suite, but available to all users through the SVN repo) and the drivers for them are located in $ISSM_DIR/test/NightlyRun. With this as the working directory, tests can be run from within MATLAB with,

runme('id',<test_num>)
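
For example, assuming test 101 exists in the suite,

runme('id',101)

The help text at the top of runme.m documents additional options.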

Python tests can be run from a Unixy command line by first setting the environment with,

export PYTHONPATH="$ISSM_DIR/src/m/dev"
export PYTHONSTARTUP="${PYTHONPATH}/devpath.py"
export PYTHONUNBUFFERED=1

then calling the Python test driver with options,

./runme.py -i <test_num>
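
For example, again assuming test 101,

./runme.py -i 101

Note that the environment variables above need to be set only once per shell session.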

Various input and output files are generated for such runs, and are located in execution/test<test_num>-<date>-<time>-<pid>/.

NOTE: It may be difficult to sort out which subdirectory of execution/ corresponds to a given test run. As such, if you are doing multiple runs of the same test with the intention of comparing the results, it is recommended that you note the name of each new subdirectory as it is created.
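
If you do forget, the run directories can be sorted by modification time. A minimal sketch, assuming test 101 and a Unix shell,

ls -dt execution/test101-*/ | head -n 1

prints the most recently modified run directory for that test.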

Marshalling Binary Input Files

Now that we have run one or more tests, we can inspect the contents of the binary input file by running,

$ISSM_DIR/scripts/BinRead.py -f <bin_file> [-o <output_file>]

If we wish to compare the input files for, say, MATLAB versus Python runs of the same test, we can redirect the output of BinRead.py to a text file for each run, then run,

diff <matlab_output_file> <python_output_file>
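
A concrete sketch of the whole comparison, assuming test 101 and that each run directory contains a binary input file named test101.bin,

$ISSM_DIR/scripts/BinRead.py -f execution/test101-<MATLAB_run>/test101.bin -o test101_matlab.txt
$ISSM_DIR/scripts/BinRead.py -f execution/test101-<Python_run>/test101.bin -o test101_python.txt
diff test101_matlab.txt test101_python.txt

An empty diff indicates that both clients marshalled identical model settings.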

Comparing Queue Scripts

It is also worth comparing the queue scripts for MATLAB versus Python runs of the same test, with,

diff execution/test<test_num>-<MATLAB_run>/test<test_num>.queue execution/test<test_num>-<Python_run>/test<test_num>.queue
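
For example, again assuming test 101,

diff execution/test101-<MATLAB_run>/test101.queue execution/test101-<Python_run>/test101.queue

The queue script typically contains the command line used to launch the solver, so a difference here points directly at a discrepancy in how the two clients invoke ISSM.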

Dakota

Testing Installation

A number of built-in tests can be run to validate the functionality of your Dakota installation.

cd <path_to_dakota_installation>/share/dakota/test
./dakota_test.perl [options] [filename(s)] [test_number]

Note that extended options for the above test script are not well documented by Sandia. They can be listed by running,

./dakota_test.perl --help

In particular, it is useful to run,

./dakota_test.perl --parallel

to validate that MPI has been properly linked to the Dakota binaries.
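
Individual tests can also be targeted. A sketch, assuming a hypothetical input file name (list the *.in files in the test directory to see what is actually available),

./dakota_test.perl dakota_rosenbrock.in 0

where the trailing 0 selects the first test defined in that input file (Dakota numbers the tests within a file from 0).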
