OpenMS
How To Write Tests

Testing is crucial to verify the correctness of the library, especially when using C++, but it is complicated. One of the biggest problems when building large class frameworks is portability: no two C++ compilers accept exactly the same code. Since portability is one of the main concerns of OpenMS, we have to ensure that every single line of code compiles on all platforms. Due to the long compilation times and the (hopefully, in the future) large number of different platforms, tests that verify the correct behaviour of all classes have to be carried out automatically. This implies a well-defined interface for all tests, which is the reason for all these strange macros. This fixed format also enforces the writing of complete class tests.

Writing tests for each method of a class also ensures that each line is compiled. When using class templates, the compiler only compiles the methods that are actually called. A code segment can therefore contain syntactic errors and still be accepted, because most of the code is never instantiated. This is quickly discovered in a complete test of all methods. The same is true for configuration-dependent preprocessor directives that stem from platform dependencies. Untested code also often hides inside the const version of a method when there is a non-const method with the same name and arguments (for example, most of the getName() methods in OpenMS). In most cases, the non-const version is preferred by the compiler, and it is usually not obvious which version is taken. Again, explicitly testing each single method helps with this problem. The ideal way to tackle untested code is a complete coverage analysis of a class. Unfortunately, this is only supported for very few compilers, so it is not used for testing OpenMS.
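The following sketch illustrates the overload issue; the Peak class here is made up for illustration and is not an actual OpenMS class:

#include <string>

// Hypothetical class with const/non-const overloads of the same method.
struct Peak
{
  std::string& getName() { return name_; }              // preferred on non-const objects
  const std::string& getName() const { return name_; }  // only chosen through a const object or reference
  std::string name_;
};

int main()
{
  Peak p;
  p.getName();              // compiles and exercises only the non-const overload

  const Peak& cp = p;
  cp.getName();             // a test must also call through a const reference
                            // to cover the const overload
  return 0;
}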

Writing the test program is an opportunity to verify and complete the documentation. Often, implementation details are not yet clear when the documentation is written. A lot of side effects or special cases that were added later do not appear in the documentation. Going through the documentation and the implementation in parallel is the best way to verify the documentation for consistency and the best way to implement a test program.

There are two types of tests that can be written: TOPP tests and class or unit tests.

TOPP tests

Each TOPP tool and each UTIL should have at least one test that accompanies it. These tests are added to src/tests/topp/CMakeLists.txt and should test whether the program can produce a specific output given input data and parameters.

Add the commands to src/tests/topp/CMakeLists.txt (where they fit alphabetically). You can add a new (named) test with the add_test() command and set dependencies with the set_tests_properties() command if several files are needed but are created by different commands.

View the guidelines for adding your own tool to the TOPP suite here: https://openms.readthedocs.io/en/latest/docs/topp/adding-new-tool-to-topp.html#how-do-I-add-a-new-TOPP-test

Class or unit tests

Each OpenMS class has to provide a test program. This test program has to check each method of the class. The test programs reside in the directory src/tests/class_tests/<lib> and are usually named <classname>_test.cpp. The test program has to be coded using the class test macros as described in the OpenMS online reference. Special care should be taken to cover all special cases (e.g. what happens if a method is called with empty strings, negative values, zero, null pointers, etc.).

Supplementary files

If a test needs supplementary files, put these files in the src/tests/class_tests/<lib>/data folder. The names of supplementary files have to begin with the name of the tested class.

Macros to start, finish and evaluate tests

START_TEST(class_name, version)
Start of a class test file (initialization)
END_TEST
End of a class test file (cleanup)
START_SECTION(name)
Start of a method test. If the name starts with '[EXTRA]' it does not have to match a method's name.
END_SECTION
End of a single test
STATUS(message)
Shows a status message, e.g. used to show the progress of test preparations that take a while
ABORT_IF(condition)
Skips the remainder of the subtest if the condition holds
Comparison macros
TEST_EQUAL(a, b)
Tests if two expressions are equal
TEST_NOT_EQUAL(a, b)
Tests if two expressions are not equal
TEST_REAL_SIMILAR(a, b)
Tests if two real numbers are equal (within a margin)
TEST_STRING_EQUAL(a, b)
Tests if a and b are equal as strings
TEST_STRING_SIMILAR(a, b)
Tests if a and b are similar as strings - allowing numerical deviations and differing whitespaces
TOLERANCE_ABSOLUTE(a)
Sets the absolute difference allowed when testing floating point numbers
TOLERANCE_RELATIVE(a)
Sets the relative difference allowed when testing floating point numbers
TEST_EXCEPTION(exception_type, command)
Tests if the expression throws the exception
TEST_EXCEPTION_WITH_MESSAGE(exception_type, command, message)
Tests if the expression throws the exception and if the exception has the message
TEST_FILE_EQUAL(file, template_file)
Tests if two files are identical
TEST_FILE_SIMILAR(file, template_file)
Tests if two files are similar - allowing numerical deviations and differing whitespaces

Do not use expressions with side effects inside the comparison macros, e.g. *(it++). The expressions in the macros are evaluated several times, so the side effect is triggered several times as well.
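For example, inside a test section (a sketch; assumes <vector> is included):

std::vector<int> v = {1, 2, 3};
std::vector<int>::iterator it = v.begin();

// Risky: the macro may evaluate its argument more than once,
// so the iterator could be advanced more than once:
// TEST_EQUAL(*(it++), 1)

// Safe: perform the side effect outside the macro:
int first = *it;
++it;
TEST_EQUAL(first, 1)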

Temporary files

You might want to create temporary files during the tests. The following macro puts a temporary filename into the string argument. The file is automatically deleted after the test.

All temporary files are validated against the corresponding XML schema, if the file type can be determined by FileHandler. Therefore, NEW_TMP_FILE should be called for each file written in a test; otherwise, only the last written file is checked.

NEW_TMP_FILE(string)
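Inside a test section this is typically used like the following sketch. Here std::ofstream stands in for the tested class writing its output, the template filename is made up, and OPENMS_GET_TEST_DATA_PATH (from test_config.h) is assumed to resolve files in the test data folder:

String tmp_filename;
NEW_TMP_FILE(tmp_filename)  // fills tmp_filename; the file is deleted automatically after the test

std::ofstream out(tmp_filename.c_str());
out << "output produced by the tested class" << std::endl;
out.close();

// compare against a supplementary template file (hypothetical name)
TEST_FILE_EQUAL(tmp_filename, OPENMS_GET_TEST_DATA_PATH("MyClass_expected_output.txt"))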

Creating barebone tests

There are also some PHP tools for other testing tasks in the tools/ directory; see tools/README for details. These create barebone test files that need to be fleshed out with the specifics of your implementation.

Building tests

In order to build the tests, build the all target. Depending on the build system you generated, this target can have different names, e.g. make all for Makefiles and ALL_BUILD for Visual Studio. This builds the TOPP tools, the UTILS and all unit tests. Building the TOPP tools alone is not sufficient (you need FuzzyDiff, a UTIL, to run the tests).

Running tests

OpenMS uses CTest to run its tests. You can invoke the ctest executable in the OpenMS binary directory and it will run all tests (including TOPP tests). To run a specific test, use ctest -R <testname>, e.g. ctest -R TOPP_FileMerger to run all FileMerger tests. You can add -V or -VV to ctest to make the output more verbose. For Visual Studio and Xcode, provide the configuration to test, e.g. ctest -R TOPP_FileMerger -C Release to execute all FileMerger tests using the Release configuration.

Numerical inaccuracy

The TOPP tests will be run on 32 bit and 64 bit platforms. Therefore, a character-based comparison of computed and expected result files might fail although the results are in fact numerically correct - think of cases like 9.999e+3 vs. 1.0001e+4. Instead, a small program, FuzzyDiff, is provided as a UTIL. This program steps through both inputs simultaneously and classifies each position into three categories: numbers, characters and whitespace. Within each line of input, numbers are compared with respect to their ratio (i.e., relative error), characters must match exactly (e.g. case is significant), and all whitespace is considered equal. Empty lines or lines containing only whitespace are skipped, but extra line breaks 'within' lines will result in error messages. You can also define a "whitelist" of terms, which makes FuzzyDiff ignore lines in which these terms occur (useful for hardcoded file paths etc.). For more details and verbosity options, see the built-in help message and the source code.
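To illustrate the idea of the ratio-based comparison, here is a simplified sketch; it is not the actual FuzzyDiff implementation, which additionally handles tolerances, whitespace and whitelists:

#include <algorithm>
#include <cmath>
#include <iostream>

// Simplified ratio-based number comparison as described above.
bool numbers_similar(double a, double b, double relative_tolerance = 1e-3)
{
  if (a == b) return true;                  // also covers 0 == 0
  if (a == 0.0 || b == 0.0) return false;   // simplification: no absolute tolerance here
  double ratio = std::max(std::fabs(a), std::fabs(b)) / std::min(std::fabs(a), std::fabs(b));
  return ratio <= 1.0 + relative_tolerance;
}

int main()
{
  // character-wise these differ, but numerically they agree within 0.1 %
  std::cout << std::boolalpha << numbers_similar(9.999e+3, 1.0001e+4) << '\n';  // prints: true
  return 0;
}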

The data files should be as small as possible, but not totally trivial.