A unit testing framework in which tests are defined as member functions of test group classes. The test runner provides four execution models, allowing single threaded and concurrent runs, both in and out of process.
Unlike many other C++ unit test frameworks, the Balau unit test framework does not use preprocessor macros. Instead, tests are defined as parameterless instance methods of test group classes, and assertions are Hamcrest inspired template functions. A complete test class forms a test group in the resulting run. Test classes linked into the test application are automatically instantiated and register themselves with the test framework.
The Balau unit test framework does not use any external code generation tool to simulate the effect of the Java annotations found in Java based unit test frameworks such as JUnit and TestNG, from which many C++ unit test frameworks draw inspiration. Instead, the test methods of a class are simply added to the class' test run by registering them within the constructor.
Test classes typically mirror the production code classes. Using a friend class declaration in a production class allows the test class' methods to test private functions if this is required.
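As a sketch of this approach, a hypothetical production class might grant its test group access as follows (the Calculator and CalculatorTest names are illustrative only):

```cpp
// Hypothetical production class.
class Calculator {
	public: int add(int a, int b) const { return a + b; }

	// Private state that the tests may need to inspect.
	private: int lastResult = 0;

	// Grant the test group class access to private members.
	friend struct CalculatorTest;
};
```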
The Balau test runner has four execution models: SingleThreaded, WorkerThreads, WorkerProcesses, and ProcessPerTest.
The execution model can be selected by passing the relevant execution model enum value to the test runner initialisation method from the test application main function.
All tests are run by default. Selective running of tests is achieved by passing test group / test case names to the test runner initialisation method, via the argc and argv parameters in the test application's main function. Simple globbing can also be used to specify multiple tests to run.
```cpp
#include <Balau/Testing/TestRunner.hpp>
```
Tests are defined within test groups. Each test group is defined as a class.
Test group classes inherit from the Testing::TestGroup test runner base template class, using CRTP. Each test case is an instance method of the test class, which takes zero parameters and returns void. The constructor body of a test class registers test methods via calls to the registerTest method.
```cpp
// Example test group from the Balau unit tests.
struct ObjectTrieTest : public Testing::TestGroup<ObjectTrieTest> {
	ObjectTrieTest() {
		registerTest(&ObjectTrieTest::uIntTrieBuild,               "uIntTrieBuild");
		registerTest(&ObjectTrieTest::uIntTrieCopy,                "uIntTrieCopy");
		registerTest(&ObjectTrieTest::uIntTreeDepthIterate,        "uIntTreeDepthIterate");
		registerTest(&ObjectTrieTest::uIntTreeDepthIterateForLoop, "uIntTreeDepthIterateForLoop");
		registerTest(&ObjectTrieTest::uIntTreeBreadthIterate,      "uIntTreeBreadthIterate");
		registerTest(&ObjectTrieTest::fluentBuild,                 "fluentBuild");
	}

	void uIntTrieBuild();
	void uIntTrieCopy();
	void uIntTreeDepthIterate();
	void uIntTreeDepthIterateForLoop();
	void uIntTreeBreadthIterate();
	void fluentBuild();
};
```
The test application executes the test runner by calling the test runner's run method. Within the main function, the test runner is initialised and executed via one of the runner's run methods. The most commonly used run methods are those that take argc and argv arguments; the argc and argv arguments of the test application's main function are passed to the run method, which parses the command line arguments.
```cpp
#include <Balau/Testing/TestRunner.hpp>

using namespace Balau::Testing;

int main(int argc, char * argv[]) {
	return TestRunner::run(argc, argv);
}
```
The test runner uses the command line parser to parse a space separated command line, which can also contain globbed test name patterns specifying the tests to run.
The following command line options are available.
Short option | Long option | Has value | Description |
---|---|---|---|
-e | --execution-model | yes | The execution model (default = SingleThreaded). |
-n | --namespaces | no | Use namespaces in test group names (default is not to use). |
-p | --pause | no | Pause at exit (default is not to pause). |
-c | --concurrency | yes | The number of threads or processes to use to run the tests (default = detect). |
-r | --report-folder | yes | Generate test reports in the specified folder. |
-h | --help | no | Displays this help message. |
The test runner will interpret the first element of argv as the execution model (case insensitive) if it is a valid execution model, and the remainder of the command line arguments as a space/comma delimited list of globbed test names to run. If the first element of argv is not a valid execution model, it will form the head of the globbed test name list and the SingleThreaded execution model will be used by default.
In both cases, the zeroth element of argv is assumed to be the test application path and is ignored. If this is not the case, then the starting element can be specified as an optional argument of the run call.
Selective running of test cases is achieved by providing a space/comma delimited list of globbed test names to the test runner's run method. If no list is provided, all test cases are run.
There are two globbing patterns available:
Glob character | Meaning |
---|---|
* | Match zero or more characters. |
? | Match exactly a single character. |
Multiple patterns can be specified on the command line, either with a single command line argument containing a comma delimited list, or via multiple command line arguments representing a space delimited list.
```
# Run the Balau test application with the worker processes execution model
# and specifying a subset of tests via a comma delimited list of patterns.
BalauTests -e WorkerProcesses Injector::*,Environment::*

# Run the Balau test application with the worker processes execution model
# and specifying a subset of tests via a space delimited list of patterns.
BalauTests -e WorkerProcesses Injector::* Environment::*
```
The test runner has four execution models: SingleThreaded, WorkerThreads, WorkerProcesses, and ProcessPerTest.
For the multi-threaded and worker process execution models, the concurrency level (the number of threads or processes) can be optionally specified as an argument to the test runner constructor. The concurrency level is also used to specify the number of simultaneous processes to spawn in the process per test execution model.
If the concurrency level is not specified, the default value equal to the number of CPU cores is used.
Tests are grouped inside test classes.
Test classes derive from the Testing::TestGroup base class template, using CRTP. Each test is an instance method of the test class, which takes zero parameters and returns void. The constructor body of a test class registers test methods via calls to the registerTest method.
The following is the header of a test class which has four test methods.
```cpp
struct CommandLineTest : public Testing::TestGroup<CommandLineTest> {
	CommandLineTest() {
		registerTest(&CommandLineTest::basicTest,        "basicTest");
		registerTest(&CommandLineTest::failingTest,      "failingTest");
		registerTest(&CommandLineTest::finalValueTest,   "finalValueTest");
		registerTest(&CommandLineTest::numericValueTest, "numericValueTest");
	}

	void basicTest();
	void failingTest();
	void finalValueTest();
	void numericValueTest();
};
```
The result of running the above test class follows. One of the tests is failing.
```
------------------------- STARTING TESTS -------------------------
 Run type = single process, multi-threaded (2 threads)

++ Running test group CommandLineTest

 - Running test CommandLineTest::basicTest - passed. Duration = 174μs
 - Running test CommandLineTest::failingTest - FAILED!
   Assertion failed: true != false
   Duration = 140μs
 - Running test CommandLineTest::finalValueTest - passed. Duration = 147μs
 - Running test CommandLineTest::numericValueTest - passed. Duration = 354μs

== CommandLineTest group completed. Group duration (core clock time) = 675μs

------------------------- COMPLETED TESTS ------------------------

Total duration (test run clock time)    = 675μs
Average duration (test run clock time)  = 225μs
Total duration (application clock time) = 696μs

THERE WERE TEST FAILURES.

Total tests run: 4
  3 tests passed
  1 test failed

Failed tests:
	CommandLineTest::failingTest

Test application process with pid 13090 finished execution.

Process finished with exit code 0
```
Test classes may include setup and teardown methods. Due to the multi-process design of the test runner, only per-test setup/teardown methods are supported (i.e. there are no group level setup/teardown methods).
The following is a test class which has test setup and teardown methods defined:
```cpp
class CommandLineTest : public Testing::TestGroup<CommandLineTest> {
	public:
	CommandLineTest() {
		registerTest(&CommandLineTest::basicTest,        "basicTest");
		registerTest(&CommandLineTest::finalValueTest,   "finalValueTest");
		registerTest(&CommandLineTest::numericValueTest, "numericValueTest");
	}

	void basicTest();
	void finalValueTest();
	void numericValueTest();

	private:
	void setup() override {
		log(" CommandLineTest::setup() called.\n");
	}

	private:
	void teardown() override {
		log(" CommandLineTest::teardown() called.\n");
	}
};
```
The result of running the above test class follows.
```
------------------------- STARTING TESTS -------------------------
 Run type = single process, single threaded

++ Running test group CommandLineTest

 - Running test CommandLineTest::basicTest - passed. Duration = 65μs
   CommandLineTest::setup() called.
   CommandLineTest::teardown() called.
 - Running test CommandLineTest::finalValueTest - passed. Duration = 47μs
   CommandLineTest::setup() called.
   CommandLineTest::teardown() called.
 - Running test CommandLineTest::numericValueTest - passed. Duration = 191μs

== CommandLineTest group completed. Group duration (core clock time) = 303μs

------------------------- COMPLETED TESTS ------------------------

Total duration (test run clock time)    = 303μs
Average duration (test run clock time)  = 101μs
Total duration (application clock time) = 393μs

ALL TESTS PASSED: 3 tests executed

Test application process with pid 13431 finished execution.

Process finished with exit code 0
```
The test runner contains Hamcrest inspired assertion utilities that are available for use in test methods and setup/teardown methods.
In order to use the assertions, suitable operator == functions/methods will need to exist for the types of the objects specified in the assertion statements. In addition, each type will require a toString function to be defined. These are used by the assertion render functions. Refer to the documentation on the universal to-string function for more information.
In order for the compiler to pick up the correct toString and operator == functions, the header file(s) containing the functions must be included before the TestRunner.hpp header is included.
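As an illustrative sketch (the Point type and its helper functions are hypothetical, and the exact to-string conventions are described in the universal to-string documentation), a test source file might begin as follows:

```cpp
#include <string>

// Hypothetical value type used in the tests.
struct Point {
	int x;
	int y;
};

// Equality operator, required by the comparison assertions.
inline bool operator == (const Point & lhs, const Point & rhs) {
	return lhs.x == rhs.x && lhs.y == rhs.y;
}

// To-string function, used by the assertion failure renderers.
inline std::string toString(const Point & point) {
	return "(" + std::to_string(point.x) + ", " + std::to_string(point.y) + ")";
}

// Included after the toString and operator == definitions are visible.
#include <Balau/Testing/TestRunner.hpp>
```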
When writing tests in a .cpp file, it is convenient to import the assertion function symbols via a using directive:
```cpp
// Import the assertion functions.
using namespace Balau::Testing;
```
Examples of assertions can be found in the AssertionsTestData.hpp test file.
There are two ways to use the assertion functions. The first is directly:
```cpp
Balau::Testing::assertThat(actual, is(expected));
```
When an assertion fails, an error message is built by calling toString on each of the arguments, and a Balau::Exception::AssertionException is then thrown.
The alternative and recommended way of using the assertion functions is via the AssertThat macro. This macro also performs the assertion via Balau::Testing::assertThat, but additionally uses the __FILE__ and __LINE__ macros to supply the source code location of the assertion for logging.
```cpp
AssertThat(actual, is(expected));
```
As the AssertThat token is a macro, it should not be prefixed by a namespace.
Assertions for equality and other standard comparisons are available:
```cpp
AssertThat(actual, is(expected));
AssertThat(actual, isNot(expected));
AssertThat(actual, isGreaterThan(expected));
AssertThat(actual, isLessThan(expected));
AssertThat(actual, isGreaterThanOrEqual(expected));
AssertThat(actual, isLessThanOrEqual(expected));
AssertThat(actual, isAlmostEqual(expected, errorLimit));
```
These comparison assertions require the types involved to have the corresponding comparison functions defined. For the isAlmostEqual assertion, both <= and >= are required.
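For example, a floating point comparison with an explicit error limit might look like this (the values are illustrative):

```cpp
const double actual = 10.0 / 3.0;

// Passes when the difference between the actual and expected values is within the error limit.
AssertThat(actual, isAlmostEqual(3.3333, 0.001));
```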
Other types of comparison such as startsWith, endsWith, etc. are also available:
```cpp
AssertThat(actual, startsWith(expected));
AssertThat(actual, endsWith(expected));
AssertThat(actual, contains(expected));
AssertThat(actual, doesNotContain(expected));
```
However, these assertions require that the types involved provide a std::basic_string style API (length, substr, begin, end), and also require the Balau::contains(actual, expected) function to be defined for the actual and expected types. Unless such an API and helper function are defined for your own types, the use of these assertions is limited to std::basic_string<T>.
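As a usage sketch with std::string arguments (the values are illustrative):

```cpp
const std::string actual = "www.example.com";

AssertThat(actual, startsWith(std::string("www.")));
AssertThat(actual, endsWith(std::string(".com")));
AssertThat(actual, contains(std::string("example")));
AssertThat(actual, doesNotContain(std::string("ftp")));
```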
Assertions for expected thrown exceptions are available with a similar Hamcrest like API:
```cpp
AssertThat(function, throws<T>());
AssertThat(function, throws(expectedException));
AssertThat(function, throws(expectedException, comparisonFunction));
```
The usage of these assertions differs from the comparison assertions. The first argument passed to the assertThat call is a function, typically supplied as a lambda:
```cpp
AssertThat([&] () { foo(); }, throws<T>());
AssertThat([&] () { foo(); }, throws(expectedException));
AssertThat([&] () { foo(); }, throws(expectedException, comparisonFunction));
```
The function is called during the assertion call, and any exception thrown is then examined. The first form verifies that an exception of the expected type is thrown. The second and third forms compare the contents of the thrown exception with the supplied exception. The second form requires an equality operator to be defined for the exception type in order for the code to compile. The third form accepts a comparison function, which is used instead of the equality operator.

In order to use the exception instance assertion versions, a suitable operator == function must be defined for the exception, so that the test framework can compare the actual and expected exceptions. No such function is required for the exception type assertion version. A suitable toString function is also required for the exception class, so that the test runner can print the exception contents during assertion failures.
Examples of the use of the first and third exception assertion calls can be seen in the CommandLineTest class:
```cpp
// Exception type assertion.
AssertThat([&] () { commandLine.getOption(KEY9); }, throws<OptionValueException>());

// Exception contents assertion.
auto comp = [] (auto & a, auto & e) { return std::string(a.what()) == std::string(e.what()); };

AssertThat([&] () { commandLine.getOption(KEY9); }, throws(OptionValueException("key9"), comp));
```
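A sketch of the second (operator == based) variant follows, using a hypothetical exception type; the equality operator and to-string function shown are illustrative of the requirements described above:

```cpp
#include <exception>
#include <string>
#include <utility>

// Hypothetical exception type, with the equality operator and to-string
// function needed by the exception instance assertion.
class ConfigurationException : public std::exception {
	public: explicit ConfigurationException(std::string message_) : message(std::move(message_)) {}

	public: const char * what() const noexcept override { return message.c_str(); }

	friend bool operator == (const ConfigurationException & lhs, const ConfigurationException & rhs) {
		return lhs.message == rhs.message;
	}

	friend std::string toString(const ConfigurationException & e) {
		return e.message;
	}

	private: std::string message;
};

// The thrown exception is compared to the expected instance via operator ==.
AssertThat(
	  [] () { throw ConfigurationException("missing key"); }
	, throws(ConfigurationException("missing key"))
);
```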
The assertion methods call the following function to print the content of the actual and expected values in the case of an assertion failure:
```cpp
namespace Balau {

namespace Renderers {

template <typename A, typename E>
std::string render(const A & actual, const E & expected);

} // namespace Renderers

} // namespace Balau
```
The standard renderer calls the Balau toString functions for the inputs and prints each resulting line side by side, along with an infix == or != according to line equality.
Custom failure message renderers may be added by specialising the above template function within the Balau namespace.
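A minimal sketch of such a specialisation, using a hypothetical Matrix type, might look like this (the rendering logic is left as a placeholder):

```cpp
// Hypothetical type whose assertion failures need custom rendering.
struct Matrix { /* ... */ };

namespace Balau {

namespace Renderers {

// Specialisation used by the assertion framework when both
// the actual and expected values are Matrix instances.
template <> std::string render<Matrix, Matrix>(const Matrix & actual, const Matrix & expected) {
	// Build a side by side rendering of the two matrices here.
	return "<custom matrix rendering>";
}

} // namespace Renderers

} // namespace Balau
```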
The test runner has configurable test logging, allowing the test log to be written to a variety of outputs. This is achieved by passing one or more TestWriter derived classes to the test runner's run method. The following test writers are defined in the TestRunner.hpp header:
Class name | Description |
---|---|
StdOutTestWriter | Write to stdout. |
FileTestWriter | Write to the specified file. |
OStreamTestWriter | Write to the previously constructed output stream. |
LogWriter | Write to the specified Balau logger. |
Other test writers may be created if required, by deriving from the TestWriter base class.
In order to register test writers, they are specified as arguments to the test runner's run method:
```cpp
int main(int argc, char * argv[]) {
	// Run the test runner with two writers.
	return TestRunner::run(
		  argc, argv, 1, false, false
		, LogWriter("balau.test.output")
		, FileTestWriter(Resource::File("testOutput.log"))
	);
}
```
If no test writers are specified, the test runner will log to stdout by default.
The TestGroup base class contains two logging methods that can be used to log output to the test runner writers:
```cpp
///
/// Write additional logging to the test writers.
///
protected: void log(const std::string & string);

///
/// Write additional logging to the test writers.
/// A line break is written after the string.
///
protected: void logLine(const std::string & string = "");
```
These methods can be used anywhere in a test class to log additional test messages.
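For example, a test method such as CommandLineTest::basicTest (shown earlier) might emit additional messages as follows:

```cpp
void CommandLineTest::basicTest() {
	logLine("Starting basicTest.");

	// ... test body and assertions ...

	logLine("basicTest completed.");
}
```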
An alternative technique of logging test messages is to use the Balau logging framework and construct the test runner to log test results to a Balau logger. This allows the full parameter parsing of the logging framework to be used within the test class log messages. There are two ways of configuring the logging system for the test application.
More information on logging system configuration is available in the Logging documentation.
In addition to test result logging, the test runner can be configured to generate XML based test run report files.
The default reporter provides Maven Surefire plugin schema based XML reports. Alternative report generators may be defined by deriving from the TestReportGenerator base class and providing an instance of the reporter class to the test runner run function.
The test framework utilities provide domain specific help for management tasks commonly required during testing. They are at an early stage of development; currently only a pair of network related test utility functions is defined.
The test framework network utilities provide a way of obtaining a free TCP port for tests. There are two functions defined: getFreeTcpPort and initialiseWithFreeTcpPort.
These two functions are normally used together. The initialiseWithFreeTcpPort function takes some code to execute. This code is part of the test and should initialise the network state that requires the port.
From within the code supplied to the initialiseWithFreeTcpPort function, the getFreeTcpPort function should be called in order to obtain a free port that will be used to initialise the network state.
There are two possible failures that may occur when attempting to obtain a free port. The first is that the specified port is not available. This issue is mitigated by the call to getFreeTcpPort. This function takes start port and port count arguments, and tests for the availability of a free port within the specified port range.
Once a free port has been obtained, an attempt to bind to it can be made by the test code. However, there is an inherent race condition present. If another process binds to the port between the call to getFreeTcpPort and the subsequent attempt to bind, the binding will fail.
In order to mitigate this race condition, the initialiseWithFreeTcpPort function will repeatedly run the initialisation code until a successful binding has been achieved. Each time the initialisation code is run, a new free port is obtained via the getFreeTcpPort call.
Examples of this pattern being used can be seen in the HTTP network tests in the Balau test suite. The following is an extract from one of the tests in the FileServingHttpWebAppTest class.
```cpp
const unsigned short port = Testing::NetworkTesting::initialiseWithFreeTcpPort(
	[&server, documentRoot, testPortStart] () {
		auto endpoint = makeEndpoint(
			"127.0.0.1", Testing::NetworkTesting::getFreeTcpPort(testPortStart, 50)
		);

		auto clock = std::shared_ptr<System::Clock>(new System::SystemClock());

		server = std::make_shared<HttpServer>(
			clock, "BalauTest", endpoint, "FileHandler", 4, documentRoot
		);

		server->startAsync();
		return server->getPort();
	}
);
```
In the above code, the race condition occurs between the call to getFreeTcpPort and the call to construct the HTTP server.
All test group classes that are linked into the test application are automatically instantiated and registered with the test runner. The main function of the test application should thus only call one of the run methods of the test runner.
Example main function:
```cpp
#include <Balau/Testing/TestRunner.hpp>

using namespace Balau::Testing;

int main(int argc, char * argv[]) {
	// Run the tests with the execution model specified as the first element of argv.
	return TestRunner::run(argc, argv);
}
```
Selective running of test cases is achieved by providing a space/comma delimited list of globbed test names to the test runner's run method. If no list is provided, all test cases are run.
There are two globbing patterns available:
Glob character | Meaning |
---|---|
* | Match zero or more characters. |
? | Match exactly a single character. |
Multiple patterns can be specified on the command line, either with a single command line argument containing a comma delimited list, or via multiple command line arguments representing a space delimited list.
```
# Run the Balau test application with the worker processes execution model
# and specifying a subset of tests via a comma delimited list of patterns.
BalauTests -e WorkerProcesses Injector*,Environment*

# Run the Balau test application with the worker processes execution model
# and specifying a subset of tests via a space delimited list of patterns.
BalauTests -e WorkerProcesses Injector* Environment*
```
TODO update documentation to reflect command line parsing
The test execution model to run can be specified either as the first argument of the command line by calling the TestRunner::run(argc, argv) method, or explicitly by using other run method overloads, most commonly the TestRunner::run(model, argc, argv) overload.
```cpp
#include <Balau/Testing/TestRunner.hpp>

using namespace Balau::Testing;

int main(int argc, char * argv[]) {
	// Run the tests with the worker processes execution model.
	return TestRunner::run(WorkerProcesses, argc, argv);
}
```
The TestRunner::run(argc, argv) approach can be useful for teams that run a continuous integration server that requires a WorkerProcesses or ProcessPerTest execution model, whilst allowing developers to set a SingleThreaded or WorkerThreads execution model in their run configurations.
If the TestRunner::run(argc, argv) method is used and no command line arguments are supplied, the default SingleThreaded execution model is used.
The test runner has four execution models: SingleThreaded, WorkerThreads, WorkerProcesses, and ProcessPerTest.
Each execution model has advantages and disadvantages. Typically, the single process execution models would be used whilst running tests on the developer's workstation, and one of the multiple process execution models would be used when running on a continuous integration server.
The single threaded execution model executes each test in turn, within the test application main thread. This is the simplest of the execution models. If a test causes a segmentation fault, the test application will terminate.
Reasons to use the single threaded execution model include:
The disadvantages of the single threaded execution model are:
The multi-threaded execution model executes tests in parallel, in a fixed number of worker threads in the test application. Each worker thread executes by claiming the next available test and running it. As this execution model is also single process, the test application will terminate if a test causes a segmentation fault.
Reasons to use the multi-threaded execution model include:
The disadvantage of the multi-threaded execution model is that if a test causes a segmentation fault, the test application will terminate.
The worker process execution model executes tests in parallel, in a fixed number of child processes spawned by the test application. Each worker process executes by claiming the next available test and running it.
As this execution model is multiple process, the test application will not terminate if a test causes a segmentation fault. Instead, the child process will terminate and the parent process will spawn a replacement child process to continue testing.
Reasons to use the worker process execution model include:
The disadvantage of the worker process execution model is that breakpoints will not be hit in the child processes.
The process per test execution model executes tests in parallel, by forking a new child process for each test.
As this execution model is multiple process, the test application will not terminate if a test causes a segmentation fault. Instead, only the child process for the test causing the segmentation fault will terminate.
Reasons to use the process per test execution model include:
The disadvantages of the process per test execution model are:
The following test run timing information was obtained by running the Balau unit tests (2018.9.1 release) for each execution model. The CPU used in the test was an Intel i7-8550U (4 core with hyper-threading), with turbo turned off. The default concurrency of 8 threads/processes was used for the multi-threaded, worker process, and process per test execution models.
The best result of 10 runs was taken for each execution model. The timing values indicate clock time, thus they do not take into account context switches where a CPU core is executing other applications' background code. The total duration (test run clock time) is the sum of all the tests' execution times. For concurrent execution models, this is spread across the allocated cores. The average duration is the test run clock time total duration divided by the number of tests run. The total duration (application clock time) is the duration of the test application's main process.
In the Balau test suite, the logger tests are automatically disabled during multi-threaded and worker process runs. Consequently, these test groups were manually disabled for all of the runs in this performance evaluation. In addition, all test groups that involve network functionality were also disabled, in order to avoid network latency issues skewing the results.
```
----------------- SINGLE THREADED ------------------

Total duration (test run clock time)    = 1.1s
Average duration (test run clock time)  = 4.2ms
Total duration (application clock time) = 1.1s

***** ALL TESTS PASSED - 262 tests executed *****

------------------ MULTI-THREADED ------------------

Total duration (test run clock time)    = 1.5s
Average duration (test run clock time)  = 5.6ms
Total duration (application clock time) = 345.8ms

***** ALL TESTS PASSED - 262 tests executed *****

----------------- WORKER PROCESSES -----------------

Total duration (test run clock time)    = 1.5s
Average duration (test run clock time)  = 5.8ms
Total duration (application clock time) = 384.0ms

***** ALL TESTS PASSED - 262 tests executed *****

----------------- PROCESS PER TEST -----------------

Total duration (test run clock time)    = 1.2s
Average duration (test run clock time)  = 4.7ms
Total duration (application clock time) = 382.7ms

***** ALL TESTS PASSED - 262 tests executed *****
```
It can be seen that the single threaded executor has the lowest overhead per test, but the other executors nevertheless reduce the overall execution time significantly when run on a multi-core CPU. If an appreciable number of tests with significant I/O waits were present, the speed up would approach the number of allocated cores.
The Balau unit test execution times range from microseconds to tens of milliseconds, and 262 tests were run in the above timing runs. A complex C++ application could have one or two orders of magnitude more tests than this, and the average execution time of the tests would likely be greater. The overall execution times presented above could thus be several orders of magnitude larger in a real world scenario.
This section only discusses continuous integration via a CMake target.
In order to create CI jobs in tools such as Jenkins, the test application will require a CMake target to run. This can be achieved with the following configuration in the CMakeLists.txt file.
```cmake
##################### TEST RUNNER #####################

add_custom_target(
	RunTests ALL
	WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/bin
	COMMAND TestApp
)

add_dependencies(RunTests TestApp)
```
This custom target can also be used directly from the command line if required, by typing make RunTests.
In the CMake custom target definition, RunTests is the name of the target that will be specified in the CI jobs, and TestApp is the test application executable target. After adding the dependency declaration, running the RunTests target will first build the test application, then will execute it.
The test application will return 0 on success and 1 on failure. This will be picked up by the CI job runner in order to determine test run success.