
Test suite and test case

A test suite provides different kinds of resources for running tests and delivering results in a proper way. Each test case belongs to a test suite, which provides common resources for the test cases defined below it. Usually, a test suite covers one or more test cases. Test cases may extend or overwrite test suite resources by adding additional resources or replacing existing ones. Test suites and test cases provide the following kinds of resources:

  • Tests - Implementation of one or more tests to be executed within the test suite
  • Test set - Input data required for running the test
  • Actions - Actions provided for running the test
  • Expected output - Expected output for the test

For each test run, the following resources are typically created:

  • Test output - Output data created by the test run
  • Reports - Log and result files

Within a test suite, a number of test cases, which refer to the test set of the test suite, may be executed by calling appropriate actions. A test defines the sequence of action calls. Each test run creates test output. The test output is compared with the expected output, and the result of the comparison is written to a result file.

Ideally, a separate location (directory) is defined for each element type. Depending on the kind of processing, other elements may be added to a test suite.

In order to mark a directory as a test suite, a file named suite has to be created in that directory.
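As a minimal sketch of such a layout (the data and expected directory names appear later in this description; the other names and the suite name MySuite are illustrative), a suite directory might be set up like this:

```shell
# Create one directory per resource type of the test suite.
mkdir -p MySuite/data MySuite/expected MySuite/actions MySuite/tests
# The empty marker file "suite" designates the directory as a test suite.
touch MySuite/suite
```
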

Test

A test is described as a sequence of actions to be executed. The execution of a test takes place within a test run for the test or for a test suite. In the ODABA test framework, tests are defined as run actions for a test case or a test suite.

Each test execution within a test run returns either true or false. The values may be named differently (e.g. success or failed, as in the example), but running a test always returns one of these two result values. When a test fails (false), its output did not match the expected test output. When testing an error situation, the expected output may contain an error message or code, and the test returns true when the expected error has been produced. In order to make failed tests easier to analyze, one may record additional information.

The result of each test is stored as result data for the test/test run. In addition, each test should write a protocol (log file) when being executed (e.g. recording start and stop time for the test).
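A minimal sketch of a single test execution under these conventions (the success/failed naming follows the example above; all file names and the trivial "test" command are illustrative):

```shell
# Stand-in for prepared expected output.
echo "hello" > expected.txt

# Protocol: record the start time in the log file.
echo "start: $(date)" >> test.log

# Stand-in for the real test run producing test output.
echo "hello" > output.txt

# Compare test output with expected output and store the result.
if cmp -s output.txt expected.txt; then
  result=success
else
  result=failed
fi
echo "$result" > result.txt

# Protocol: record the stop time.
echo "stop: $(date)" >> test.log
```
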

Test set

A test set provides the data for running all test cases defined in a test suite. Test data defined for the test suite need not be complete, i.e. it may be extended or replaced by each single test case. Test sets are mainly a means of reducing the amount of test data, i.e. reducing the costs of test data maintenance. The local test set for a test case or test suite is defined in the data directory.

The final test set for a test is the combination of the test sets of all test suites in the test suite hierarchy, where files in test sets at lower levels of the hierarchy overwrite files at higher test suite levels.
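The overwrite rule can be sketched by copying test set files level by level into the work area, lower levels last (the directory names suite, case1, and workarea as well as the file names are illustrative):

```shell
# Illustrative two-level hierarchy: a suite-level test set and a
# test-case-level test set that redefines one of the files.
mkdir -p suite/data suite/case1/data workarea
echo "suite version" > suite/data/config.txt
echo "suite only"    > suite/data/common.txt
echo "case version"  > suite/case1/data/config.txt

# Copy higher-level test set data first ...
cp -r suite/data/. workarea/
# ... then the test case data, which overwrites files of the same name.
cp -r suite/case1/data/. workarea/
```

After these steps, workarea contains the case-level config.txt and the suite-level common.txt.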

Before running a test, test resources are copied to a test work area, where the test will be executed.

Actions within a test suite

In order to run tests within a test suite, several actions may be defined. It does not matter how actions are implemented, but they should be as few and as simple as possible. There is a typical scenario, which is referred to in the example and which is quite sufficient for many tests. This scenario consists of the following actions:

  • preprocessing - Run special (sub)actions for preparing test case data.
  • run - Run the required test functions.
  • postprocessing - Run special actions after running the test. Typically, this action compares test results with expected test results.
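The three-step scenario can be sketched as a single shell script (the function bodies, file names, and the trivial "test" are illustrative; in practice each action could equally be a separate bash or cmd file):

```shell
# Stand-in for prepared expected data.
echo "42" > expected-template.txt

# preprocessing - prepare test case data.
preprocessing() { cp expected-template.txt expected.txt; }

# run - execute the required test functions (here: produce some output).
run() { echo "42" > output.txt; }

# postprocessing - compare test results with expected test results.
postprocessing() {
  if cmp -s output.txt expected.txt; then echo success; else echo failed; fi
}

preprocessing
run
postprocessing > result.txt
```
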

Actions are implemented in order to be executed within a single test. Actions do not manage test runs for one or more test cases. For this purpose, the test framework provides appropriate techniques (command line procedures). The example provides a number of procedures (bash or cmd files) for managing test suites under Linux and MS Windows.

Since actions defined within the test suite may be overloaded, they are called dynamic actions. Actions or procedures for performing test framework tasks are called global actions.
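A global procedure of this kind might simply iterate over the test cases of a suite and invoke each case's run action (the layout with one run.sh per test case directory is an assumption for illustration, not the framework's prescribed layout):

```shell
# Illustrative suite with two test cases, each providing a run action.
mkdir -p suite/case1 suite/case2
echo 'echo success > result.txt' > suite/case1/run.sh
echo 'echo success > result.txt' > suite/case2/run.sh

# Global procedure: execute the run action of every test case,
# each in its own directory.
for tc in suite/*/; do
  ( cd "$tc" && sh run.sh )
done
```
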

Expected output

Expected output describes the expected result of an executed test. This may be the value returned from a program or one or more output files created by the test. Typically, expected data is compared with the output data produced by the test run. When the output and expected data are the same, the test returns true, otherwise false. Locally defined expected output is placed in the expected directory.

Often, comparison is not that simple, since it may require removing disturbing parts from the test output (e.g. the creation time stamp of an output file). This does not change the principles used for comparing test output, only the evaluation techniques.
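As a sketch, a disturbing time stamp line might be filtered out before the comparison (the file contents and the "created:" line format are illustrative):

```shell
# Test output containing a creation time stamp, which differs per run.
printf 'created: 2024-01-01 12:00\nvalue: 42\n' > output.txt
# Expected output, defined without the disturbing time stamp line.
printf 'value: 42\n' > expected.txt

# Remove the disturbing part before comparing.
grep -v '^created:' output.txt > output.filtered
if cmp -s output.filtered expected.txt; then echo success; else echo failed; fi
```
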

Similar to test sets (test data), expected data is inherited along the test suite hierarchy, i.e. common expected test data may be provided in higher-level test suites.

Test output

Where test output is stored generally depends on the component or unit to be tested. Test output may be a return code, but also a collection of files or any other kind of electronically readable output. As soon as test output is not machine-readable (e.g. a test may produce a beep), running automatic tests becomes more difficult.

When running tests in a work area, output will be created in the work area. In order to preserve more detailed information about a test run, the relevant part of the output may be copied to a location identified by the test run and the test or test suite identifier.

Results

Results are created by test runs and are stored at test run/test level. Usually, there is a result file containing the result (true or false) for each test executed. Moreover, a summary file may be provided for the test run containing, e.g., start and stop times for each test, duration, or other relevant information.
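A run summary of this kind could be assembled by collecting the per-test result files (the run directory layout and summary format are illustrative assumptions):

```shell
# Illustrative test run with one result file per executed test.
mkdir -p run/case1 run/case2
echo success > run/case1/result.txt
echo failed  > run/case2/result.txt

# Collect every per-test result into one summary file for the run.
for r in run/*/result.txt; do
  printf '%s: %s\n' "$(dirname "$r")" "$(cat "$r")"
done > summary.txt
```
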

Result and log file content refers to common test run information that does not depend on the component or its properties.