Simple Test Runner

300 lines of Bash code. Take and use.

Born on a rainy Hertfordshire weekend, the Simple Test Runner is a balanced fuse of simplicity and power: the distilled essence of more than a decade of experience automating software development processes at the world's leading EDA companies and a top-tier investment bank.

The Runner is an ideal base for testing automation in any Bash-enabled Linux environment. The code base of about 300 lines is meant to direct, not limit, the user. A single file, tests.txt, serves as both the list of tests and the configuration.

Software Quality Assurance (QA)

Quality Assurance is the process that testing is a part of. A test is a single QA task with well-defined success criteria (e.g. the software build is a QA step validating that the software builds, and is thus the first step in any QA process). A test is only worth conducting if it positively impacts the QA process: the cost of running the test must be lower than the penalty of the potential loss from not running it. Once automated, a test amortizes well across the lifetime of the software, provided that maintaining the automation is robust enough not to become an overhead of its own.

Tests are typically subdivided into categories (unit, functional, regression, performance, etc.), but that hardly matters for automation. What does matter is the cost of automation versus that of manual testing. For a test to be reliably automated, it must be possible to consistently isolate the entity being tested (its input and environment) and to have a reliable success criterion capable of producing a binary yes/no answer. The Simple Test Runner treats software QA in exactly those terms.
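The two prerequisites above can be shown in a few lines of Bash. This is an illustration only, not runner code: the scratch directory, file names, and criterion are all made up.

```shell
# Illustrative: isolate a test run in its own scratch directory and reduce
# the outcome to a binary yes/no verdict.
work=$(mktemp -d)                             # isolated environment for the run
( cd "$work" && printf 'hello\n' > out.txt )  # the entity being tested
if grep -q '^hello$' "$work/out.txt"; then    # the success criterion
  verdict=yes
else
  verdict=no
fi
echo "$verdict"
```

Everything the test touches lives under $work, so repeated runs cannot interfere with each other, and the criterion collapses the output to a single yes/no.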

Abstraction Levels

The Simple Test Runner has three levels of testing abstraction and includes just enough support for all of them. These levels are (from abstract to specific):

  1. Abstract Test - run the test command and check its exit code
  2. Material Test - run a script that takes three commands (test setup, test execution, and test result analysis); all three must succeed (exit code 0) for the test to count as passed
  3. Specific Test - run the same script as above, but use it to analyze the test's file output; it expects the test command to produce files and needs a directory of "golden" files to compare the output with

Start with the simplest level that solves your immediate task and move to the next one when you need more.

Distribution Archive Contents

      +-code     # Bash Code, ~319 lines
      +-example  # is worth a thousand words
      +-LICENSE  # to guard ourselves from each other

Minimum Setup

  1. Put the directory code somewhere and treat it as read-only
  2. Create a file tests.txt in the directory where you run the tests or, preferably, create it somewhere else and symlink it into the directory where you are going to run the tests

If you change to the directory containing tests.txt, you should be able to run the script. If you run it as -n -g\*, it will display all tests listed in tests.txt.
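The symlink setup from step 2 can be sketched as below, using a scratch directory in place of real project paths (all paths here are illustrative):

```shell
# Illustrative setup: keep tests.txt in one place, symlink it into the
# directory the tests will actually run from.
base=$(mktemp -d)
mkdir -p "$base/config" "$base/rundir"
printf '# group  test  timeout  command\n' > "$base/config/tests.txt"
ln -s "$base/config/tests.txt" "$base/rundir/tests.txt"  # symlink into the run directory
cd "$base/rundir"   # run the tests from here
```

Keeping the real tests.txt outside the run directory lets it live under version control while the run directory stays disposable.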

First Level - Abstract Test

An abstract test only needs to be a shell executable, so that it can return an exit status.

The file tests.txt can look as the one below:

# [group name] [test name]            [timeout, sec.] [command to run]
  run          trivial_pass_test      30              /bin/true
  run          trivial_fail_test      30              /bin/false
  run          trivial_timedout_test  3               /bin/sleep 5
The runner runs the test's command line and fetches the exit code: if the code is 0 the test is PASSED, otherwise FAILED.
  bobah@europa> -g run  ;# -g: a group of tests to run, in this case "run"
  -I- run trivial_pass_test (pid=4813, pwd=run/trivial_pass_test/work)
  -I- trivial_pass_test - PASSED, 00:00:00 (0s)
  -I- run trivial_fail_test (pid=4821, pwd=run/trivial_fail_test/work)
  -I- trivial_fail_test - FAILED, rc=1, 00:00:01 (1s)
  -I- run trivial_timedout_test (pid=4831, pwd=run/trivial_timedout_test/work)
  -I- run trivial_timedout_test - TIMEOUT, terminated, 00:00:04 (4s)
  -I- total tests: 3, passed: 1, failed: 2
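The verdict logic in the transcript above can be sketched in a few lines of Bash. This is an illustration of the observable behavior, not the runner's actual code; it relies on coreutils timeout, which exits with status 124 when the time limit is hit:

```shell
# Hypothetical re-implementation of the PASSED/FAILED/TIMEOUT verdict.
run_abstract_test() {
  local timeout_sec=$1; shift         # $1 = timeout in seconds, rest = command
  timeout "$timeout_sec" "$@"
  local rc=$?
  if   [ "$rc" -eq 0 ];   then echo PASSED
  elif [ "$rc" -eq 124 ]; then echo TIMEOUT      # coreutils timeout's status
  else                         echo "FAILED, rc=$rc"
  fi
}

run_abstract_test 30 /bin/true    # PASSED
run_abstract_test 30 /bin/false   # FAILED, rc=1
```

The same three outcomes as in the transcript, driven entirely by the exit code.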

Second Level - Material Test

A material test is expected to need pre-run preparation and to produce analyzable results. For instance, the preparation can be fetching a person's e-mail address from the address book, the test execution can be sending that person a mail asking for a reply, and the test result analysis can be checking one's own mailbox for the reply.

If your test fits the prepare-execute-analyze model, you can use the Simple Test Runner's script, which needs you to provide commands for the three steps described above. The setup and analyzer commands default to /bin/true, so when configured as below it runs just like an Abstract Test, testing only the exit code of the test executable.

# [group name] [test name]                [timeout, sec.] [command to run]
  run          abstract_as_material_pass  30              ${BASE_DIR}/ -x /bin/true

The tests.txt below demonstrates all possibilities of the Material Test model:

run  material_pass           30  ${BASE_DIR}/ -x /bin/true
run  material_fail_setup     30  ${BASE_DIR}/ -s /bin/false -x /bin/true  -a /bin/true
run  material_fail_running   30  ${BASE_DIR}/ -s /bin/true  -x /bin/false -a /bin/true
run  material_fail_analyzis  30  ${BASE_DIR}/ -s /bin/true  -x /bin/true  -a /bin/false
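The contract behind those four lines can be sketched as follows. The -s/-x/-a flags mirror the tests.txt example above, but this function is a hypothetical stand-in, not the runner's actual script:

```shell
# Illustrative material test: setup (-s), execution (-x), and analysis (-a)
# must all exit 0 for the test to pass.
material_test() {
  local setup=/bin/true exec_cmd=/bin/true analyze=/bin/true opt OPTIND=1
  while getopts 's:x:a:' opt; do
    case $opt in
      s) setup=$OPTARG ;;     # test setup command
      x) exec_cmd=$OPTARG ;;  # test execution command
      a) analyze=$OPTARG ;;   # test result analyzer command
    esac
  done
  $setup && $exec_cmd && $analyze   # all three must exit 0 for a PASS
}

material_test -x /bin/true  && echo PASSED || echo FAILED   # PASSED
material_test -x /bin/false && echo PASSED || echo FAILED   # FAILED
```

Because unset steps default to /bin/true, providing only -x reproduces the Abstract Test behavior, exactly as the abstract_as_material_pass example shows.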

Third Level - Specific Test

A specific test is expected to produce something in the directory where it runs. The canonical file-based test output analysis is implemented by a script that is also part of the Simple Test Runner.

The script expects three parameters: a diff command (defaults to "diff -q"), a filter command (defaults to /bin/cat), and a directory with golden output to compare the current output with. The work done by the analyzer for each file in the output is schematically described below.

  current/outputfile.ext | filter > outputfile.ext.current \
                                                            --> diff_cmd ? PASS/FAIL
  golden/outputfile.ext  | filter > outputfile.ext.golden  /
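The per-file comparison in the diagram above can be sketched as below. The defaults (diff -q, /bin/cat) come from the description; the function name and the current/golden directory layout are assumptions for illustration:

```shell
# Illustrative analyzer: filter both copies of each golden file, then diff.
diff_cmd='diff -q'
filter=/bin/cat

analyze_output() {
  local golden_dir=$1 current_dir=$2 f base
  for f in "$golden_dir"/*; do
    base=$(basename "$f")
    $filter "$current_dir/$base" > "$current_dir/$base.current"
    $filter "$f"                 > "$current_dir/$base.golden"
    $diff_cmd "$current_dir/$base.current" "$current_dir/$base.golden" \
      || return 1   # any mismatch fails the whole test
  done
}
```

analyze_output golden current exits 0 (PASS) only if every filtered file in current/ matches its golden counterpart; the filter step lets you strip volatile content such as timestamps before the comparison.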

Download: str_v0.2.2.tgz (5.08 KB)