01.01 - Running a Test Case / a Test Campaign


Like manual tests, automated tests generally begin with a setup step that establishes the prerequisites before the test steps are executed. Nevertheless, the way this is done differs slightly between manual and automated tests.

In a manual test, the acceptance tester often needs to use the application under test to set up the prerequisites of the test. Take the example of a Man/Machine Interface that allows user accounts to be created and consulted: before the consultation function can be tested, a user account must be created.
This method has a major drawback: the qualification of one function depends directly on the correct functioning of another. In the previous example, if the creation function has a blocking defect, the consultation function cannot be tested.

Unlike manual tests, automated tests allow the prerequisites of a test case to be set up without going through the application under test. In the previous example, the account can be created directly in the database before it is consulted through the application. The consultation function can thus be tested even if the creation function does not work.
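As a minimal sketch of this idea (the `consult_account` function and the `accounts` table are hypothetical, standing in for the application under test), the prerequisite account is inserted straight into the database instead of going through the creation function:

```python
import sqlite3

# Hypothetical consultation function of the application under test:
# it only reads accounts, it never creates them.
def consult_account(conn, login):
    row = conn.execute(
        "SELECT login, email FROM accounts WHERE login = ?", (login,)
    ).fetchone()
    return {"login": row[0], "email": row[1]} if row else None

# Prerequisite setup: insert the account directly in the database,
# bypassing the (possibly broken) creation function of the application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (login TEXT PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO accounts VALUES ('jdoe', 'jdoe@example.com')")

# The consultation function can now be tested in isolation.
account = consult_account(conn, "jdoe")
assert account == {"login": "jdoe", "email": "jdoe@example.com"}
```

Even if `create_account` in the real application is broken, this test still exercises consultation.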

Test Steps

The progress of the test steps is similar for manual and automated tests.
For each test step, the acceptance tester or the automaton interacts with the SUT (System Under Test) and compares the obtained result with the expected result.
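In code, a single automated test step reduces to this pattern (the `add` function is a hypothetical stand-in for the SUT operation being exercised):

```python
# Stand-in for the operation of the SUT exercised by this step.
def add(a, b):
    return a + b

# One test step: stimulate the SUT, then compare obtained vs expected.
obtained = add(2, 3)
expected = 5
assert obtained == expected, f"obtained {obtained}, expected {expected}"
```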

Post-conditions Checking

In some test cases, executing the test steps is not enough to verify that the SUT behaves correctly. The state of the system after the test steps must also be verified: these are the execution post-conditions. Most of the time this consists in checking persistent test data in a database or in a file.

During a manual test, post-conditions are often difficult to verify. Just as for setting up prerequisites, the acceptance tester must use the application under test. In the previous example, the only way for the acceptance tester to verify that the account was created is to use the tested Man/Machine Interface.

In an automated test, post-conditions can be verified independently of the tested application. Account creation is verified by querying the database directly, instead of using the Man/Machine Interface to consult the account. It is thus possible to test the creation function even if the consultation function does not work.
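The mirror image of the setup example above: here the creation function is exercised through the application code (again, `create_account` and the `accounts` table are hypothetical names), and the post-condition is checked by querying the database directly:

```python
import sqlite3

# Hypothetical creation function of the application under test.
def create_account(conn, login, email):
    conn.execute("INSERT INTO accounts VALUES (?, ?)", (login, email))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (login TEXT PRIMARY KEY, email TEXT)")

# Test step: exercise the creation function through the application code.
create_account(conn, "jdoe", "jdoe@example.com")

# Post-condition check: query the database directly, instead of using
# the application's consultation function.
row = conn.execute(
    "SELECT email FROM accounts WHERE login = 'jdoe'"
).fetchone()
assert row == ("jdoe@example.com",)
```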

In this way, automated tests never use the tested application to verify post-conditions.

Cleaning Up

In some cases, the test can also include a cleanup step, after the post-conditions have been verified. It ensures that the tested system is reset before the execution of the next test case. This step can be omitted when the prerequisite setup step is enough to guarantee the state of the SUT. When this step exists, it is executed whatever the test result (success, failure, error).
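"Executed whatever the test result" maps naturally onto a `try`/`finally` block, as in this sketch (the `log` list is just a way to observe that cleanup ran):

```python
log = []

def cleanup():
    # Reset the SUT so the next test case starts from a known state.
    log.append("cleaned")

def run(steps):
    try:
        steps()
        return "success"
    finally:
        cleanup()   # always executed: success, failure or error

run(lambda: None)          # passing test: cleanup runs
try:
    run(lambda: 1 / 0)     # erroring test: cleanup still runs
except ZeroDivisionError:
    pass
assert log == ["cleaned", "cleaned"]
```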

Results Storage

The result of each test case is saved after its execution.
Three kinds of results are possible for an automated test:
  • Success
  • Failure: an assertion step failed (the obtained result differs from the expected result)
  • Error: an error occurred during the test execution

In the last two cases, the saved result includes a short explanatory message that makes it possible to identify where the test stopped and, if possible, the reason for the failure.
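One way to sketch this three-way distinction in Python (an assumption about implementation, not a prescribed design) is to map an assertion failure to "failed" and any other exception to "error", keeping the exception message in both cases:

```python
# Record a (status, message) pair for each executed test case.
def run_and_record(test):
    try:
        test()
        return ("success", "")
    except AssertionError as exc:          # an assertion step failed
        return ("failed", str(exc))
    except Exception as exc:               # anything else went wrong
        return ("error", f"{type(exc).__name__}: {exc}")

assert run_and_record(lambda: None) == ("success", "")

def failing():
    assert 2 + 2 == 5, "obtained 4, expected 5"

def erroring():
    raise ValueError("database unreachable")

assert run_and_record(failing) == ("failed", "obtained 4, expected 5")
assert run_and_record(erroring) == ("error", "ValueError: database unreachable")
```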

Test Campaign

Some test preconditions are common to all test cases and do not need to be re-established between each test. Those conditions are set up once and for all at the beginning of a campaign. After that, all the test cases are executed. Once the campaign has run, it can be necessary to clean up the test environment (cleaning up the database, stopping the server programs needed for the tests, ...).
At the end of a campaign execution, an execution report is created from the test results. This report describes the result of each test case: success, failure, or error, with an explanatory message in the last two cases.
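The whole campaign life cycle described above can be sketched as one function (the function and test-case names are illustrative, not a real framework API): shared setup once, every case executed and recorded, cleanup guaranteed, and the collected results returned as raw material for the report:

```python
def run_campaign(setup, test_cases, cleanup):
    """Run all cases after a single shared setup; always clean up."""
    results = {}
    setup()                          # common preconditions, set up once
    try:
        for name, test in test_cases:
            try:
                test()
                results[name] = ("success", "")
            except AssertionError as exc:
                results[name] = ("failed", str(exc))
            except Exception as exc:
                results[name] = ("error", repr(exc))
    finally:
        cleanup()                    # e.g. purge the database, stop servers
    return results                   # one (status, message) entry per case

def consultation_ok():
    pass

def creation_broken():
    raise RuntimeError("server down")

report = run_campaign(
    setup=lambda: None,
    test_cases=[("consultation", consultation_ok),
                ("creation", creation_broken)],
    cleanup=lambda: None,
)
```

Note that one erroring case does not abort the campaign: the loop records the result and moves on to the next case.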

Here is a figure showing the different steps of an automated test campaign execution: