The development of automated tests broadly follows the same process as that of manual tests. Nevertheless, automation has specific characteristics that condition the way automated tests are designed and implemented.
Writing complex tests increases the risk of error, and therefore of false positives (a test fails even though the application under test is not at fault).
In a manual acceptance test, it is common to follow long test procedures that verify many functions of the tested system within a single scenario.
This is necessary because of the specific constraints of manual acceptance testing: the acceptance tester has no choice but to use the tested application itself to set up pre-requisites and verify post-conditions. Several system functions are thus exercised by a single test.
Automated tests make it possible to depart from these constraints, because pre-requisites can be set up and verifications performed without going through the tested application. Each automated test can therefore exercise a single function of the SUT.
This approach has many advantages:
The pre-requisite set-up and environment clean-up steps must ensure that test cases are strictly independent of one another. The execution of a test case should never depend on the result of the previous test case.
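A minimal sketch of this independence rule, assuming an in-memory `database` dict as a stand-in for the SUT's data store (a hypothetical example, not a real framework): each test builds its own pre-requisites and cleans up after itself, so execution order never matters.

```python
# Hypothetical in-memory store standing in for the SUT's data.
database = {}

def setup_env():
    """Set-up step: create the pre-requisites from scratch."""
    database.clear()
    database["user"] = {"name": "alice", "active": True}

def cleanup_env():
    """Clean-up step: leave the environment exactly as it was found."""
    database.clear()

def test_deactivate_user():
    setup_env()
    try:
        database["user"]["active"] = False          # exercise one function only
        assert database["user"]["active"] is False  # verify the post-condition
    finally:
        cleanup_env()

def test_rename_user():
    setup_env()
    try:
        database["user"]["name"] = "bob"
        assert database["user"]["name"] == "bob"
    finally:
        cleanup_env()

# Order does not matter: each test starts from the same known state
# and leaves the environment clean.
test_rename_user()
test_deactivate_user()
```

In a real project the same pattern is typically expressed with the test framework's fixtures (e.g. `setUp`/`tearDown` or pytest fixtures) rather than explicit calls.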
This rule is essential for the following reasons:
An automated test must be replayable as many times as necessary, producing the same result each time.
To make this possible, the simplest solution is to use identical data from one execution to the next. This is particularly true for non-regression tests, which are valid only if they are executed under strictly identical conditions. The environment set-up and clean-up steps make this possible.
There are two exceptions to the previous rule. Some test data cannot be determined a priori because it depends on the context in which the test case is executed. Such data includes dates and data generated by the application.

1. DATES
Any data set containing dates is subject to expiration. For example, a contract that was active when the tests were written may expire after a certain period of time. This can cause the tests that use this data set to fail.
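One common way to make a dated data set immune to expiration (a sketch, not necessarily one of the strategies the author has in mind) is to compute all dates relative to the execution date rather than hard-coding them:

```python
from datetime import date, timedelta

def contract_dates(today=None):
    """Build a contract validity window relative to the run date,
    so the test data can never expire. `today` is overridable for testing."""
    today = today or date.today()
    return {
        "start": today - timedelta(days=30),   # contract began a month ago
        "end": today + timedelta(days=365),    # still valid for another year
    }

contract = contract_dates()
# The contract is active at execution time, whenever the test runs.
assert contract["start"] < date.today() <= contract["end"]
```

The alternative family of strategies freezes or mocks the system clock so the SUT itself always sees the same fixed date.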
To handle this problem, two strategies are possible:
2. DATA GENERATED BY THE APPLICATION

Some of the data generated by the tested application cannot be determined a priori: for example, identifiers or timestamps generated at execution time. It sometimes happens that such data, produced as the output of one test case, must be used as the input of the next. It is therefore necessary to store it so it can be reused later.
One of the main obstacles to test automation is the need to maintain the tests. This is why automation should target stable functions of the tested system, which are unlikely to evolve much.
Despite these precautions, evolutions of the SUT will eventually make maintenance necessary. These evolutions must therefore be anticipated when the tests are implemented, in order to minimize the maintenance effort.
1. TEST DATA CENTRALIZATION
Sometimes, because of an evolution of the data model for example, a test case's data must be reworked.
To minimize the maintenance effort, a test's data must be centralized. Concretely, this means that the test's data is replaced by parameters whose values are saved in one or several parameter files.
Thus, when a test case must be reworked, only these parameter files are modified, not the entire set of test cases.
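A minimal sketch of this centralization, assuming a JSON parameter file (the file name, keys, and values are all illustrative): the tests read from the shared file, so a data-model change is corrected in one place.

```python
import json
import os
import tempfile

# Hypothetical shared parameter file used by every test case.
params = {"customer_name": "alice", "contract_type": "premium"}

path = os.path.join(tempfile.mkdtemp(), "params.json")
with open(path, "w") as f:
    json.dump(params, f)

def load_params(params_path=path):
    """All tests fetch their data through this single entry point."""
    with open(params_path) as f:
        return json.load(f)

def test_contract_type():
    data = load_params()                       # no hard-coded test data
    assert data["contract_type"] == "premium"

def test_customer_name():
    data = load_params()
    assert data["customer_name"] == "alice"

test_contract_type()
test_customer_name()
```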
The steps common to several test cases must be shared, so that if a modification of the SUT affects a step common to several test cases, the correction is made in only one place. This implies:
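Factoring a common step into a shared helper can be sketched as follows, with `login` as a hypothetical shared step (name and signature are assumptions): if the SUT's login flow changes, only this one function is updated, not every test that uses it.

```python
def login(user, password):
    """Shared common step: hypothetical login against the SUT.
    Any change to the login flow is corrected here, once."""
    if password != "secret":
        raise ValueError("bad credentials")
    return {"user": user, "session": f"session-{user}"}

def test_profile_page():
    session = login("alice", "secret")   # reused common step
    assert session["user"] == "alice"

def test_settings_page():
    session = login("bob", "secret")     # same shared step, different test
    assert session["session"] == "session-bob"

test_profile_page()
test_settings_page()
```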