In a manual test, the acceptance tester often needs to use the tested application itself to set up the prerequisites of the test. Take the example of a Man/Machine Interface used to create and consult user accounts: before the consultation function can be tested, a user account must be created.
This method has a major drawback: the qualification of one function depends directly on the correct functioning of another. In the previous example, if the creation function has a blocking defect, the consultation function cannot be tested.
Unlike manual tests, automated tests make it possible to set up the prerequisites of a test case without going through the tested application. In the previous example, the account can be created directly in the database before consulting it through the application. The consultation function can therefore be tested even if the creation function does not work.
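This idea can be illustrated with a minimal sketch in Python. The table schema, the `consult_account` function, and the account data are all hypothetical stand-ins, not taken from the original text; the point is only that the prerequisite is inserted directly into the database, bypassing the application's creation function.

```python
import sqlite3

# Hypothetical schema for the example; an in-memory database stands in
# for the application's real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (login TEXT PRIMARY KEY, name TEXT)")

# Prerequisite: create the account directly in the database, without
# going through the application's creation function.
conn.execute("INSERT INTO accounts VALUES (?, ?)", ("jdoe", "John Doe"))
conn.commit()

def consult_account(db, login):
    """Hypothetical stand-in for the application's consultation function."""
    row = db.execute(
        "SELECT name FROM accounts WHERE login = ?", (login,)
    ).fetchone()
    return row[0] if row else None

# The consultation function can now be tested even if the creation
# function is broken.
assert consult_account(conn, "jdoe") == "John Doe"
```

The same pattern applies whatever the real data store is: the test fixture writes the prerequisite state directly, so a defect in one application function cannot block the qualification of another.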
At each test step, the acceptance tester or the automation tool interacts with the SUT (System Under Test) and compares the obtained result with the expected result.
During a manual test, postconditions are often difficult to verify. As with setting up prerequisites, the acceptance tester must use the tested application. In the previous example, the only way for the acceptance tester to verify the account creation is to use the tested Man/Machine Interface.
In an automated test, postconditions can be verified independently of the tested application. The account creation is verified by consulting the database directly, instead of using the Man/Machine Interface to consult the account. It is therefore possible to test the creation function even if the consultation function does not work.
In this way, automated tests never need to use the tested application to verify postconditions.
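The symmetric case can be sketched the same way. Here `create_account` is a hypothetical stand-in for the creation function under test; the postcondition is checked with a direct SQL query rather than through the application's consultation function.

```python
import sqlite3

# Hypothetical schema, as in the previous sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (login TEXT PRIMARY KEY, name TEXT)")

def create_account(db, login, name):
    """Hypothetical stand-in for the creation function under test."""
    db.execute("INSERT INTO accounts VALUES (?, ?)", (login, name))
    db.commit()

# Exercise the creation function through the application...
create_account(conn, "jdoe", "John Doe")

# ...but verify the postcondition directly in the database, without
# using the consultation function.
row = conn.execute(
    "SELECT name FROM accounts WHERE login = ?", ("jdoe",)
).fetchone()
assert row is not None and row[0] == "John Doe"
```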
Three kinds of results are possible for an automated test: success, failure, or error.
In the last two cases, the saved result includes a short explanatory message that identifies where the test stopped and, if possible, the cause of the failure.
At the end of an execution campaign, an execution report is created from the test results. This report describes the result of each test case: success, failure, or error, with an explanatory message in the last two cases.
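A minimal sketch of this classification and reporting step, assuming the common convention that a failed comparison with the expected result is a "failure" while any other interruption of the test is an "error" (the runner and the sample tests are hypothetical):

```python
def run_test(name, test_func):
    """Run one test case; classify the result as success, failure, or error."""
    try:
        test_func()
        return (name, "success", "")
    except AssertionError as exc:
        # The test ran, but an obtained result did not match the expected one.
        return (name, "failure", f"assertion failed: {exc}")
    except Exception as exc:
        # The test could not run to completion (crash, missing resource...).
        return (name, "error", f"{type(exc).__name__}: {exc}")

def execution_report(results):
    """One report line per test case, with the message in the last two cases."""
    return "\n".join(
        name + ": " + verdict + (f" ({msg})" if msg else "")
        for name, verdict, msg in results
    )

# Hypothetical test cases exercising the three kinds of results.
def passing_test():
    assert 1 + 1 == 2

def failing_test():
    assert 1 + 1 == 3, "obtained 2, expected 3"

def crashing_test():
    raise RuntimeError("database unreachable")

results = [run_test("t_pass", passing_test),
           run_test("t_fail", failing_test),
           run_test("t_crash", crashing_test)]
print(execution_report(results))
```

Running this prints one line per test case, with the explanatory message attached to the failure and error lines.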
Here is a figure showing the different steps of the execution of an automated test campaign: