Introduction

One of the core principles of software development is to get continuous feedback about the health of the product and the project. The absence of feedback forces assumptions, so continuous feedback is needed to know the exact status. The correctness of the software under development is one of the most important pieces of feedback. Manually testing the software is the best way to get this feedback [1]. There is nothing more reliable than a good tester using the software and diagnosing problems. But complete reliance on manual testing can create certain issues.

Slow

Assume a developer commits multiple times during the day. They are in the best position to correct a mistake while they are still working on the code. The longer this feedback is delayed, the longer the fix takes because the context is lost.

A team frequently releases software to production. While the new features should work as expected, the existing ones should continue to do so. This means that as the software grows older, it has more features to test.

In both the cases mentioned above, manual testing cannot be solely relied on, for reasons of both cost and feasibility.

Diminishing returns

As software matures and becomes more complex, most of its parts stabilize because the corresponding modules of code stop changing. A system with good architecture allows mental partitioning of the whole, so one can correlate which parts of the system need testing for a given change; this can be drilled down to specific test cases as well. In an ideal world, the amount of testing required would be proportional to the number of modules changed. In practice, one doesn't always get such architectures, and testers are (and need to be) pessimistic. In my experience, most mature systems see a lot more testing than the number of defects found would justify.

Not the best use of smart people

Testing the same thing again and again gets boring for manual testers. This severely affects the quality of testing and demotivates people.

Automation of testing

Automated functional testing can assist[2] manual testing in solving some of the above problems. Automated tests, from unit to functional and everything in between, are used extensively in agile projects. The vocabulary for classes of automated tests is extremely rich and confusing. Here is a quick list of terms I have seen used at my workplace: functional, smoke, unit, integration, system, acceptance, component, black box, white box, end-to-end, regression, and so on. We are interested here in two classifications of automated tests.

Reach of the test

Automated tests can have knowledge of the code base or be completely agnostic of it: white box and black box tests respectively. In most agile projects there is a wide spectrum between testing a single method in a class and testing the whole application through its user or machine interface. Let's take the example of a web application.

For the sake of discussion, let's consider a system whose logical layers look like the following:

This application has four tiers, and some tiers have multiple layers performing different roles. Each layer consists of programs. So let's look at some of the ways in which this application can be tested using automated tests.

  1. Unit-level tests for a method in a class.[3]
  2. Tests verifying the mapping of domain objects to the database. These tests hit the database at runtime.
  3. Tests calling the web service methods through the client, possibly aimed at testing web service operations along with their interaction with the database.
  4. JavaScript unit tests.
  5. HttpUnit-style tests run against the web presentation layer by issuing HTTP requests and analyzing the responses.
  6. Selenium-based tests exercising the entire application through a web browser.
  7. Web service tests which stub out the external services.

While 1 and 4 are unit tests and 6 is a functional test, 2, 3, 5 and 7 are neither. We will use the term integration test (not really the best name, but quite common) for these kinds of tests: they test a part of the application in isolation from the other parts.
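To make the unit/integration distinction concrete, here is a minimal sketch in Python. All names (`PriceCalculator`, `ExchangeRateService`) and the exchange rates are invented for illustration; the real system would use its own language and collaborators:

```python
class ExchangeRateService:
    """In the real system, rate() would call an external web service."""
    def rate(self, currency):
        raise NotImplementedError("network call in production")

class StubExchangeRateService(ExchangeRateService):
    """Stub that replaces the external service with canned data."""
    def rate(self, currency):
        return {"EUR": 0.9, "INR": 83.0}[currency]

class PriceCalculator:
    def __init__(self, rates):
        self.rates = rates

    @staticmethod
    def apply_discount(amount, percent):
        # Pure logic with no collaborators: unit-testable in isolation.
        return round(amount * (1 - percent / 100.0), 2)

    def convert(self, amount_usd, currency):
        # Depends on a collaborator; in production this hits the network.
        return round(amount_usd * self.rates.rate(currency), 2)

# A unit test (kind 1): exercises pure logic, touches no collaborators.
assert PriceCalculator.apply_discount(100, 10) == 90.0

# An integration-style test (kind 7): exercises the class together with a
# stub standing in for the external service, in isolation from the network.
calc = PriceCalculator(StubExchangeRateService())
assert calc.convert(100, "EUR") == 90.0
```

The point of the stub is that the test still covers real application code (`convert`) while the part outside our control is isolated away.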

When the test is run

The following are the points at which automated tests are run:

  1. Developers run them during development
  2. Developers run them before committing
  3. The continuous integration server runs them after a commit
  4. Automated tools run them after a successful continuous integration build

Typically, unit tests and integration tests are run as part of the continuous integration build. The functional tests that do figure in the continuous integration build are called smoke tests; the others are regression tests. There will be more detail on these classifications later.
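A commit build can be sketched as a sequence of fail-fast stages. The stage names and the `lambda: True` placeholders below are hypothetical; a real build would shell out to the project's actual test runners:

```python
def run_stage(name, runner, log):
    """Run one build stage; abort the build on the first failure."""
    log.append(name)          # record the stage for the build summary
    if not runner():          # fail fast: later stages never start
        raise SystemExit(f"commit build failed at: {name}")

stages_run = []
run_stage("unit tests",        lambda: True, stages_run)  # fast, every commit
run_stage("integration tests", lambda: True, stages_run)  # hit database/stubs
run_stage("smoke tests",       lambda: True, stages_run)  # thin functional slice
# Full regression suites run later, outside the commit build.
```

Ordering cheap stages first keeps the feedback loop short: a broken unit test fails the build in seconds, before the slower suites start.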

[1] Correctness of software should not be confused with its usefulness. Useful but imperfect software is any day better than defect-free software that doesn't do what the user wants. Manual testing ensures correctness more than it ensures usefulness. Getting users involved early in the software development cycle is a practice that is catching on.

[2] I want to stress here that test automation should not be adopted to replace manual testing. In my experience, manual testing is essential for any complex and important software, and I have never seen automation replace it.

[3] This does not include tests that intend to test a method in a class but, because of bad dependency management, end up loading the many classes that the class under test depends on.