TDD with Acceptance Tests and Unit Tests

Posted by Uncle Bob on 10/17/2007

Test Driven Development is one of the most important tenets of Agile Software Development. It is difficult to claim that you are Agile if you are not writing lots of automated test cases, and writing them before you write the code that makes them pass.

But there are two different kinds of automated tests recommended by the Agile disciplines: unit tests, which are written by programmers, for programmers, in a programming language; and acceptance tests, which are written by business people (and QA), for business people, in a high-level specification language (like FitNesse, www.fitnesse.org).

The question is, how should developers treat these two streams of tests? What is the process? Should they write their unit tests and production code first, and then try to get the acceptance tests to pass? Or should they get the acceptance tests to pass and then backfill with unit tests?

And besides, why do we need two streams of tests? Isn’t all that testing awfully redundant?

It’s true that the two streams of tests test the same things. Indeed, that’s the point. Unit tests are written by programmers to ensure that the code does what they intend it to do. Acceptance tests are written by business people (and QA) to make sure the code does what they intend it to do. The two together make sure that the business people and programmers intend the same thing.

Of course there’s also a difference in level. Unit tests reach deep into the code and test independent units. Indeed, programmers must go to great lengths to decouple the components of the system in order to test them independently. Therefore unit tests seldom exercise large integrated chunks of the system.
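To make the decoupling point concrete, here is a minimal sketch in Python. The `OrderProcessor` and its payment gateway are invented for illustration; the point is that the gateway is injected as a dependency, so the unit test can exercise the processor in complete isolation using a test double.

```python
from unittest.mock import Mock

class OrderProcessor:
    """Hypothetical unit under test; its collaborator (the payment
    gateway) is injected rather than constructed internally, which is
    exactly the decoupling that makes independent unit testing possible."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

def test_checkout_charges_the_gateway():
    gateway = Mock()                          # test double standing in for the real gateway
    gateway.charge.return_value = "receipt-1"
    processor = OrderProcessor(gateway)
    assert processor.checkout(100) == "receipt-1"
    gateway.charge.assert_called_once_with(100)

test_checkout_charges_the_gateway()
```

Note that no real payment system is touched; the test verifies only the unit’s own logic and its conversation with the collaborator.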

Acceptance tests, on the other hand, operate on much larger integrated chunks of the system. They typically drive the system from its inputs (or a point very close to its inputs) and verify operation from its outputs (or again, very close to its outputs). So, though the acceptance tests may be testing the same things as the unit tests, the execution pathways are very different.
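A minimal sketch of that idea, in the spirit of a FitNesse decision table: each row states an input and the expected output, and the test drives the system only from the outside. The `calculate_discount` function and the table rows are invented for illustration.

```python
def calculate_discount(order_total: float, loyal_customer: bool) -> float:
    """Stand-in for the whole integrated system under test."""
    rate = 0.10 if loyal_customer else 0.0
    if order_total >= 1000:
        rate += 0.05  # large orders earn an extra discount
    return round(order_total * rate, 2)

# Each row: (order_total, loyal_customer, expected_discount) --
# the shape a business person might fill in on a wiki table.
acceptance_table = [
    (100.0,  False, 0.0),
    (100.0,  True,  10.0),
    (1000.0, False, 50.0),
    (1000.0, True,  150.0),
]

for total, loyal, expected in acceptance_table:
    assert calculate_discount(total, loyal) == expected
```

Nothing in the table reaches into the system’s internals; it specifies behavior purely in terms of inputs and outputs, which is what lets business people read and write it.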

Process

Acceptance tests should be written at the start of each iteration. QA and business analysts should take the stories chosen during the planning meeting and turn them into automated acceptance tests written in FitNesse, Selenium, or some other appropriate automation tool.

The first few acceptance tests should arrive within a day of the planning meeting. More should arrive each day thereafter. They should all be complete by the midpoint of the iteration. If they aren’t, then some developers should change hats and help the business people finish writing the acceptance tests.

Using developers in this way is an automatic safety valve. If it happens too often, then we need to add more QA or BA resources. If it never happens, we may need to add more programmers.

Programmers use the acceptance tests as requirements. They read those tests to find out what their stories are really supposed to do.

Programmers start a story by executing the acceptance tests for that story, and noting what fails. Then they write unit tests that force them to write the code that will make some small portion of the acceptance tests pass. They keep running the acceptance tests to see how much of their story is working, and they keep adding unit tests and production code until all the acceptance tests pass.
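The end state of that double loop can be sketched on an invented “word count” story (all names here are hypothetical): the acceptance check states what the business asked for, while the unit tests pin down each small behavior the programmer added on the way there.

```python
def word_count(text: str) -> int:
    """Production code grown one unit test at a time."""
    return len(text.split())

# Unit tests: written first, one per small behavior.
assert word_count("") == 0                     # step 1: empty input
assert word_count("hello") == 1                # step 2: a single word
assert word_count("hello  agile world") == 3   # step 3: repeated spaces

# Acceptance-level check: the business expectation that marks the story done.
assert word_count("test driven development works") == 4
```

Each unit assertion was a small forcing function; the acceptance assertion is the running tally of how much of the story works.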

At the end of the iteration all the acceptance tests (and unit tests) are passing. There is nothing left for QA to do. There is no hand-off to QA to make sure the system does what it is supposed to. The acceptance tests already prove that the system is working.

This does not mean that QA does not put their hands on the keyboards and their eyes on the screen. They do! But they don’t follow manual test scripts! Rather, they perform exploratory testing. They get creative. They do what QA people are really good at—they find new and interesting ways to break the system. They uncover unspecified or under-specified areas of the system.

ASIDE: Manual testing is immoral. Not only is it high stress, tedious, and error prone; it’s just wrong to turn humans into machines. If you can write a script for a test procedure, then you can write a program to execute that procedure. That program will be cheaper, faster, and more accurate than a human, and will free the human to do what humans do best: create!

So, in short, the business specifies the system with automated acceptance tests. Programmers run those tests to see what unit tests need to be written. The unit tests force them to write production code that passes both tests. In the end, all the tests pass. In the middle of the iteration, QA shifts from writing automated tests to exploratory testing.




Comments


  1. Sebastian Kübeck 34 minutes later:

    There’s actually more to do for QA people than extending acceptance tests to expose bugs:

    Write Load Tests (and make sure that they’re run periodically) to make sure that the performance requirements are met. It’s also one of the few ways to expose concurrency issues.

    Write Security Tests: tests that mimic an intruder attacking potential security holes (e.g. injection, XSS, etc.).

  2. Christopher Gardner about 3 hours later:

    James Shore has an interesting article favoring Fit as an analysis and communication tool over an executable specification.

    http://www.jamesshore.com/Blog/Five-Ways-to-Misuse-Fit.html

  3. David Chelimsky about 4 hours later:

    I agree that manual testing as your sole source of testing is immoral.

    I think, however, that manual exploratory testing is a great way to expose holes in automated test suites.

    WDYT?

  4. unclebob about 5 hours later:

    Agreed. (As I said in the original article ;-)

  5. Dean Wampler about 5 hours later:

    I like to point out that the “redundant” coverage from unit and acceptance tests is like the security philosophy of defense in depth. You’re safer if you’re not relying on just one “thing”.

    That’s not the primary reason for doing the two kinds of testing, of course, but it is an important benefit.

    IMHO, the role of QA as the driver of the acceptance tests isn’t getting as much attention as it deserves. Too many organizations that are adopting the developer practices like test-driven unit tests still have old-style requirements documents and “passive” QA teams that only get involved at the end of the process. QA has always been the natural advocate between the customer’s requirements and the technology that implements them. QA-driven ATs really emphasize that role!

    Finally, a particular kind of load test worth mentioning is the “smoke test”, where the system is run under a simulated load, hopefully on a near-production configuration, for some multiple of days to see what melts down.

  6. David Chelimsky about 8 hours later:

    Bob – I did read that, but lost it in the shadows of “Manual testing is immoral,” which reads (to me) as all-encompassing.

    Thanks for the clarification.

  7. Peter Wood about 11 hours later:

    @Dean Wampler

    I take ‘smoke tests’ to be a basic, small, baseline set of tests which, if they can’t be run, indicate that no further tests need to be run until some serious issues are sorted out—e.g. the environment isn’t set up correctly.

  8. Ben Simo about 15 hours later:

    Bob,

    You had me until “Manual testing is immoral.” This is wrong, and it doesn’t jibe with the previous paragraph about exploratory testing.

    Scripted manual testing that is high stress and tedious may be immoral. But including no manual testing may also be immoral.

    People and machines have different strengths and weaknesses. Good manual testing does what people do better than computers. Good automated testing does what computers do better than people. Making people work like machines or expecting machines to think and create like people is wrong.

    Manual testing is error prone, but so is automation, because it is developed by us fallible humans. A human tester may make a mistake once and it impacts a single test execution. A human test coder may make a mistake once and it will impact all executions of that test until the mistake is recognized and fixed. Maybe we should practice TDD for developing our TDD tests, and TDD for the tests that test our tests, and … :P

  9. unclebob 1 day later:

    Ben,

    Correction accepted. Scripted manual testing is immoral.

  10. Gary Williams 2 days later:

    I have to disagree to some extent. In order for the business user to accept that the tests written actually reflect reality, the tests themselves must be tested. So automated tests aren’t, by themselves, sufficient proof that the application is correct and meets the requirements, especially if coded by the same developers. They do facilitate regression and, once the application has passed initial testing, make any retesting trivial. I have been bitten by developers making hidden assumptions different from the testers’ about what something means, resulting in a green test when things are still wrong.

    Of course, this would not be a case where there would be a script anyway, since presumably a new/changed feature would need a new/changed script.

  11. Dean Wampler 2 days later:

    @Peter,

    You’re right. The term “bake test” is a better term for the kind of test I was describing. “Smoke test” is a better term for the fast tests that find obvious problems quickly, as you said.

  12. Joe Ocampo 8 days later:

    >Programmers use the acceptance tests as requirements. They read those tests to find out what their stories are really supposed to do.

    I couldn’t agree with you more. We recently began the NBehave project to address this very issue. Its focus is to drive home the point that in order for a story to be accepted by the development group there must be accompanying acceptance tests.

    http://www.codeplex.com/NBehave

  13. Elisabeth Hendrickson 10 days later:

    Fabulous essay! I like Ben’s rephrase to “Scripted manual testing is immoral.” But I’d actually prefer to see, “Manual regression testing is immoral.” If scripted, it’s immoral for the originally stated reasons. If unscripted, it’s immoral because it’s not going to provide enough information, fast enough, to support Agility.

  14. Hans-Eric Grönlund 13 days later:

    Great post, the best I’ve read on the subject.

    In my experience it’s difficult to get business people to do programming, even in a high-level language such as FitNesse. Therefore, my development team usually helps in the process of designing the acceptance tests, then turns them into automated tests using the same framework as our unit tests. That process has lots of great side effects, and it’s our assurance of quality.

  15. Gary Murchison 20 days later:

    So what is the outcome for the unspecified/under-specified areas of the system that the “creative” tests expose? Should these be tackled (with new automated acceptance tests) within the same iteration or deferred until a later iteration?

  16. Diego Sacchetto 20 days later:

    One of the distinctions Uncle Bob raised is between unit and acceptance testing; both are automated tests. Manual testing falls into a completely different category and methodology. Agile promotes automation; after all, we are in the software business (which is itself an automation tool), so everything should be automated as much as possible. How well we provide full automation and full coverage for requirements (acceptance tests) and functional logic (unit tests) will determine how Agile we are. The more we craft the art of automation, the more we will replace manual testing.

    As humans we don’t like boring, repetitive tasks. Think how many times, on a straight road with a speed limit, you turn on your cruise control, simply because it is too boring to hold the accelerator without steering. Moreover, people in the robotics industry are trying to design self-driving cars whose purpose is to reduce car accidents by eliminating human error. I believe manual testing could be removed almost completely, no matter the application domain. People are investing a lot of time and energy in frameworks to test anything (in the business I work in, digital TV applications, we can automate the actions of a user using a remote control; pretty impressive, I would say).

    To achieve this goal we must first change the traditional meaning of testing. With Agile (TDD in particular), testing becomes a software development methodology. Traditionally a QA person didn’t need to know much about programming, but today, to write acceptance tests, you do need to know how to program, and often in the same language that the application is written in. If you follow Agile practices, though, the collaboration between QA and developers is a natural fact. Let’s not forget that the ultimate benefit of automation is most appreciated when we introduce the XP practice of continuous integration.

    I agree with Uncle Bob. As humans, it is more fascinating to find strategic ways to work smarter than to repeat manual tasks over and over. Keeping people self-motivated is the key, and manual testing certainly doesn’t do that.

  17. amyguita1@yahoo.com 11 months later:

    I have a question for you. Should developers regression-test their unit tests?

    Thank you.

  18. joe 11 months later:

    When it comes to specifying requirements as acceptance tests, does anyone even see a need for formal use case or requirements documents?

    To me, it seems like if you are truly attempting to adopt a leaner process, then creating the test cases should be enough of a specification.

    My company is trying to create formal use case documents that tend to be difficult to follow and use that as a specification for development. I think that creating the test cases would be a more valuable artifact. Creating the cases helps elicit requirements from the business, aids in analysis, delivers an automated checklist for developers to follow, builds up your regression suite and essentially creates an artifact that serves as a record of what was built.

    Of course, good analysis would go into the creation of the test cases, so that they accurately reflect what the business wants, but this analysis doesn’t necessarily need to be driven by formal use case documents.

    Thoughts?

  19. victor about 1 year later:

    Your immorality claim on manual software testing is absurd. Software creation can also be high stress and tedious, and is obviously error prone. Yet despite this you claim that it is what humans do best. Clearly those aren’t the reasons why manual testing is immoral, or software creation would in turn be immoral.

    The only unique value claim that you make is that it is wrong to turn humans into machines, but you have no justification for this. By this reasoning most manufactured goods are also created immorally since sewing clothes or crafting vehicles could also be done by machine but may not be due to cost.

    I realize that I am replying to a post that was made years ago and yet I was still directed here by google. I hope that others won’t read this statement and mechanically agree with your silly claim.

