Software Models

Introduction 

V-Model


Many of the process models currently in use can be related to one another through the V-model, where the “V” describes the graphical arrangement of the individual phases. The “V” also stands for verification and validation.

The model is very simple and easy to understand. By ordering the activities in a time sequence and across levels of abstraction, the connection between development and test activities becomes clear. Activities lying opposite one another complement each other, i.e. the development phases serve as the basis for the corresponding test activities. For example, the system test is carried out on the basis of the results of the specification phase. A coarse view of the model gives the impression that test activities only start after the implementation; however, the descriptions of the individual activities usually list the preparatory work. For example, the test plan and test strategy should be worked out immediately after the definition of the requirements. Nevertheless, the model can contribute very well to the structuring of the software development process.

The disadvantage of the model is the coarse division into constructive work (including the implementation) on the left-hand side of the “V” and the more destructive tasks on the right-hand side. It can also give the impression that a finished product can be delivered straight after the implementation phase: no provision is made for planned-in defect removal and regression testing.


Waterfall-Model



One of the first models for software development was the so-called waterfall-model. The individual phases, i.e. activities, defined in it can be found in nearly all models proposed since. The model stipulated that each activity in the software development must be completed before the next can begin; a return in the development process was only possible to the immediately preceding phase. In the waterfall-model, testing directly follows the implementation. The model thereby suggested that test activities could only be started after the implementation; preparatory tasks for testing were not made clear. A further disadvantage is that testing, as the last activity before release, can relatively easily be shortened or omitted altogether, which in practice is unfortunately all too common. In this model, the expense of removing the faults and defects found is only recognizable through a return to the implementation phase.


W-Model



From the view of testing, all of the models presented previously are deficient in various ways:
• test activities only start after the implementation
• the connection between the various test stages and the basis for each test is not clear
• the tight link between test, debug and change tasks during the test phase is not clear
In the following, the W-model is presented. It is based on the general V-model and removes the disadvantages mentioned above.

The test process usually receives too little attention in the models presented, and testing usually appears as an unattractive task to be carried out after coding. In order to place testing on an equal footing, a second “V” dedicated to testing is integrated into the model. Both “V”s together give the “W” of the W-model.

 

Advantages of the W-Model

In the W-model the importance of the tests and the ordering of the individual test activities are made clear. In parallel to the development process in the narrower sense, a further process, the test process, is carried out; it does not start only once development is complete.
The strict division between constructive tasks on the left-hand side and the more destructive tasks on the right-hand side that exists in the V-model is done away with. In the W-model it is clear that such a division of tasks is not sensible and that closer co-operation between development and testing activities must exist. From the project outset onwards, testers and developers are entrusted with tasks and are treated as equal partners. During the test phase, the developer is responsible for the removal of defects and the correction of the implementation. In practice, this early and tight co-operation between the two groups can often avoid confrontational meetings.

The W-model comes closer to practice, where test effort frequently amounts to 40% of total project expenditure or more. The model clearly emphasises the fact that testing is more than just the construction, execution and evaluation of test cases.

 

Disadvantages of the W-Model

Models are simplifications of reality; in practice there are more relations between the different parts of a development process. However, a simple model is needed if all the people involved in a project are to accept it, which is also one reason why the simple V-model is so frequently used in practice.
The models of software development presented do not make clear how much of the available resources needs to be assigned to the individual activities. The W-model, too, gives the impression that the different activities place an equal demand on resources (time, personnel, etc.). In practice this is certainly not the case: the most important aspects vary from project to project, so the resource allocation is unlikely to be equal across activities. For highly critical applications the test activities certainly carry a higher, or at least equal, weighting compared with the other activities.

Spiral-Model

The spiral-model introduced a cyclical, prototyping view of software development. Tests were explicitly mentioned (risk analysis, validation of the requirements and of the development) and the test phase was divided into stages. The test activities included module, integration and acceptance tests. However, in this model too, testing follows the coding; the one exception is that the test plan should be constructed after the design of the system. The spiral-model also identifies no activities associated with the removal of defects.


Extreme Programming



A further model of software development is currently much discussed: Extreme Programming. Taking a simplistic view of the model, one could say that Extreme Programming does not use specifications. The test cases defined at the outset are used as a description of the requirements; after the implementation they are then used to help check the (sub-)product.
This aspect of Extreme Programming can also be found in the W-model: the left part of the “W” can simply be omitted, leaving just the testing activities as tasks up to the point of implementation. The requirements for the system to be developed are then extracted from the specified test cases, as sketched below.
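
To make this concrete, here is a minimal sketch of a test case acting as an executable requirement, in the spirit of test-first development. The Account class, its methods and the overdraft rule are invented for illustration and do not come from the text above; the point is only that a test written before the implementation states what the product must do.

    import unittest

    # Written first, this test is the requirement description: withdrawals
    # must reduce the balance, and overdrafts must be rejected. The Account
    # class below is a hypothetical implementation satisfying it.

    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class TestAccountRequirements(unittest.TestCase):
        def test_withdrawal_reduces_balance(self):
            account = Account(balance=100)
            account.withdraw(30)
            self.assertEqual(account.balance, 70)

        def test_overdraft_is_rejected(self):
            account = Account(balance=10)
            with self.assertRaises(ValueError):
                account.withdraw(50)

    if __name__ == "__main__":
        unittest.main()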

 

Extreme Programming Questions and Answers -- from xprogramming.com

How are Software Quality Assurance and Software Configuration Management integrated into Extreme Programming?

XP defines two levels of testing. The first is unit testing, which must be performed by the programmers as they work. Each class implemented must have programmer-developed unit tests, for everything that "could possibly break". These tests are to be written during coding of the class, preferably right before implementing a given feature. Tests are run as frequently as possible during development, and all unit tests in the entire system must be running at 100% before any developer releases his code. (By release, we mean transferring from his own code space to the code integration area. This is handled differently, of course, depending on the code management tools in place.)
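
As a rough sketch of that workflow (the class, the feature and the project layout here are assumptions, not taken from the answer above): the programmer writes the unit tests immediately before implementing the feature, covering what "could possibly break", and only releases code to the integration area once the whole suite runs at 100%.

    import unittest

    # Hypothetical class under development. The tests for parse() were
    # written right before the method itself and cover what "could
    # possibly break": a plain number, surrounding whitespace, empty input.

    class Temperature:
        @staticmethod
        def parse(text):
            text = text.strip()
            if not text:
                raise ValueError("empty input")
            return float(text)

    class TestTemperature(unittest.TestCase):
        def test_parses_plain_number(self):
            self.assertEqual(Temperature.parse("21.5"), 21.5)

        def test_ignores_surrounding_whitespace(self):
            self.assertEqual(Temperature.parse("  3 \n"), 3.0)

        def test_rejects_empty_input(self):
            with self.assertRaises(ValueError):
                Temperature.parse("   ")

    # Before release to the integration area, the entire system's unit
    # tests must pass, e.g. (assuming tests live in a tests/ directory):
    #     python -m unittest discover -s tests
    if __name__ == "__main__":
        unittest.main()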

The second level of testing is called functional testing. Each feature of the system (which is defined by something we call a User Story, rather like a Use Case) must have one or more functional tests that test it. The functional tests are the responsibility of what we call the "customer", the body responsible for defining the requirements.

The implementation and running of functional tests can be done by the Software QA group, and in fact this is an ideal way to do it.
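
A functional test differs from the unit tests above in that it exercises a whole feature from the outside, against a user story. The sketch below assumes a hypothetical story ("a customer can register and then log in") and an invented application facade; it only shows the shape such a customer-owned test might take.

    import unittest

    # Invented application facade standing in for the system under test.
    class ShopApplication:
        def __init__(self):
            self._users = {}

        def register(self, name, password):
            if name in self._users:
                raise ValueError("user already exists")
            self._users[name] = password

        def login(self, name, password):
            return self._users.get(name) == password

    # Functional test for the user story "a customer can register and
    # then log in", owned by the customer rather than the programmers.
    class TestRegistrationStory(unittest.TestCase):
        def test_registered_customer_can_log_in(self):
            app = ShopApplication()
            app.register("alice", "secret")
            self.assertTrue(app.login("alice", "secret"))

        def test_wrong_password_is_rejected(self):
            app = ShopApplication()
            app.register("alice", "secret")
            self.assertFalse(app.login("alice", "wrong"))

    if __name__ == "__main__":
        unittest.main()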

 

Within XP, are there any specification baselines, test baselines, QA Acceptance testing, and CM Release Management/Change Control?

XP is an inherently incremental process, with software being released to "production" as frequently as possible. This generally means that programmers release their work to the common development pool approximately daily, and that means that if a full system were built on that day, their code would be included in that build. The time period between full system builds varies depending on the environment: since you have chosen a particularly difficult integration language (C++), I could imagine that you would build less frequently. We would recommend, however, that the full system be integrated as often as possible, at least daily. (This may seem aggressive to you. We'd have to talk about what is possible in your environment.)
Since XP is incremental, developers are working in short time increments we call iterations: we recommend about three weeks. Features (user stories) are broken down to the point of detail that allows a developer and his partner to implement the stories they're working on in that time period. We like the functional tests for that iteration to be complete and available no more than half-way through the iteration. (This usually means that QA is writing tests for the next iteration while this one is going on.)

All through the iteration, programmers can use QA's functional tests to determine whether they have met the requirements. (They are also using their own unit tests to determine whether their individual classes are doing what they should. This is usually at a much finer level of detail.)
Baselines work this way: when the code for a story is released, all the functional tests for it should be in place, and will ideally be working. Inevitably some will not, especially with teams just beginning with XP. One of the quality measures in the process is the daily graph of performance on functional tests. The general shape of this graph, over the course of the full system release period, is that of two s-curves: the upper curve is the total number of tests written, the lower curve is the number running at 100%. A healthy project of course shows these curves coming together at 100% by the end of the schedule.

The code management software needs of course to reflect the requirements scheduled for release. This is determined by the "customers", as part of the planning components we call the commitment schedule (overall plan for a major release) and the iteration plan (plan for a (three week) iteration). The baseline of what is in the system tracks what is actually requested by the customers. Development doesn't care whether this is new functionality or a change to old. They don't care whether a given user story addresses something that was planned for or not. XP is completely flexible with regard to change management: development merely estimates how long any desired feature will take, and works on it when the "customer" schedules it into an iteration. (Dependencies of course exist, but we find that far fewer exist than most developers believe. Drilling into that subject is beyond the scope of this email.)

When do the all the customer sign-offs occur?

Customer sign-off is continuous. Each iteration has its functional tests. Everyone is fully up to date on which tests are working and which are not. If test scores are trailing implementation by too much, the customer will inevitably schedule more work against older features that are incorrect (or whose requirements have changed). When test scores are tracking implementation, the customer knows it and is comfortable requesting new functionality.
Because the test scores are public and visible, everyone has the same level of understanding of where quality is. Generally scores are showing a good curve toward release, and everyone gets increasing comfort as the release date shows up. And, of course, if tests are not tracking, everyone knows that and the priority of getting things right naturally increases.
The overall idea of this part of the process is to provide the most rapid feedback possible to everyone, customers and developers alike. That's why we like all the functional tests run every night. Next morning, if anything has been broken the day before, everyone knows it and can deal with it effectively (since it was only yesterday's work that could be the problem). The faster the feedback, the faster development of quality software can proceed.

 

What are the Quality Assurance and Software Configuration Management roles and responsibilities with Extreme Programming?

We prefer for there to be a separate organization for functional testing (probably exactly like your QA function, with testing results made public very quickly). XP, however, only says that there must be functional tests: it does not specify organizationally how they must be done. Experience is that testing is best done by a separate function - but one that is very tightly integrated with development rather than at the end of a long pipeline.

Configuration management is also up to the team. It is usually necessary to have one or more individuals responsible for CM. We have no special rules or practices addressing how a group would manage the requirement to build multiple systems from one code base. Our main approach would be: for each release configuration, there must be corresponding functional tests, and these must be run before that configuration is released to the (real) customer. We would think that development would proceed by running kind of a "union" of all the functional tests of all the configurations.
We'd probably have to talk more specifically about how your specific organization needs to build configurations to say much more about that.

 

Do you use IEEE, SEI, ISO9000 standards as references to acquire the fundamentals of defining accurate requirements for customers and software engineering users? How can a person write storyboards without having the basics of pinpointing and developing sound requirements?

We would agree that those who play the customer role have to know what they want. We do not, however, recommend any particularly formal requirements writing or recording mechanism. Instead, what we are working toward (XP is for small teams, after all) is to have a clear understanding in the heads of customers, developers, and testers as to what is wanted. Rather than have, say, an "analyst" sit down with the customer and laboriously translate his mumblings into something representing what is wanted, and then having a "designer" take the analysis and build a design, and so on, small teams function best if the customers and designer/developers talk to one another until they develop a common vocabulary of what is needed and how it will be done. In XP, we would like to have a common level of understanding in all heads, each focused on its own particular interests:
• Customers: what's needed, what's the business value, when do we need it?
• Developers: what's needed, how can I build this, how can I test my code, how long will it take?
• Testers: what's needed by the customers, how can I test whether developers have done it?
As you can see, the testers' functional tests are what close the loop, assuring everyone that what was asked for was what we got. The best way to do XP is with a separate functional testing organization that is closely integrated into the process. It would be delightful to have that organization run by an experienced QA manager trained in XP.

 

Is Extreme Programming not for Software Quality Engineering and Software Configuration Management practitioners?

XP is a development discipline that is for customers (in their role as specifiers and their role as investors and their role as testers and acceptors) and for developers. As such, the Quality Engineering and Configuration Management roles are critical to the effort. They have to be assigned and played in a way that is consistent with the mission of the group, the level of criticality of quality, and so on. We'd need to talk in detail about your situation to see just where the XP terminology connects with yours, but your QA functions need to be done in any effective software effort, whether in a separate organization or not. So XP certainly is for software quality engineering and software configuration management, as part of a healthy overall process.
That said, XP is aimed at smaller projects (like yours) and it sounds like yours has a much higher level of QE and CM than is often seen in companies of your size. That should give you a strong leg up in building quality software, and we should strengthen your contribution to quality as we bring XP into the team.

 
