Once the source code has been generated, the software must be tested to detect and correct errors, so that the user receives software with as few errors as possible. Software testing is the process of exercising the software product with the intent of finding errors. Good software testing helps in the delivery of a high-quality software product, which leads to more satisfied users and lower maintenance costs.
The following are some commonly used terms associated with testing.
An error (defect) is a deviation from the desired output; it refers to the difference between the actual output and the expected (correct) output.
A fault (bug) is caused by an error, and a fault can lead to the failure of the software. A failure is the inability of a system or one of its components to perform a required function according to its specifications.
A test case is a set of inputs to a system together with the corresponding set of expected outputs. A test suite is the set of all test cases.
Verification and Validation: Verification is the process of determining whether the output of one phase of software development conforms to that of its previous phase; it is carried out phase by phase. Validation, on the other hand, is the process of determining whether the fully developed system conforms to its requirement specifications; here each objective stated in the software requirement specification (SRS) is checked. In short, verification is done phase-wise, whereas validation is done on the final product to make it error free.
Testing Objectives
The objectives of testing are as follows:
Since testing is a costly process, test cases should be designed so that they have a high probability of finding errors. Test cases that are unable to find an error (when one is present) are simply a waste of time and resources.
Testing Principles
A software engineer must understand the basic principles of testing in order to design good test cases.
There are two approaches to testing. These are:
1. Functional testing (Black Box testing)
2. Structural testing (White Box testing)
Functional Testing: Functional testing involves only observing the output for certain input values; it is conducted at the software interface. This testing checks whether inputs are properly accepted and outputs are correctly produced; how the outputs are produced is not examined.
No attempt is made to analyze the code that produced the output, and the internal structure of the program is ignored. We can say that the program behaves like a black box: its contents are not known, and its function is understood only in terms of its inputs and outputs. For example, we can operate a computer with purely black-box knowledge of it.
The following methods are used in black-box testing:
Boundary Value Analysis: Typical programming errors often occur at the boundary values of the inputs; for example, the programmer may use <= instead of < or vice versa. Boundary value analysis leads to the selection of test cases at the boundaries of the different equivalence classes. For example, if a function requires 4<=a<=10, then a should be checked for the values 3, 4, 10 and 11.
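As a small illustration (the function name and the validation logic are assumptions made for the example), the following sketch checks the range 4<=a<=10 just below, on and just above each boundary:

```python
def in_range(a):
    """Hypothetical function under test: accepts values with 4 <= a <= 10."""
    return 4 <= a <= 10

# Boundary value analysis: test just below, on, and just above each boundary.
boundary_cases = {
    3: False,   # just below the lower boundary
    4: True,    # on the lower boundary
    10: True,   # on the upper boundary
    11: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected, f"failed for a = {value}"
print("all boundary value test cases passed")
```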
Equivalence Class Partitioning: In this method the domain of input values to a program is partitioned into a set of equivalence classes. The partitioning is done in such a way that the behavior of the program is similar for every input belonging to the same equivalence class; the main idea is that any input from an equivalence class is as good as any other input from that class. For example, if -10<=a<=100, then we can make three equivalence classes:
· All inputs less than -10 (class 1)
· All inputs between -10 and 100 (class 2)
· All inputs above 100 (class 3)
In class 1, -20 is as good as -25; in class 2, -5 is as good as 95; and in class 3, 200 is as good as 1000.
There are some general guidelines for forming equivalence classes:
If the input is given as a range of values, then three equivalence classes are made, as above.
If the input can take only certain discrete values, then two classes are formed: one for the valid inputs and one for the invalid inputs. For example, if a can take only the values 1, 4 and 7, then one equivalence class is {1, 4, 7} and the other class contains all other integers.
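A minimal sketch of how one representative value might be picked from each class, using the range -10<=a<=100 from above (the function name is an assumption made for the example):

```python
def accept(a):
    """Hypothetical function under test: valid only for -10 <= a <= 100."""
    return -10 <= a <= 100

# One representative input is picked from each equivalence class; any other
# member of the same class is assumed to behave the same way.
equivalence_classes = [
    ("inputs below -10 (class 1, invalid)",   -20, False),
    ("inputs between -10 and 100 (class 2)",   -5, True),
    ("inputs above 100 (class 3, invalid)",   200, False),
]

for name, representative, expected in equivalence_classes:
    assert accept(representative) == expected, f"failed for {name}"
print("one representative from each equivalence class tested")
```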
Decision Table Based Testing: Decision tables are useful for describing situations in which a number of combinations of actions are taken under varying sets of conditions. A decision table has four parts:
1) Condition Stub: Sets forth, in question form, the conditions that may exist. It is located in the upper left corner.
2) Action Stub: Outlines, in narrative form, the actions to be taken to meet each condition. It is located in the lower left corner.
3) Condition Entries: Provide the answers to the questions asked in the condition stub quadrant. They are located in the upper right corner.
4) Action Entries: Indicate the appropriate actions resulting from the answers to the conditions in the condition entry quadrant. They are located in the lower right corner.
Take the example of transferring money online to an account that has already been added and approved.
Here the conditions for transferring money are ACCOUNT ALREADY APPROVED, OTP (One Time Password) MATCHED, and SUFFICIENT MONEY IN THE ACCOUNT.
The actions performed are TRANSFER MONEY, SHOW A MESSAGE AS INSUFFICIENT AMOUNT, and BLOCK THE TRANSACTION IN CASE OF A SUSPICIOUS TRANSACTION.
Here we decide under which combination of conditions each action is to be performed; the resulting decision table is shown below.
A Y or N in a condition entry shows whether the condition is true or false, and an X in an action entry shows that the corresponding action in the action stub will be taken.
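As a sketch of how each column of such a decision table becomes one test case, the rule set below is an assumed completion of the money-transfer example (the exact condition combinations are not given in the table above), and the transfer function is a hypothetical implementation under test:

```python
# Each column of the decision table becomes one test case.
# Conditions: account approved (C1), OTP matched (C2), sufficient money (C3).
# Actions: transfer money, show "insufficient amount", block the transaction.
# The rules below are an assumed example, not taken from the original table.
decision_table = [
    # (C1,    C2,    C3,    expected action)
    (True,  True,  True,  "TRANSFER MONEY"),
    (True,  True,  False, "SHOW INSUFFICIENT AMOUNT"),
    (True,  False, True,  "BLOCK THE TRANSACTION"),
    (False, True,  True,  "BLOCK THE TRANSACTION"),
]

def transfer(approved, otp_matched, sufficient_money):
    """Hypothetical money-transfer logic under test."""
    if not approved or not otp_matched:
        return "BLOCK THE TRANSACTION"
    if not sufficient_money:
        return "SHOW INSUFFICIENT AMOUNT"
    return "TRANSFER MONEY"

for approved, otp, money, expected in decision_table:
    assert transfer(approved, otp, money) == expected
print("all decision table test cases passed")
```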
Cause Effect Graphing Technique
Boundary value analysis and equivalence partitioning do not check combinations of inputs; they consider only single input conditions. However, combinations of inputs may result in interesting situations, and these situations should also be tested.
The following process is used to derive test cases:
1. A cause is an input condition or an equivalence class of input conditions; an effect is an output condition. Each cause and effect is assigned a unique number.
2. The semantic content of the specification is analyzed and a Boolean graph is constructed. This is the cause-effect graph.
3. The graph is converted into a decision table.
4. Each column of the table represents a test case.
The basic notation used for the cause-effect graph is shown in the accompanying figure.
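As a rough sketch of the same idea in code (the causes and effects here are hypothetical), causes can be modelled as Boolean inputs and effects as Boolean functions of them; enumerating every combination of causes produces the columns of the corresponding decision table:

```python
from itertools import product

# Hypothetical causes (input conditions) C1, C2 and effects (output conditions):
# effect E1 fires when both causes hold; effect E2 fires otherwise.
def effect_e1(c1, c2):
    return c1 and c2

def effect_e2(c1, c2):
    return not (c1 and c2)

# Enumerate all cause combinations: each row corresponds to one column of the
# decision table, and hence to one candidate test case.
print("C1     C2     E1     E2")
for c1, c2 in product([True, False], repeat=2):
    print(f"{c1!s:6} {c2!s:6} {effect_e1(c1, c2)!s:6} {effect_e2(c1, c2)!s:6}")
```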
Structural Testing
White-box or structural testing designs test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on both their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures and ensure their validity.
One may ask: when black-box testing is already done, what is the need for white-box (glass-box) testing? Following are some of the reasons:
1. Logical errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors often lie in parts of the program that are not frequently executed, e.g. error-handling conditions.
2. We may believe that a logical path is unlikely to be executed when, in fact, it is executed regularly.
3. Typographical errors may exist in a path that is not exercised by black-box testing.
There are several techniques for structural testing (a short code sketch illustrating coverage and cyclomatic complexity follows this list). Some of them are:
1. Statement coverage: In this methodology, test cases are designed so that each statement of the module is executed at least once. The basic idea is that an error cannot be detected unless the statement containing it is executed; in other words, an error existing in one part of the program cannot be discovered until that part is executed. However, a statement that is executed for one value may behave properly for that input value only.
2. Branch testing: In this method, test cases are designed so that each branch condition is evaluated to both true and false. Branch testing also guarantees statement coverage, so it is a stronger technique than statement coverage.
3. Condition testing: In this method, test cases are designed so that each component of a composite conditional expression is given both true and false values. Branch testing uses the simplest condition-testing strategy; in condition testing, the components of compound conditions are also given both true and false values.
4. Path testing: In this strategy, all the independent paths through a module are executed at least once; the test cases are designed to execute each path at least once. An independent path is defined in terms of the control flow graph (CFG) of a program.
A control flow graph describes the sequence in which the different instructions of a program are executed.
5. Cyclomatic complexity: The cyclomatic complexity of a program gives the number of independent paths in a module. If a control flow graph has N nodes and E edges, then its cyclomatic complexity V is given by
V = E - N + 2
or V = total number of bounded areas + 1.
The cyclomatic complexity provides a lower bound on the number of test cases that must be designed and executed to guarantee coverage of all independent paths in the program.
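To make the coverage and complexity ideas above concrete, here is a minimal sketch (the function and its control flow graph are assumptions made for the example): one decision gives two independent paths, so two test cases achieve statement, branch and path coverage, and the cyclomatic complexity works out to 2:

```python
def classify(x):
    """Hypothetical module under test containing a single decision."""
    if x > 0:                     # the only predicate (decision) node
        label = "positive"        # independent path 1
    else:
        label = "non-positive"    # independent path 2
    return label

# A single test case (x = 5) would execute only the 'if' part; a second case
# is needed so that every statement, both branch outcomes and both
# independent paths are exercised at least once.
test_cases = {5: "positive", -3: "non-positive"}
for x, expected in test_cases.items():
    assert classify(x) == expected

# Cyclomatic complexity from the control flow graph of classify():
#   nodes: decision, if-part, else-part, return            -> N = 4
#   edges: decision->if, decision->else,
#          if->return, else->return                        -> E = 4
E, N = 4, 4
V = E - N + 2                     # V = 2 independent paths
print("cyclomatic complexity V =", V)
```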
Test Activities
There are three phases (levels) of testing.
1. Unit testing
2. Integration testing
3. System testing
Unit testing: In unit testing each module is tested in isolation. The focus is on the smallest unit of software design, which is the module. Unit testing is white-box oriented. Test cases are applied and the output is compared with the expected output. Data structures are examined to ensure that the data they store maintain their integrity during all steps of program execution. Boundary conditions are tested to ensure that the module operates properly at its boundaries. All the independent paths are tested at least once, and all the error-handling paths are tested.
The main purpose of unit testing is to find and remove as many defects as possible at an early stage. Unit testing is preferred because a single module is small enough that errors can be located easily and the module can be tested exhaustively.
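A minimal unit-test sketch using Python's built-in unittest module (the unit under test, its boundary and its error-handling behaviour are assumptions made for the example):

```python
import unittest

def discount(amount):
    """Hypothetical unit under test: 10% discount for amounts of 100 or more."""
    return amount * 0.9 if amount >= 100 else amount

class DiscountUnitTest(unittest.TestCase):
    def test_below_boundary(self):
        self.assertEqual(discount(99), 99)        # just below the boundary

    def test_on_boundary(self):
        self.assertEqual(discount(100), 90.0)     # on the boundary

    def test_error_handling_path(self):
        with self.assertRaises(TypeError):
            discount("not a number")              # error-handling path

if __name__ == "__main__":
    unittest.main()
```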
But there are some practical problems in unit testing: how can a module run when there is nothing to call it and nothing for it to call? Because a module is not a stand-alone program, driver and stub software must be developed for each unit. A driver module is used to call the module under unit test; usually the driver is nothing more than a main program that accepts test case data, passes the data to the module under test and prints the results.
A stub is used in place of the subordinate routines called by the module under unit test. A stub uses the subordinate module's interface, may do some minimal data manipulation, prints verification of entry into the routine and returns.
As drivers and stubs are pure overhead, they should be kept as simple as possible. Some components cannot be tested with simple stubs or drivers, so testing of those modules can be postponed until integration testing.
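A rough sketch of a driver and a stub (all names and values are assumptions made for the example): the stub stands in for a subordinate module that is not yet available, while the driver is little more than a main program that feeds test data to the module under test and prints the results:

```python
# Stub: stands in for the subordinate module 'tax_service' that the unit under
# test would normally call; it records the call and returns a canned answer.
def tax_service_stub(amount):
    print(f"stub: tax_service called with amount={amount}")
    return 0.18 * amount          # fixed computation, no real subordinate logic

# Module under unit test; its real subordinate is replaced by the stub.
def total_price(amount, tax_service=tax_service_stub):
    return amount + tax_service(amount)

# Driver: accepts test case data, passes it to the module under test
# and prints the results.
if __name__ == "__main__":
    for amount, expected in [(100, 118.0), (50, 59.0)]:
        result = total_price(amount)
        print(f"driver: total_price({amount}) = {result}, expected {expected}")
        assert result == expected
```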
Integration testing: With unit testing one can be sure that each module is implemented correctly, but there is still a chance that the interfaces between modules are not correct; for example, data may be lost across an interface. For this reason integration testing is done. Integration testing can be done in the following ways:
1. Top-down integration: This is an incremental approach to testing. In top-down integration, modules are integrated from the top downwards, adding one module at a time until the entire tree is integrated. The modules can be integrated in a depth-first or breadth-first manner. The following steps are performed:
a. The main module is used as a test driver and stubs are substituted for all subordinate components (modules).
b. The stubs are replaced by actual modules one at a time, in depth-first or breadth-first order depending on the strategy used.
c. Tests are conducted as each component is integrated.
d. On completion of each set of tests, another stub is replaced with the real module.
e. Regression testing is conducted to ensure that new errors have not been introduced.
2. Bottom-up integration: This strategy works from the bottom upwards until the entire tree is integrated. It eliminates the need for stubs, because the components are integrated from the bottom up and the subordinate modules are therefore always available. Bottom-up integration can be implemented as follows:
a. Low-level modules are combined into clusters, each of which performs a specific function.
b. A driver is written to coordinate the input and output of the test cases.
c. The cluster is tested.
d. Drivers are removed and the clusters are combined, moving upward.
3. Sandwich integration: Modules are integrated from the top downwards as well as from the bottom upwards, meeting somewhere in the middle.
4. Regression testing: Each time a new module is added during integration, the software changes: new data paths are established, new control logic is invoked, and new inputs and outputs may occur. These changes may cause problems in modules that previously worked correctly. Regression testing is the re-execution of some of the tests that have already been executed, to ensure that no new errors have been introduced.
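A minimal regression-testing sketch (the test functions are hypothetical): the tests that already passed are kept in a suite and re-executed every time a new module is integrated, so that a newly introduced error is caught immediately:

```python
# Tests that already passed before the new module was integrated.
def test_login():
    assert "user".upper() == "USER"

def test_billing():
    assert round(100 * 1.18, 2) == 118.0

# Test added together with the newly integrated module.
def test_new_report_module():
    assert len("report".split()) == 1

regression_suite = [test_login, test_billing]    # re-executed after every change
new_tests = [test_new_report_module]

for test in regression_suite + new_tests:
    test()
    print(f"{test.__name__}: passed")
```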
System Testing: Software is only one component of a larger computer system, so it should be tested on the actual hardware on which it is ultimately going to be used. If it is general-purpose software, such as an operating system, it should be tested on the various kinds of hardware it may run on. System testing is performed to check the performance of the complete system, not merely to find faults.
During system testing various parameters are checked. Some of them are:
Usable: Is the product easy to use?
Secure: Is access to sensitive data restricted?
Documented: Is the software properly documented, i.e. are the manuals complete and understandable?
Compatible: Is existing data usable?
There are various types of system testing:
· Recovery testing
· Security testing
· Stress testing
· Performance testing
Recovery Testing: This is a system test that forces the software to fail in different ways and then verifies that recovery is performed properly. If recovery is automatic, i.e. performed by the system itself, then the following are evaluated:
· Re-initialization
· Data recovery
· Checkpoint mechanism
· Restart
If recovery requires human intervention, the MTTR (Mean Time To Recover) is recorded to determine whether it is within acceptable limits.
Security Testing: This attempts to verify that the protection mechanisms built into the system will in fact protect it from improper penetration. The tester tries various ways to break the security of the system, which should be designed so that breaking its security is much more expensive than the value of the information obtained.
Stress Testing: This executes the system in a manner that demands resources in abnormal quantity, frequency or volume. The idea is to try to break the system under abnormal conditions and note the effect. A variation of stress testing is sensitivity testing, which attempts to detect data combinations within valid input classes that may cause instability or improper processing.
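A rough sketch of a stress test (the component under test and the load figures are assumptions made for the example): the component is driven with an abnormal number of oversized requests and any failures are recorded:

```python
import random
import time

def handle_request(payload):
    """Hypothetical component under test."""
    return sorted(payload)

# Abnormal quantity, frequency and volume: many requests, each far larger than
# a typical input, submitted back to back, while failures are counted.
failures = 0
start = time.time()
for _ in range(500):
    payload = [random.random() for _ in range(10_000)]   # oversized input
    try:
        handle_request(payload)
    except Exception:
        failures += 1
elapsed = time.time() - start
print(f"500 oversized requests handled in {elapsed:.2f}s, failures: {failures}")
```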
Performance Testing: This is designed to test the run-time performance of the software within the context of an integrated system. Performance testing occurs throughout all the steps of the testing process; even at the unit level, the performance of an individual module may be assessed while white-box testing is carried out.
True system performance can only be determined once the computer system has been fully integrated and tested. Performance tests are often carried out together with stress testing and may require both hardware and software instrumentation.
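A minimal performance-testing sketch using Python's standard timeit module (the function under test and the acceptable limit are assumptions made for the example): the run time of an individual module is measured and checked against an assumed requirement:

```python
import timeit

def search(items, target):
    """Hypothetical module whose run-time performance is being measured."""
    return target in items

items = list(range(100_000))
elapsed = timeit.timeit(lambda: search(items, 99_999), number=100)

print(f"100 searches took {elapsed:.4f} s")
# The limit below is an assumed figure, not a real requirement.
assert elapsed < 5.0, "performance requirement not met"
```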
Validation Testing
When the software has been tested, it is ready for validation testing. Validation testing is of two types.
Alpha and Beta Testing: The alpha test is conducted by a customer at the developer's site. Alpha tests are done in a controlled environment; the testing is done by the customer (user) with the developer present.
The beta test is conducted at one or more customer sites by the end users of the software. Unlike alpha testing, the developer is not present, and the testing environment is live and not under the developer's control. All the problems encountered are recorded by the customer and reported to the developer at regular intervals. When all the problems have been rectified, the software is released for use.
Debugging: Debugging is not testing, but it always occurs as a consequence of testing. When a test is successful, that is, when it detects an error, the process of debugging starts in order to remove that error.
Debugging is a difficult task because:
· In highly coupled programs the symptom may appear in one part of the program while the cause may lie in another part.
· Sometimes the symptom of an error may disappear when some other error is corrected.
· Sometimes the symptom may be caused by round-off errors.
· The error may be caused by synchronization problems.