Testing
Course Content Specification
Describe, exemplify and implement a comprehensive final test plan to show that the functional requirements are met.
Identify syntax, execution, and logic errors at this level.
Describe and exemplify debugging techniques:
dry runs
trace tables/tools
breakpoints
watchpoints
‘Good’ software is…
A philosophical view…
Purpose of Testing
What is the purpose of testing any product?
Is it fit for purpose - does it meet the specification?
Is it robust?
Is it reliable?
Testing has to follow a plan:
It has to be systematic
It has to be comprehensive
What are we testing?
What is the function we want to test?
What data/actions do we need to perform to test this?
What is the expected (correct) output?
In an ideal world testing would be exhaustive - meaning that every single possible condition would be tested. This is rarely practical. However, your testing can be comprehensive. One way is to ensure that you:
use a wide range of test data
test modules/functions individually
test that modules and functions integrate correctly with other modules/functions.
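Testing a function individually before integration can be sketched with Python's unittest module (the grade function and its pass mark are illustrative, not from a specific program in these notes):

```python
import unittest

def grade(mark):
    # Illustrative function under test: a pass is 12 or more.
    return "Pass" if mark >= 12 else "Fail"

class TestGrade(unittest.TestCase):
    # The function is tested on its own, before being
    # integrated with the rest of the program.
    def test_pass_boundary(self):
        self.assertEqual(grade(12), "Pass")

    def test_fail_boundary(self):
        self.assertEqual(grade(11), "Fail")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGrade)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Integration testing would then combine this tested function with the modules that call it and check that they work together correctly.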
Test Data
Although this was covered at National 5, you must remember that when testing programs you have to use as wide a range of test data as possible. You should also ensure you test a range of types of data; these are detailed in the table below.
Types of Test Data
Normal - data within the expected range, which the program should accept.
Extreme - data on the boundaries of the valid range. Extreme data can also be thought of as boundaries.
Exceptional - invalid data, which the program should reject or handle gracefully.
Testing table
Imagine a program that accepts and grades marks between 0 and 20, where a pass is 12 or more.
For each input you should show the expected output and the actual output, and make a comment.
This may be as simple as 'As expected'. A sample is shown below:
A suitable test table:
Type of data   Input   Expected output   Actual output   Comment
Normal         15      Pass              Pass            As expected
Extreme        0       Fail              Fail            As expected
Extreme        20      Pass              Pass            As expected
Exceptional    25      Error message     Error message   As expected
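The grading program above could be exercised with a short sketch like this (the function name grade and the error message are illustrative; the 0-20 range and pass mark of 12 come from the example):

```python
def grade(mark):
    # Illustrative grading program: marks run 0-20, a pass is 12 or more.
    if mark < 0 or mark > 20:
        return "Error"  # exceptional data is rejected
    return "Pass" if mark >= 12 else "Fail"

print(grade(15))  # normal data -> Pass
print(grade(0))   # extreme data (lower boundary) -> Fail
print(grade(20))  # extreme data (upper boundary) -> Pass
print(grade(12))  # pass-mark boundary -> Pass
print(grade(25))  # exceptional data -> Error
```

Each call corresponds to one row of the test table, so the printed results can be compared against the expected outputs.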
Phases of Testing
There are many types of testing during a project but broadly there are two main phases of testing:
Alpha
Beta
Alpha Testing
This is internal testing
It is performed pre-release
It is completed by the development team
Beta Testing
This tests the program in real working conditions
It is completed by members of the public or an independent test group
Feedback is provided to the development team
Types of Errors (N5 recap)
Syntax
These are mistakes in the programming 'grammar', e.g. forgetting a semicolon or failing to close a bracket.
The program will fail translation, so it won't run.
Execution
These are errors detected at run-time, e.g. dividing by zero.
The program will usually fail during execution.
Logic
These are errors in the design of the program, e.g. passing the wrong data into a function.
The program will usually run but produce incorrect output, or may fail during runtime.
Test Plan
The purpose of a test plan is to ensure that you test your system systematically and comprehensively.
The test plan must test the functionality of the software.
A successful test is one that finds errors!
Try to find the errors!
What are your test cases?
This can be thought of as the ‘user’ interaction with your system
What test data will you use? Your system may not be populated, so you may need to alter files or create data to test with.
What data will you use in your system to ensure that your tests are valid?
Debugging Techniques
What if your program demonstrates an error but you cannot find its source?
There are various debugging techniques/tools that can assist you in locating the error:
Dry Runs
Trace Tables/Tools
Breakpoints
Watchpoints
Dry Runs and Trace Tables
A dry run is when you manually step through the lines of code without executing the program. This can be useful for finding logic errors.
A dry run may also be used in conjunction with a trace table:
Trace Table Example
Let's look at the following code, which should be a find-minimum algorithm, BUT THERE IS AN ERROR.
The code uses the following array:
values = [5,9,2,6,1,12]
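The original listing is not reproduced in these notes, so the following is a reconstruction consistent with the trace table that follows (the names values, minpos and x come from it); the deliberate error is marked:

```python
values = [5, 9, 2, 6, 1, 12]

minpos = 0
for x in range(1, len(values)):
    # ERROR: this comparison should be <, so the code
    # actually finds the position of the maximum.
    if values[x] > values[minpos]:
        minpos = x

print(minpos)          # 5 - the position of 12, the maximum
print(values[minpos])
```

A correct find-minimum would use `values[x] < values[minpos]` and finish with minpos equal to 4 (the position of 1).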
When you create a trace table you list the variables and then populate it as you work through the algorithm, or after each iteration of a loop. Using the values [5,9,2,6,1,12], the table below traces the variable minpos:
x    values[x]    minpos
-    -            0
1    9            1
2    2            1
3    6            1
4    1            1
5    12           5
The code is accidentally performing a find maximum: we would expect minpos to be 0, then 2, then 4, but instead it becomes 1 and finally 5.
Expressions in Trace Tables
Expressions can also be included in trace tables. These can be useful for debugging logic errors.
In the trace table, the statement
if values[x] > values[minpos]
has been evaluated and recorded as True or False at each step.
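A trace table with an expression column can also be generated by the program itself. This sketch instruments the buggy find-minimum example above, recording the result of the comparison on every iteration:

```python
values = [5, 9, 2, 6, 1, 12]

minpos = 0
rows = []
for x in range(1, len(values)):
    condition = values[x] > values[minpos]  # the buggy comparison
    if condition:
        minpos = x
    rows.append((x, values[x], condition, minpos))

# Print the trace table: one row per loop iteration.
print("x  values[x]  condition  minpos")
for row in rows:
    print(*row)
```

The True entries show exactly which iterations changed minpos, making the faulty comparison easy to spot.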
Breakpoints
You set breakpoints to stop execution of the code at specific lines of code. At this point the contents of variables can be examined.
This is a way to ascertain whether a piece of code is actually being executed, which is particularly useful for debugging conditional statements, as you can check whether the variables really match the conditions you think they do.
In the screenshot, execution has halted at line 34 and the contents of the variables can be inspected.
Useful for debugging – particularly Logic Errors
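As a sketch, in Python a breakpoint can be set in code with the built-in breakpoint() call (Python 3.7+), or on a specific line in an IDE. Here the call is left commented out so the program runs uninterrupted (grade is an illustrative function, not one from these notes):

```python
def grade(mark):
    # Uncommenting the next line pauses execution here in the pdb
    # debugger, where the value of `mark` can be inspected before
    # the condition is evaluated.
    # breakpoint()
    if mark >= 12:
        return "Pass"
    return "Fail"

print(grade(12))  # checking the boundary: is the condition really met?
```

While paused at the breakpoint you can print variables, step line by line, and confirm that the conditional branch you expect is the one actually taken.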
Watchpoints
Watchpoints are similar to breakpoints but are attached to variables. They stop the execution of the program when a variable changes, or when the contents of a specific variable(s) or expression meet a particular condition. This can happen at any point in the execution of the code.
This can be useful, for example, if a file import is failing at record 1200 of 2000: you could stop the program when the counter variable is equal to 1199 so that you can examine the variable contents at that point.
Or you may want to halt the program only when a variable changes.
A watchpoint set on the condition counter == 1, for example, will halt the execution of the program when the variable counter is equal to 1.
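Most debuggers provide watchpoints directly. As a rough sketch of the idea, Python's sys.settrace hook can simulate one; the record-import scenario and all names here are illustrative:

```python
import sys

snapshots = []

def watchpoint(frame, event, arg):
    # Simulated watchpoint: take one snapshot of the local variables
    # the first time `counter` reaches 1199 (a real debugger would
    # pause execution here instead).
    if event == "line" and not snapshots and frame.f_locals.get("counter") == 1199:
        snapshots.append(dict(frame.f_locals))
    return watchpoint

def import_records(records):
    counter = 0
    for rec in records:
        counter += 1  # imagine record 1200 being the one that fails
    return counter

sys.settrace(watchpoint)
total = import_records(range(2000))
sys.settrace(None)

print(total)                    # 2000
print(snapshots[0]["counter"])  # 1199 - state captured just before the failure
```

The snapshot holds the variable contents at the moment of interest, which is exactly what a debugger's watchpoint would let you inspect interactively.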