======== Acceptance PUT Patterns ============
Whenever possible, write your acceptance tests as PUTs (or in any other automatically executable form) to check that your tool (or Pex enhanced with your tool) still behaves as expected after you change your tool's code base. For the PUTs described below, if achieving your goals requires more than the default number of runs, add annotations to your PUTs to instruct Pex to use more runs.
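For example, assuming your Pex version exposes exploration bounds such as MaxRuns and Timeout as named properties of the PexMethod attribute (check your version's API), raising the bounds could look like the sketch below:

```csharp
// Hypothetical bound values; tune them to your code under test.
// MaxRuns and Timeout are assumed here to be properties of
// PexMethodAttribute in your Pex version.
[PexMethod(MaxRuns = 2000, Timeout = 240)]
[PexExpectedGoals]
public void PUTForYourCodeUnderTest(int input)
{
    MethodUnderTest(input);
}
```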
Reachability Acceptance Test
If you expect a code location in your micro-benchmark or code under test to be covered by your tool (or Pex enhanced with your tool), insert a call to PexGoal.Reached() at that location. Then write a PUT that invokes the micro-benchmark or code under test, and annotate the PUT with [PexExpectedGoals]. The PUT will fail when Pex cannot generate test inputs that reach a PexGoal.Reached() call.
[PexMethod]
[PexExpectedGoals]
public void PUTForYourCodeUnderTest(int input)
{
    MethodUnderTest(input);
}
//you can inline this method body in your PUT
public void MethodUnderTest(int input)
{
    if (input * input > 10000)
        //here is the branch that this PUT intends to cover
        PexGoal.Reached();
}
Coverage Criterion Acceptance Test
Assume that you expect some coverage criterion, such as boundary coverage, to be satisfied in a branch of your micro-benchmark or code under test by your tool (or Pex enhanced with your tool), such as the branch (x > 100) below:
public int MethodUnderTest(int x)
{
    if (x > 100)
    {
        ...
    }
    return x;
}
You shall insert a monitoring-method invocation such as CheckBVCFor1stBranchMethodUnderTest(x) before the branch. In this monitoring method, include a branch whose condition checks the coverage criterion, and put PexGoal.Reached() in the true branch, as below:
public int MethodUnderTest(int x)
{
    CheckBVCFor1stBranchMethodUnderTest(x);
    if (x > 100)
    {
        ...
    }
    return x;
}
public void CheckBVCFor1stBranchMethodUnderTest(int input)
{
    if (input == 100)
        PexGoal.Reached();
}
Note that, to avoid interference with Pex's path exploration from the additional branches introduced by your monitoring method, you should instruct Pex to avoid exploring paths inside the monitoring method. To do that, you have two options:
Option 1: Add [assembly: PexSuppressUninstrumentedMethodFromType(typeof(MONITORINGCLASS))] to your PexAssemblyInfo.cs. This requires that you put all monitoring methods in MONITORINGCLASS, and that this class contains only methods whose bodies should not be explored.
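For instance, a minimal sketch of this setup (the class and file names are illustrative, not prescribed) could look like:

```csharp
// PexAssemblyInfo.cs
[assembly: PexSuppressUninstrumentedMethodFromType(typeof(MonitoringMethods))]

// MonitoringMethods.cs: contains only methods whose bodies
// Pex should not explore
public static class MonitoringMethods
{
    public static void CheckBVCFor1stBranchMethodUnderTest(int input)
    {
        if (input == 100)
            PexGoal.Reached();
    }
}
```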
Option 2 (recommended): Add
PexSymbolicValue.IgnoreComputation(() =>
    CheckBVCFor1stBranchMethodUnderTest(input));
to the beginning of your PUT (where input is the PUT's parameter).
Finally, write a PUT that invokes the micro-benchmark or code under test, and annotate it with [PexExpectedGoals]. The PUT will fail when Pex cannot generate test inputs that reach a PexGoal.Reached() call.
[PexMethod]
[PexExpectedGoals]
public void PUTForYourCodeUnderTest(int input)
{
    //if Option 1, you don't need the following statement
    PexSymbolicValue.IgnoreComputation(() =>
        CheckBVCFor1stBranchMethodUnderTest(input));
    MethodUnderTest(input);
}
Differential Acceptance Test
For regression test generation or mutation killing, similarly to a Coverage Criterion Acceptance Test, you can write a monitoring method to check whether the E (execution), I (infection), or P (propagation) condition of the PIE model is satisfied. Remember to add annotations to instruct Pex to ignore your monitoring method (except when checking E).
public int MethodUnderTestVer1(CUT c)
{
    ...
    x = x * (y + 1);
    ...
    return x;
}
public int MethodUnderTestVer2(CUT c)
{
    ...
    x = x * y;
    ...
    return x;
}
To check E, you can put PexGoal.Reached() before the changed location and write a PUT annotated with [PexExpectedGoals]:
public int MethodUnderTestVer2(CUT c)
{
    ...
    PexGoal.Reached();
    x = x * y;
    ...
    return x;
}
To check I (e.g., weak mutation testing), you can add a monitoring method whose body invokes PexGoal.Reached() when the infection condition holds, call it before the changed location, and write a PUT annotated with [PexExpectedGoals] that ignores the exploration of the monitoring method:
public int MethodUnderTestVer2(CUT c)
{
    ...
    CheckInfectionFor1stChangeMethodUnderTest(x, y);
    x = x * y;
    ...
    return x;
}
public void CheckInfectionFor1stChangeMethodUnderTest(int x, int y)
{
    if ((x * y) != (x * (y + 1)))
        PexGoal.Reached();
}
[PexMethod]
[PexExpectedGoals]
public void PUTForYourCodeUnderTest(CUT c)
{
    //if Option 1, you don't need the following statement
    PexSymbolicValue.IgnoreComputation(() =>
        CheckInfectionFor1stChangeMethodUnderTest(0, 0));
    MethodUnderTestVer2(c);
}
Note that when the difference is on a branch condition, e.g., from if (x > y) to if (x >= y), the monitoring method shall look like the one below (since the weak-mutation-killing condition is to cause the two versions to take different branches):
public void CheckInfectionFor1stChangeMethodUnderTest(int x, int y)
{
    if (((x > y) && !(x >= y)) ||
        (!(x > y) && (x >= y)))
        PexGoal.Reached();
}
To check P (e.g., strong mutation testing), you can add a monitoring method whose body invokes PexGoal.Reached() when the outputs of the two versions differ, call it after the method under test, and write a PUT annotated with [PexExpectedGoals] that ignores the exploration of the monitoring method. Note that here you need to make a copy of the other version (in case your PUT setup requires exploring both versions), so that you can use this version copy to do the expected tool-behavior checking while instructing Pex not to explore it.
public int MethodUnderTestVer1Copy(CUT c)
{
    ...
    x = x * (y + 1);
    ...
    return x;
}
public void CheckDiffForMethodUnderTest(CUT c, int ret)
{
    if (MethodUnderTestVer1Copy(c) != ret)
        PexGoal.Reached();
}
public CUT CloneCUT(CUT c)
{
    ...
}
[PexMethod]
[PexExpectedGoals]
public void PUTForYourCodeUnderTest(CUT input)
{
    //if Option 1, you don't need the following 3 statements
    PexSymbolicValue.IgnoreComputation(() =>
        CheckDiffForMethodUnderTest(input, 0));
    PexSymbolicValue.IgnoreComputation(() =>
        MethodUnderTestVer1Copy(input));
    PexSymbolicValue.IgnoreComputation(() =>
        CloneCUT(input));
    CUT cachedInput = CloneCUT(input);
    //above caching isn't needed for primitive argument types
    int ret = MethodUnderTestVer2(input);
    CheckDiffForMethodUnderTest(cachedInput, ret);
}
More information:
==========
Below is from Parameterized Test Patterns For Effective Testing with Pex - draft (PDF)
2.11 Reachability
Sometimes, test assumptions can be so complicated that it is not clear whether there is any test input that fulfills them. You should make sure that there is at least one.
Pattern. Indicate which portions of a PUT should be reachable.
A PUT fails when Pex does not find a way to reach a goal, indicated by calling the PexGoal.Reached method or raising the PexGoalException in a PUT annotated with the PexExpectedGoalsAttribute.
[PexMethod]
[PexExpectedGoals]
public void InvariantAfterParsing(string input)
{
    // complicated parsing code
    ComplexDataStructure x;
    bool success = ComplexDataStructure.TryParse(input, out x);
    // we are only interested in successful cases
    PexAssume.IsTrue(success);
    // the invariant should hold
    x.AssertInvariant();
    // and we want to see at least one case
    // where everything passes, and we get here
    PexGoal.Reached();
}
Multiple goals can be combined in a single PUT for more advanced scenarios. A list of goal identifiers can be passed to the constructor of the PexExpectedGoalsAttribute. Each goal identifier must be reached and notified in order for the PUT to succeed.
[PexMethod]
[PexExpectedGoals("failed;passed")]
public void InvariantAfterParsingMultiple(string input)
{
    // complicated parsing code
    ComplexDataStructure x;
    bool success = ComplexDataStructure.TryParse(input, out x);
    if (!success)
        // we want to make sure the failing case
        // is covered at least once
        throw new PexGoalException("failed");
    // the invariant should hold
    x.AssertInvariant();
    // and we want to see at least one case
    // where everything passes, and we get here
    PexGoal.Reached("passed");
}