Most software testing follows a basic pattern: provide an input that should produce a known output, then verify that the code actually produces that output. But how do we choose good input values?
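As a concrete illustration of the pattern, here is a minimal sketch in Java. The absoluteValue() function and the plain-assertion harness are hypothetical stand-ins, not from any particular framework:

```java
public class AbsoluteValueTest {
    // The code under test: returns the absolute value of n.
    static int absoluteValue(int n) {
        return n < 0 ? -n : n;
    }

    // A tiny assertion helper so the example runs without a test framework.
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        // Known input -> known output: verify the code produces it.
        check(absoluteValue(5) == 5, "positive input is returned unchanged");
        check(absoluteValue(-5) == 5, "negative input is negated");
        System.out.println("All tests passed.");
    }
}
```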
One option is to test all possible program inputs, and occasionally that is feasible. In most cases, though, there are simply too many possibilities. A good rule of thumb is to test what are known as "boundary values": values that sit at the boundary between different groups of input values. The exact definition of these groups depends on the algorithm under test. For example, if your program accepts an integer as input, a reasonable way to split the input into groups is:

- negative integers
- zero
- positive integers
This is because these groups tend to behave differently under common numerical operations; division by zero is the classic example. Off-by-one errors are also relevant here, since they essentially mean that an algorithm "crosses" from one group into another when it shouldn't. A good rule of thumb is to test 1, 0, and -1 (the boundaries between the groups above) as well as MAX_INT and MIN_INT (boundary values because fixed-width integers "roll over", or wrap around, at their extremes). That said, there is no substitute for thinking about the problem your code solves when choosing groups that will produce robust tests.
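Continuing the hypothetical absoluteValue() sketch from above, boundary-value tests might look like the following. Note that the final check genuinely fails: negating Integer.MIN_VALUE overflows back to Integer.MIN_VALUE in two's-complement arithmetic, so the result is negative. Catching exactly this kind of bug is why the extremes are worth testing:

```java
public class BoundaryValueTest {
    // Same naive implementation as before.
    static int absoluteValue(int n) {
        return n < 0 ? -n : n;
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        // Boundary values around zero.
        check(absoluteValue(1) == 1, "abs(1)");
        check(absoluteValue(0) == 0, "abs(0)");
        check(absoluteValue(-1) == 1, "abs(-1)");

        // The integer extremes. MAX_VALUE behaves as expected...
        check(absoluteValue(Integer.MAX_VALUE) == Integer.MAX_VALUE, "abs(MAX_INT)");

        // ...but MIN_VALUE exposes a roll-over bug: -Integer.MIN_VALUE
        // wraps around to Integer.MIN_VALUE, so this assertion throws.
        check(absoluteValue(Integer.MIN_VALUE) >= 0, "abs(MIN_INT) should be non-negative");

        System.out.println("All tests passed.");
    }
}
```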