False positives vs. false negatives

We use statistics to evaluate how probable the observed data are. Any statistical test can make two kinds of errors:

False positive — The test shows that a site is impacted when it is actually clean. Also known as a Type I error. Note: α is the false positive rate. See http://bit.ly/type-one-error for more information.

False negative — The test shows that a site is clean when it is actually impacted. Also known as a Type II error. Note: β is the false negative rate.

Decreasing the false positive rate increases the false negative rate, and vice versa. The Unified Guidance recommends an annual site-wide false positive rate (SWFPR) of 10%: each year there is a 10% chance of indicating a release when none has occurred. It is very important to keep this in mind. Programs are often set up instead with a 5% false positive rate for each individual test. On a project with five parameters in five wells sampled quarterly (100 tests a year), that yields more than a 99% chance of at least one false positive per year, with about five expected on average.
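To make the arithmetic concrete, here is a minimal Python sketch (the function names are ours, not from the Unified Guidance) that computes the chance of at least one false positive across n independent tests, plus the Šidák-style per-test rate that would hold the annual SWFPR at 10%:

```python
def p_at_least_one_false_positive(alpha: float, n_tests: int) -> float:
    """Chance of at least one false positive in n independent tests,
    each run at a per-test false positive rate of alpha."""
    return 1 - (1 - alpha) ** n_tests


def per_test_alpha_for_swfpr(swfpr: float, n_tests: int) -> float:
    """Sidak-style per-test alpha that holds the site-wide rate at swfpr."""
    return 1 - (1 - swfpr) ** (1 / n_tests)


# 5 parameters x 5 wells x 4 quarterly events = 100 tests per year
print(p_at_least_one_false_positive(0.05, 100))  # ~0.994, i.e., over 99%
print(0.05 * 100)                                # ~5 false positives expected per year

# Per-test alpha needed to hold the annual SWFPR at 10% across 100 tests
print(per_test_alpha_for_swfpr(0.10, 100))       # ~0.00105
```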

Because the 10% SWFPR is spread across all of the statistical tests, reducing the number of tests increases the overall power of the statistical program. For example, if a program requires sampling 20 parameters but 17 of them have historically been non-detect, we can remove those 17 parameters from the formal statistics plan and handle them with the Double Quantification rule. We can then run formal statistics on the remaining 3 parameters, thereby increasing the power of the program, as the sketch below illustrates.
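Continuing the sketch above, and assuming the same well count and sampling frequency as the earlier example (an assumption; the passage does not state them for this program), a Šidák-style split of the 10% budget shows what removing the 17 non-detect parameters buys:

```python
def per_test_alpha_for_swfpr(swfpr: float, n_tests: int) -> float:
    """Sidak-style per-test alpha that holds the site-wide rate at swfpr."""
    return 1 - (1 - swfpr) ** (1 / n_tests)


SWFPR = 0.10
WELLS, EVENTS = 5, 4  # assumed: 5 wells sampled quarterly, as in the earlier example

alpha_before = per_test_alpha_for_swfpr(SWFPR, 20 * WELLS * EVENTS)  # 400 tests/year
alpha_after = per_test_alpha_for_swfpr(SWFPR, 3 * WELLS * EVENTS)    # 60 tests/year

print(alpha_before)  # ~0.00026
print(alpha_after)   # ~0.00175; each remaining test gets ~6.7x the alpha,
                     # which translates directly into more statistical power
```

Each of the 60 remaining tests can run at a noticeably larger significance level while the site-wide rate stays at 10%, which is exactly where the added power comes from.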