Janine:
Thanks again for sending me your poster from the NIST forensics meeting.
It occurs to me that you are someone who might know the answer to a question I've long had.
I'm interested in computational integrity. Even assuming we have carefully guarded, good, reliable data, we could mess everything up by doing the calculations on them wrong. This would include computational and mathematical mistakes, such as floating-point errors and human blunders, and would perhaps extend to the use of unwarranted assumptions or inappropriate models in calculations.
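(As a trivial illustration of the floating-point class, in a Python session:

    # 0.1 and 0.2 have no exact binary representation,
    # so their sum is not exactly 0.3.
    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False

Tiny on its own, but discrepancies like this can compound over long chains of calculations.)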
My big question is whether there is any evidence that the advent of computers has changed the rate of computational errors. I think people usually assume that computers have improved calculations, but there are ergonomic studies of spreadsheets suggesting they are quite error-prone. If people are doing more calculations on computers rather than by hand, there could actually be more errors per computation.
Do you happen to know of any work in this area that might address my question?
Best regards,
Scott
Scott,
Your hypothesis is quite interesting. My assessments of laboratory work have ranged from highly automated systems, where a LIMS controlled workflow and instrumentation and collected and processed data, to largely manual systems, where data were manually recorded and processed. In my experience, there is an alarming tendency to overlook the potential for data errors when IT systems are involved. Few labs have implemented effective software quality systems, even though they have in-house data processing systems. For decades, conventional wisdom among lab managers has recognized a manual entry/transcription error rate of 3-5%. Good labs have put controls and measures in place in an attempt to catch these errors.
During audits, I've found corrupted spreadsheets in use, and data systems that were reporting results associated with the wrong samples. Ironically, when these issues were brought to the attention of forensic labs, their position tended to be that these weren't "real" errors, since they got the right measurement result but just "calculated it wrong" or "reported it wrong". Your point that computational systems allow labs to make more errors per computation is right on point. In Arizona, a lab's use of an automated system meant that it took many months to realize it had been using the wrong calibrator values to compute every batch of alcohol results.
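To make that failure mode concrete, here is a minimal sketch of how one wrong calibrator value silently biases every result in every batch (the response-factor model, names, and numbers are hypothetical, not the Arizona lab's actual method):

    # Hypothetical single-point calibration for an alcohol analysis.
    TRUE_CALIBRATOR = 0.100   # g/100 mL, the certified calibrator value
    WRONG_CALIBRATOR = 0.080  # g/100 mL, the value actually configured

    def concentration(sample_response, calibrator_response, calibrator_value):
        # Linear response model: concentration scales with instrument response.
        return sample_response / calibrator_response * calibrator_value

    # Every sample in every batch inherits the same systematic bias:
    for resp in (0.45, 0.81, 1.20):
        right = concentration(resp, 1.0, TRUE_CALIBRATOR)
        wrong = concentration(resp, 1.0, WRONG_CALIBRATOR)
        print(f"reported {wrong:.3f}, correct {right:.3f}, bias {wrong/right - 1:+.0%}")

Every reported value comes out 20% low, yet each individual number still looks perfectly plausible, which is exactly why it can go unnoticed for months.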
When labs have data problems that produce outlandish results, it is generally easier for them to recognize the problem. The insidious, difficult-to-recognize problems are those for which the improper results are within the realm of reasonableness.
I'm sorry I don't have empirical data that would be directly relevant to your inquiry, but I think it is a fascinating and important topic. Please let me know if I can be of any help. I'd love to read anything you publish on the subject!
Best regards,
Janine
Janine Arvizu
Certified Quality Auditor
161 Kuhn Dr.
Tijeras, NM 87059