
Random Thoughts

 

Absence of Evidence is Not Evidence of Absence

posted Feb 27, 2012, 7:43 AM by Donald Firesmith   [ updated Feb 27, 2012, 7:50 AM ]

One often hears the statement "Absence of evidence is not evidence of absence," especially in religious arguments about whether one or more gods exist. I beg to differ. An absence of evidence in favor of a proposition, especially after diligently searching for that evidence, is evidence that the proposition is false. For example, if particle physicists have a theory that predicts the existence of a new particle with a specific set of characteristics (e.g., mass, charge, spin), and careful experiments with particle accelerators and detectors designed to find the new particle come up empty, then the evidence mounts that the particle does not exist and that the theory is wrong. Although it doesn't sound as nice as the original, the correct statement is "Absence of evidence is not proof of absence." At its foundation, the universe is largely statistical, so an increasing absence of evidence implies a decreasing probability of truth; but no finite amount of negative evidence will prove (in the mathematical sense) that the probability is exactly zero, only that it approaches zero.
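To make the statistical point concrete, here is a minimal sketch of Bayesian updating in Python (the prior and detection probabilities are invented for illustration): each careful null experiment lowers the probability that the predicted particle exists, but no finite number of null results drives that probability exactly to zero.

```python
# Bayesian updating under repeated null results. Each experiment that
# should have detected the particle (if it existed) but did not makes
# the hypothesis less probable, without ever reaching exactly zero.
# All numbers below are illustrative, not real physics.

prior = 0.5      # initial probability that the particle exists
p_detect = 0.3   # P(detection in one experiment | particle exists)

posterior = prior
for n in range(1, 11):
    # P(null result | exists) = 1 - p_detect; P(null result | ~exists) = 1
    numerator = (1 - p_detect) * posterior
    posterior = numerator / (numerator + (1 - posterior))
    print(f"after {n} null result(s): P(exists) = {posterior:.6f}")

# The posterior shrinks geometrically toward zero but stays positive:
# absence of evidence is evidence of absence, just never proof of it.
```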

If our senses, extended by our technologies, provide absolutely no evidence that something exists, and there is no solid theory based on such evidence as to why that thing must both exist and yet evade detection, then one is on quite safe ground in stating that the something does not exist. There may be an infinitesimal probability that one is wrong, but I am not going to lose any sleep over being wrong under such circumstances.

By the way, this is the same fuzzy thinking that underlies the statement "You can't prove a negative." That statement is clearly false in mathematics (e.g., it is easy to prove that 0 does not equal 1) and is only true in reality if you demand absolute certainty to count as proof. Of course, no one demands absolute certainty in anything else, so "You can't prove a negative" is only true in a useless sense of the phrase. In fact, specifying and verifying negatives is a large part of what safety and security engineering is all about. Nothing is absolutely 100% safe or secure, but luckily in the real world nothing has to be.
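The mathematical half of that claim really is routine; as a minimal illustration, here is the 0 ≠ 1 example as a machine-checked proof in Lean 4:

```lean
-- Proving a negative: 0 does not equal 1 over the natural numbers.
-- `decide` evaluates this decidable proposition and certifies the result.
example : (0 : Nat) ≠ 1 := by decide
```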

Interesting Limitation on Monarchy

posted Dec 8, 2011, 10:06 AM by Donald Firesmith   [ updated Feb 27, 2012, 7:42 AM ]

I just learned that the King and Queen of Sweden (yes, the UK is not the only monarchy left in Europe) are prohibited by law from voting or making any political statements. I find it amazing how far the power of monarchs has fallen since the days of divine right. Yet as an American who strongly believes in democracy ("the worst form of government except all of the others") and detests the idea of monarchs, dictators, and assorted despots holding power and passing it on to their children, I find I am conflicted about this Swedish law. The King and Queen are figurehead leaders, and people can certainly have such figureheads if they want, but shouldn't everyone have the right to vote and to free speech?

Fighting the Previous Safety War

posted Dec 8, 2011, 9:43 AM by Donald Firesmith

Government safety regulations are typically based on the results of previous major accidents. These regulations often mandate specific architectural decisions, such as requiring nuclear power plants to have a certain number of hours of battery backup power and a specific number of diesel generators to supply additional power to keep the reactor core and spent-fuel pools from overheating if main power is lost. So far, so good.

Unfortunately, such regulations did not prevent the Fukushima meltdowns caused by the recent earthquake and resulting tsunami. Battery power ran out, the diesel generators were flooded, emergency external power supplies brought in by truck were destroyed in the ensuing hydrogen explosions, and the resulting radiation prevented other attempts to supply power in time to save the reactors from meltdown. Basing future safety decisions (and regulations) solely on past history is thus doomed to failure because it rests on a false assumption: that the future will be like the past. Recorded history is too short to capture very rare events, and things like global warming are creating a future that will be far different from the past. For example, a recent flood gave us pictures of a nuclear power plant surrounded by sandbags, looking like a tiny island in a big sea. The power plant was not watertight, and low-level electrical control boxes would probably have been within reach of the flood water if the sandbags had not held it back. Hundred-year floods are now happening every few years, and their magnitude will only increase as heat increases evaporation, which in turn increases precipitation and major storms.

It is not sufficient for companies merely to follow governmental regulations. A risk-based approach is needed in addition to mere compliance. Otherwise, we will continue to fight the previous safety wars and lose the coming ones.

Risks with Catastrophic Harm but Negligible Probability

posted Dec 8, 2011, 9:10 AM by Donald Firesmith   [ updated Dec 8, 2011, 10:07 AM ]

The amount of risk is calculated as the product of the amount of harm and the probability that the harm occurs. How do you deal with the risk associated with a system when the potential harm is catastrophic but its probability is very low? This is not an easy problem to solve, for several reasons. People have a very hard time understanding and estimating very low probabilities and are often off by one or more orders of magnitude. For example, can you easily wrap your mind around the probabilities 0.000001 and 0.00000001 and the difference between them? If your estimate of the probability is uncertain by one or two orders of magnitude, then your estimate of the risk will be off by the same factor, making the risk calculation relatively useless. Harm probabilities are especially difficult to estimate accurately and precisely when large amounts of software are involved. And finally, very low probabilities are very difficult and expensive to verify.
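A back-of-the-envelope calculation in Python (with invented numbers) shows how badly an order-of-magnitude error in the probability estimate distorts the computed risk:

```python
# risk = harm x probability. With catastrophic harm, an error of one or
# two orders of magnitude in the probability estimate changes the
# computed risk by the same factor. All figures are illustrative.

harm = 1e9  # cost of the catastrophic accident, in dollars

for p in (1e-6, 1e-7, 1e-8):
    print(f"P(harm) = {p:.0e}  ->  expected loss = ${harm * p:,.0f}")

# The three estimates differ only in exponents that people routinely
# confuse, yet the computed risk ranges from $1,000 down to $10, the
# difference between a real expense and a rounding error.
```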

So what is one to do? One can assume that if something bad can happen, it will, and therefore avoid the risk (e.g., by not performing the risky action, such as building the system that can cause catastrophic damage, no matter how "safe" it supposedly is). Or, if the system is to be built in spite of the risk, one can incorporate all of the safeguards one can afford. In that case, the question is not "How safe is safe enough?" but rather "How much safer can we make the system?" Either way, such high-harm, low-probability risks are currently very difficult to deal with. A better approach is needed.
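"Incorporate all of the safeguards one can afford" can itself be sketched as a small optimization problem. Here is a hedged Python example; the safeguards, costs, and risk-reduction factors are entirely hypothetical, and the greedy heuristic is a sketch rather than an optimal solution:

```python
import math

# Budgeted safeguard selection: buy the most risk reduction per dollar
# first. Names, costs, and reduction factors are all made up; each
# factor multiplies the residual probability of the accident scenario.

safeguards = [
    ("longer battery backup",         2.0, 0.70),
    ("off-site mobile power hookups", 3.0, 0.60),
    ("extra diesel generators",       5.0, 0.50),
    ("watertight control rooms",      8.0, 0.30),
]  # (name, cost in $M, residual-probability multiplier)

budget = 12.0
residual = 1.0

# Greedy order: largest log-reduction per dollar first.
safeguards.sort(key=lambda s: math.log(s[2]) / s[1])

for name, cost, factor in safeguards:
    if cost <= budget:
        budget -= cost
        residual *= factor
        print(f"bought {name}: residual x{factor}, ${budget:.1f}M left")

print(f"overall residual probability factor: {residual:.2f}")
```

With this budget the sketch buys the battery, mobile-power, and generator safeguards, cutting the residual probability to 21% of its original value. The question it answers is "how much safer can we get for the money," not "how safe is safe enough."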
