Computer Reliability
(Chapter 8)
Computer reliability is at the heart of the crisis in Fail Safe: a computer glitch triggers a chain of procedural failures that ultimately ends in nuclear conflict and the deaths of millions of people.
State of Computer Reliability in 1964
Computer hardware and software were still relatively new when the film was released in 1964. The transistor had been invented less than 20 years earlier and had been in widespread use for less than a decade [1]. Moreover, the idea of a systematic, disciplined approach to creating software was not popularized until after the film came out [2]. From both a hardware and a software standpoint, computers at the time were considerably less reliable than they are today.
Because computers were costly, unreliable, and still a recent invention, most of the public was unfamiliar with them. This unfamiliarity heightened public fear of using computers in high-risk scenarios. The idea that humanity's fate could rest in the hands of these mysterious and unreliable machines was potent at the time, and it remains so today.
Loss of Control
Although modern computers are far more reliable and powerful than they were 60 years ago, the loss of human control remains a frightening prospect. One of the most important moments in the film is the realization that, no matter what they try, the characters cannot recall the bombers. A technical glitch has set in motion a series of events that will lead to the deaths of millions, and they can do nothing but watch it unfold. As automation becomes more prevalent, it is more important than ever that computing systems be reliable and robust against these kinds of errors.
Humans must design computer systems that are not only performant and effective, but also trustworthy and subject to meaningful human control.
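To make this concrete, here is a minimal sketch of a human-in-the-loop design, purely illustrative and not drawn from the film or any real command system, in which every irreversible action is gated behind explicit, revocable authorization and the system fails safe on any ambiguity (the class name and interface are assumptions for this example):

    import threading

    class AuthorizedAction:
        """Gate an irreversible action behind explicit, revocable human approval.

        Hypothetical sketch: the name and interface are illustrative only.
        """

        def __init__(self, action):
            self._action = action             # the irreversible operation
            self._approved = threading.Event()
            self._aborted = threading.Event()

        def approve(self):
            # A human operator explicitly authorizes the action.
            self._approved.set()

        def abort(self):
            # A human operator revokes authorization (e.g., a recall order).
            self._aborted.set()

        def execute(self, timeout):
            # Run the action only if approved in time and never aborted.
            # On timeout, abort, or any ambiguity, do nothing: fail safe.
            if not self._approved.wait(timeout):
                return "no approval received: action not taken"
            if self._aborted.is_set():
                return "aborted by operator: action not taken"
            return self._action()

    # The recall order arrives before execution, so nothing happens:
    launch = AuthorizedAction(lambda: "launched")
    launch.approve()
    launch.abort()
    print(launch.execute(timeout=1.0))   # aborted by operator: action not taken

The design choice worth noting is the default: when authorization is missing or contradictory, the system does nothing, rather than proceeding the way the bombers in the film do.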
Ethical Analysis of Robust Computing System Design
An important aspect of Kantianism is that people cannot be treated merely as a means to an end; they have intrinsic value and worth that must be respected. One could argue that producing unreliable software and computing systems treats their users as a means to an end, that end being profit or convenience. And if the rule "it is acceptable to build unreliable computing systems" were universalized, the result would be dangerous: it would create an environment in which no computing system could be trusted.

Millions of computing systems are used around the world every day to help people live more comfortably and do the things they need to do. If these systems failed, the result could be widespread suffering and harm. Machines that keep people alive, such as life support or dialysis equipment, could fail; transportation could break down, costing people their jobs or their access to healthcare when they need it. Since failure could cause so much harm, in the Act Utilitarian view it is imperative that these systems be designed to minimize failure, or to be robust against it; one standard robustness technique is sketched below.
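One common engineering response to that imperative, offered here as a generic sketch rather than a description of any particular system, is redundancy with majority voting (triple modular redundancy): run independent implementations of the same computation and act only on an answer that a majority agrees on.

    from collections import Counter

    def majority_vote(replicas, *args):
        # Run the same computation on independent replicas and return the
        # majority answer, tolerating a minority of faulty results.
        # `replicas` is a hypothetical list of independently built functions.
        results = Counter(f(*args) for f in replicas)
        answer, count = results.most_common(1)[0]
        if count <= len(replicas) // 2:
            # No majority: fail safe rather than guess.
            raise RuntimeError("replicas disagree; refusing to act")
        return answer

    # Example: one faulty replica is outvoted by two correct ones.
    replicas = [lambda x: x + 1, lambda x: x + 1, lambda x: x + 2]
    assert majority_vote(replicas, 41) == 42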
Under both schools of thought, then, software developers ought to build reliable software.
References
[1] Ross, I. M. (1998). The invention of the transistor. Proceedings of the IEEE, 86(1), 7–28. https://doi.org/10.1109/5.658752
[2] Galler, B. A. (1969). ACM president’s letter: NATO and software engineering? Communications of the ACM, 12(6), 301. https://doi.org/10.1145/363011.363013