Automation and Globalization
(Chapter 10)
Automation is a major underlying theme in the film. The delegation of high-risk tasks to computers is ultimately what allowed a small computer glitch to have devastating consequences.
Automation in 1964 vs. Today
Automation in 1964 looked very different than it does today, due in large part to the later rise of machine learning and artificial intelligence. At that time, automation was entirely rules-based [1]: machines could only complete tasks they had been specifically designed to complete, following strict pre-programmed rules and instructions. Today, with the rise of machine learning, machines can learn to complete tasks they were not explicitly programmed for, which makes them far more capable at automation. A machine learning model can be shown many thousands of labelled examples of a task and, over time, learn to complete that task with high accuracy.
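To make the contrast concrete, the following is a minimal Python sketch of our own (it is not drawn from the film or from the cited sources): the first function's behaviour is fixed forever by hand-written rules, while the second derives its behaviour from labelled examples.

```python
# Illustrative sketch: 1964-style rules-based automation vs. a toy learned model.

# --- Rules-based (1964-style): behaviour is fixed by its designers ---
def rules_based_altitude_alert(altitude_ft: float) -> bool:
    """Alert if and only if a hand-written threshold is crossed."""
    return altitude_ft > 40_000  # the rule is hard-coded; it never adapts


# --- Learned (modern-style): behaviour is fit to labelled examples ---
def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    """Learn an alert threshold from (altitude, should_alert) pairs
    by placing it midway between the two labelled classes."""
    alert = [alt for alt, label in examples if label]
    safe = [alt for alt, label in examples if not label]
    return (min(alert) + max(safe)) / 2


training_data = [(30_000, False), (35_000, False), (42_000, True), (50_000, True)]
learned_threshold = fit_threshold(training_data)  # 38,500 ft for this data


def learned_altitude_alert(altitude_ft: float) -> bool:
    return altitude_ft > learned_threshold


print(rules_based_altitude_alert(41_000))  # True: the designers said so
print(learned_altitude_alert(41_000))      # True: the data said so
```

The learned version adjusts its threshold whenever it is retrained on new examples, which is precisely what the rules-based systems of the 1960s could not do.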
In the film, computers could likewise only complete tasks they had been designed for, yet even then their use in the military and in industry was expanding rapidly [2]. Rather than relying on people on the ground to track air traffic, radar could monitor flying objects automatically at long distances.
Human Oversight in Automation
Even in 1964, there were major fears that the rise of computers and automation would be disastrous for the general human population [3]. Once again, at the core of these fears is a loss of human control and oversight. The film Fail Safe explores these concerns by demonstrating a catastrophic failure of oversight, where an error in a computer system effectively locks its human creators out once it is too late to recall the planes. Today, as we develop increasingly complex systems like autonomous weapons, AI-driven financial trading, and self-driving vehicles, we face the exact same dilemma presented in Fail Safe. How can we preserve human oversight in systems designed to complete tasks better than a human can?
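One common design answer to this question is a human-in-the-loop gate: the automated system may recommend an action, but it cannot execute that action without explicit human confirmation. The sketch below is a hypothetical illustration of this pattern; the names and thresholds are invented for the example and do not describe any real system.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str
    confidence: float  # the automated system's own certainty estimate


def human_in_the_loop(rec: Recommendation, operator_approves) -> str:
    """Execute an automated recommendation only with explicit human consent.
    `operator_approves` is a callable, so a person always stays in the loop."""
    if rec.confidence < 0.99:
        return "HOLD: confidence too low even to ask an operator"
    if not operator_approves(rec):
        return f"ABORT: operator rejected '{rec.action}'"
    return f"EXECUTE: '{rec.action}' confirmed by a human"


approve = lambda rec: True   # an operator who rubber-stamps everything
deny = lambda rec: False     # an operator who demands independent verification

print(human_in_the_loop(Recommendation("recall bombers", 0.999), approve))
print(human_in_the_loop(Recommendation("launch strike", 0.999), deny))
```

The design choice that matters here is that the system cannot call itself: execution is gated on a human decision, exactly the safeguard that fails in the film once the planes pass their fail-safe points.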
Military Automation
Military automation is becoming an issue for modern militaries, as drone warfare grows and artificial intelligence is integrated into military systems across the world. This raises an unsettling prospect: artificial intelligence making choices within the military, with decisions ultimately made strictly by machines. One idea that has come to light is the "doomsday system," a theorized military plan under which a country that is attacked retaliates with nuclear weapons automatically, ensuring mutual destruction. Fail Safe shows exactly how risky this is: accidental errors, technical failures, and other problems that arise in automated systems could lead to automated nuclear launches. And while in the film only a single bomber successfully dropped its bombs, in our modern world intercontinental nuclear warheads stand ready to launch from dozens of points around the world, making any nuclear conflict far more devastating.
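The core risk can also be made quantitative with a simple back-of-the-envelope model. The figures below are illustrative assumptions, not real data, but they show how even a tiny per-check false-alarm probability compounds when an automated system runs continuously with no human filter.

```python
# Illustrative only: the rates below are assumptions, not real figures.
# If an automated early-warning system evaluates `checks_per_day` sensor
# readings, each with false-alarm probability `p`, then the chance of at
# least one false alarm over `years` is 1 - (1 - p) ** total_checks.

p = 1e-7              # assumed per-check false-alarm probability
checks_per_day = 1_000
years = 10
total_checks = checks_per_day * 365 * years  # 3,650,000 checks

prob_at_least_one = 1 - (1 - p) ** total_checks
print(f"P(at least one false alarm in {years} years) = {prob_at_least_one:.1%}")
# With these assumptions: about 30.6%. With a human reviewing every alarm,
# a false alarm is a phone call; with full automation, it could be a launch.
```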
Ethical Analysis of Military Automation
Immanuel Kant emphasizes that in order to make moral decisions, the agent making the decision must have "rational agency" [4]: the agent must act intentionally, based on specific reasoning. Computer scientists may argue that machines do have rational agency, but a philosopher would likely argue that they do not, since machines lack "full rationality" [5].
Under Act Utilitarianism, calculating the utility of any given scenario is extremely complex. Can machines really weigh all the variables needed to make an informed utilitarian judgement in real time, especially in settings as complicated as war and combat? Likely not. It is therefore important, once again, to keep humans in the loop, as they can make these utilitarian judgements.
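A toy expected-utility computation makes the difficulty concrete. The scenario and the utility numbers below are invented for illustration; the point is only that each additional binary uncertainty doubles the outcome space an act-utilitarian calculation must enumerate.

```python
from itertools import product

# Hypothetical illustration: each binary uncertainty doubles the number of
# outcome branches an act-utilitarian calculation has to evaluate and value.
uncertainties = ["intel_correct", "weather_clear", "civilians_present",
                 "enemy_retaliates", "comms_intact"]


def utility(outcome: dict[str, bool]) -> float:
    """Made-up utility function: crude penalties for the bad branches."""
    u = 0.0
    u -= 100.0 if outcome["civilians_present"] else 0.0
    u -= 50.0 if outcome["enemy_retaliates"] else 0.0
    u += 10.0 if outcome["intel_correct"] else -20.0
    return u


branches = [dict(zip(uncertainties, values))
            for values in product([True, False], repeat=len(uncertainties))]
expected = sum(utility(b) for b in branches) / len(branches)

print(f"{len(branches)} branches for only {len(uncertainties)} uncertainties")
print(f"Expected utility (uniform weights): {expected:.1f}")
# 5 uncertainties -> 32 branches; 50 -> roughly 10**15. Real combat involves
# far more variables, with unknown probabilities and contested utilities.
```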
In both schools of thought, human oversight of automation is necessary.
References
[1] Buchanan, B. G. (2005). A (Very) Brief History of Artificial Intelligence. AI Magazine, 26(4), 53–53. https://doi.org/10.1609/aimag.v26i4.1848
[2] Garner, R., & Dill, R. (2010). The Legendary IBM 1401 Data Processing System. IEEE Solid-State Circuits Magazine, 2(1), 28–39. https://doi.org/10.1109/MSSC.2009.935295
[3] Bassett, C., & Roberts, B. (2023). Chapter 7: Automation anxiety: a critical history - the apparently odd recurrence of debates about computation, AI and labour. https://www.elgaronline.com/edcollchap/book/9781803928562/book-part-9781803928562-12.xml
[4] Johnson, R., & Cureton, A. (2025). Kant’s Moral Philosophy. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Winter 2025). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2025/entries/kant-moral/
[5] Marwala, T. (2018). The limit of artificial intelligence: Can machines be rational? (No. arXiv:1812.06510). arXiv. https://doi.org/10.48550/arXiv.1812.06510