Machine Learning (ML) based Artificial Intelligences (AIs) have become increasingly prevalent in nearly all areas of life. They now control our thermostats, drive our cars, produce our electronics, recognize our faces, evaluate our resumes, predict our purchases, and much more. But what happens when such AIs fail, and how can we learn from these failures to prevent similar incidents in the future? We provide a database of AI incidents which not only records all available data about each incident, but also postulates its root causes so that programmers can learn from them and thereby prevent such incidents from recurring.
Artificial Intelligence: AI is a broad term that encompasses all computer programs that seek to emulate sentient behavior.
Machine Learning: ML AIs emulate such behavior by learning from training data; based on the connections formed during training, they are able to analyze similar real-world data and draw correct conclusions in the future.
Non-Machine Learning: Non-ML AIs emulate such behavior by following a strict, predetermined set of rules. In this sense, they are no different from traditional computer programs, except that they try to emulate human behavior, such as playing chess, rather than simply drawing the board.
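The distinction between the two definitions above can be sketched with a toy task: labeling a temperature reading as "hot." This is our own illustrative example, not drawn from the database; the threshold values and the midpoint "learning" rule are assumptions chosen for brevity.

```python
def rule_based_is_hot(temp_c):
    """Non-ML AI: a strict, predetermined rule chosen by the programmer."""
    return temp_c >= 30.0  # hypothetical hand-picked threshold

def train_threshold(examples):
    """ML AI: learn the decision boundary from labeled training data.

    `examples` is a list of (temperature, is_hot) pairs; the learned
    threshold is the midpoint between the warmest "not hot" example
    and the coolest "hot" example.
    """
    hot = [t for t, label in examples if label]
    cold = [t for t, label in examples if not label]
    return (max(cold) + min(hot)) / 2.0

# Training data stands in for real-world observations the AI learns from.
training_data = [(10.0, False), (22.0, False), (31.0, True), (40.0, True)]
learned_threshold = train_threshold(training_data)  # midpoint of 22.0 and 31.0

def learned_is_hot(temp_c):
    """The trained model applied to new, similar data."""
    return temp_c >= learned_threshold
```

The rule-based program behaves identically forever, while the learned program's behavior depends entirely on its training data, which is precisely why weaknesses in that data become sources of incidents.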
In order for our database to answer both "how did this AI incident occur?" and "why did this AI incident occur?", we have broken down the Causes portion of our taxonomy into Causes of Incident and Sources of Weakness.
Causes of Incident: Causes of Incident (CIs) answer the question "how did this AI incident occur?" They are not overly technical, instead explaining the most direct cause of the incident itself.
Sources of Weakness: In contrast, Sources of Weakness (SWs) answer the question "why did this AI incident occur?" They report, or postulate when necessary, the underlying weaknesses in the AI's algorithm and how those weaknesses came about.
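One way to picture how the two taxonomy parts pair up in a single database entry is a minimal record sketch. The field names and the sample incident below are our own hypothetical illustration, not the database's actual schema or a real entry.

```python
from dataclasses import dataclass

@dataclass
class IncidentRecord:
    """Hypothetical shape of one AI-incident database entry."""
    description: str        # all available data about the incident
    cause_of_incident: str  # CI: answers "how did this incident occur?"
    source_of_weakness: str # SW: answers "why did this incident occur?"

# Illustrative (invented) entry showing how a CI stays non-technical
# while the paired SW points at the underlying algorithmic weakness.
example = IncidentRecord(
    description="A face-recognition system misidentified a person.",
    cause_of_incident="The deployed model matched the wrong face.",
    source_of_weakness="Training data under-represented some groups.",
)
```

The design point is that every record carries both answers: the CI gives readers the direct "how," and the SW gives programmers the root-cause "why" they can act on.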