Ethical Implications of Machine Decisions
During a discussion session with fellow students in my MBA program, I presented an example of an AI ethics dilemma. Imagine that you are taking a self-driving car home from work. Everything is going smoothly until a young boy riding his bike suddenly appears in front of the car.
There are only two immediate actions that the self-driving car can take, each yielding a completely different outcome:
1) Abruptly swerve to the left and hit a wall, killing you, the passenger; or
2) Brake hard but still hit the bicycle, killing the young boy.
To my surprise, the most common answer given to me was neither 1) nor 2).
The most common answers (or should I say, comments) were:
"That is a coding problem";
"The code has to consider that situation so nobody is hurt";
"That would be bad programming".
Accidents happen, and machines are not exempt. The fictional situation above is the conundrum of AI ethics. How would society react to a decision made by a 'machine'? Who gets to choose who lives and who dies? The machine is instructed to perform in a certain way, but somewhere a decision is made, either by an algorithm or by a defined set of rules. When a human driver is involved in such an event, he or she may be held morally responsible to some extent. But what happens when a machine is the vehicle's driver? Will the engineers who designed the algorithm be held accountable for prioritizing the life of the self-driving car's passenger over the boy's?
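To make that point concrete, here is a deliberately simplified, hypothetical sketch of what such a "defined set of rules" might look like in code. The scenario class, the `choose_action` function, and the risk values are all illustrative assumptions, not how any real autonomous-driving system is built; the sketch only shows that some priority must be written down by someone.

```python
# Hypothetical illustration only: a toy rule set for the dilemma above.
# No real self-driving stack works this way; the point is that *some*
# priority must be encoded, explicitly or implicitly, by its designers.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    SWERVE_INTO_WALL = "swerve"   # likely fatal for the passenger
    BRAKE_AND_HIT = "brake"       # likely fatal for the cyclist


@dataclass
class Scenario:
    passenger_risk_if_swerve: float  # probability the passenger dies if the car swerves
    cyclist_risk_if_brake: float     # probability the cyclist dies if the car brakes


def choose_action(s: Scenario) -> Action:
    # The "defined set of rules": whoever writes this comparison is, in
    # effect, deciding whose life is prioritized when both options are bad.
    if s.passenger_risk_if_swerve < s.cyclist_risk_if_brake:
        return Action.SWERVE_INTO_WALL
    return Action.BRAKE_AND_HIT


if __name__ == "__main__":
    # The dilemma in the text: both outcomes are assumed to be fatal.
    dilemma = Scenario(passenger_risk_if_swerve=1.0, cyclist_risk_if_brake=1.0)
    print(choose_action(dilemma))  # even the tie-break embeds a moral choice
```

Even in this toy version, the comparison operator and its tie-break decide who is put at risk. "Good programming" cannot make the choice disappear; it only determines who makes it and where it lives.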
Dismissing the dilemma with answers like the ones my fellow students gave does not help the development of AI. These are scenarios that should be discussed and debated by all stakeholders, because they have a profound impact on how we design ML/AI algorithms.