Development of Autonomous Systems Based on AI:
How to Be Ready to Overcome the Ethical Issues
G. DE LA ROCHE Jan 2018
Algorithms
First, we need to define what an algorithm is. An algorithm takes some data as input and executes a list of steps in order to compute a result. In the computer world, algorithms are everywhere: they are the basis of all software. An algorithm is implemented on a computer using a programming language (for example C++). With the development of ever more powerful computers and programming languages, algorithms can be executed in shorter and shorter times, with more and more data as input, so the tasks they perform are increasingly complex.
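As a simple illustration (a generic sketch, not taken from any specific system; Python is used here only for readability), here is an algorithm that takes a list of numbers as input, executes its steps, and computes a result:

    # A minimal example of an algorithm: data in, a fixed list of steps,
    # a computed result out.
    def average(numbers):
        total = 0
        for n in numbers:                 # step 1: accumulate the inputs
            total += n
        return total / len(numbers)       # step 2: divide by the count

    print(average([4, 8, 15, 16, 23, 42]))   # prints 18.0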
Artificial Intelligence
The foundations of Artificial Intelligence (AI) were laid by Alan Turing in 1950, and the term itself was coined by John McCarthy in 1956. The idea is to use algorithms to make the machine more and more “intelligent”, i.e. to make it behave, as much as possible, as a human would. However, due to the amount of data to be processed, computers were not powerful enough at that time. The real development of Artificial Intelligence happened at the end of the 1990s, and it is still in its growth phase. One example is the AlphaGo project from Google (2016), where a world champion of the game of Go was beaten by a machine for the first time.
Machine learning
One of the main techniques used by AI is called machine learning. Machine learning is a technique where the computer can “learn” by itself by analyzing the data it receives as input. The programmer first has to train the algorithm. Once the algorithm is well trained (i.e., well tuned), it is able to recognize patterns. Let us take the example of image recognition, which is widely used in AI. The programmer gives a large number of pictures to the algorithm, some with human faces inside, some without. During the training the pictures are tagged, so that the algorithm is told “this one is a face” or “this one is not a face”. By analyzing this data, the computer can find patterns and characteristics inside the sets of images. Then, when a new image comes as input, the computer is able to analyze its characteristics and decide whether it fits better the “face” or the “not a face” category. Of course, the result is never 100% accurate, but machine learning techniques are becoming more and more efficient because they can analyze more and more data. This is, for example, the principle behind face recognition on Facebook.
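To make this training/prediction workflow concrete, here is a minimal sketch using the scikit-learn library. The “images” and labels are stand-ins (random vectors with made-up face/no-face tags), so this only illustrates the principle, not a real face recognizer:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Pretend each image has been flattened into a vector of 64 pixel values.
    n_images, n_pixels = 200, 64
    X = rng.normal(size=(n_images, n_pixels))     # the input "pictures"
    y = (X[:, :8].sum(axis=1) > 0).astype(int)    # tags: 1 = "face", 0 = "no face"

    # Training phase: the tagged pictures are shown to the algorithm.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # Prediction phase: a new image arrives and the model decides which
    # label fits it better. The accuracy is below 100%, as noted above.
    new_image = rng.normal(size=(1, n_pixels))
    print("face?", model.predict(new_image)[0])
    print("test accuracy:", model.score(X_test, y_test))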
Big data
The more data is used to train a machine learning algorithm, the more accurate it will be. Therefore, AI requires a huge amount of data as input. This data usually has to be stored on servers or in the cloud. The science of collecting, storing and analyzing this data is called “big data”, a recent field of computer science. Big data is an important part of AI, and most companies now have their own data scientists. At Renault in Sophia Antipolis, for example, 11 data scientists were hired over the last year.
Deep learning
The most recent machine learning techniques, which rely on big data, are called deep learning. To keep it simple, let us consider deep learning as an improvement of machine learning that stacks many layers of artificial neurons and uses much more data, and is therefore much more accurate.
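Continuing the same toy face/no-face example, here is a minimal sketch of a “deep” model with several stacked hidden layers, again with stand-in data. Real deep learning systems would use dedicated frameworks (e.g., TensorFlow or PyTorch) and millions of images; this only illustrates the idea:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))                 # 500 toy "images"
    y = (X[:, :8].sum(axis=1) > 0).astype(int)     # toy face / no-face tags

    # Three stacked hidden layers are what makes the network "deep";
    # with more rows in X (more data), the same model becomes more accurate.
    deep_model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500)
    deep_model.fit(X, y)
    print("training accuracy:", deep_model.score(X, y))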
Summary
In general, Artificial Intelligence is the way a computer makes decisions, using big data as input and deep learning algorithms for the decision making.
Machines able to take more and more “intelligent” decisions are increasingly a reality, and since the 2000s AI has been commonly used in many domains. Nowadays the major players in AI are global companies like Amazon, Google, etc., but also many industries (such as the car industry) and a huge number of startups.
Artificial Intelligence is growing faster and faster. Since 2010, it has grown at a compound annual growth rate of almost 60% (Jacobsen, 2018). According to the same source, based on the number of research papers, the top five countries leading innovation in AI are China, the USA, Japan, the UK and Germany.
Moreover, it should be noted that many citizens are afraid of AI, which looks like science fiction to them. Many science fiction movies and novels discuss this topic and the consequences of machines taking their revenge on humans. That is why, in many countries, governments are starting to seriously discuss AI, so that it becomes an opportunity for the country and not a threat.
AI raises many challenges from an ethical point of view. Knowing these challenges is the key to ensuring that AI developers focus their work in the best direction.
Challenges with economy
As explained in (Vottero, 2016), AI is a major area for the future economy. The market and the investments are huge. Investment in AI is increasing and is among the highest of all high-tech fields. AI will be key in the future to improve productivity, to improve customer satisfaction, and to develop new services and products.
However, let us not forget that the impact AI will have on the economy can have important negative ethical implications. Indeed, the tendency to replace many jobs with robots will increase with AI, leading to the disappearance of many jobs. The traditional economy and traditional jobs will be strongly impacted and more and more replaced by robots. This is an ethical question which should not be forgotten: while many people will benefit from AI, others (such as less educated or older people) will lose their jobs and may miss opportunities or fail to adapt to this changing economy.
Challenges with private life protection
AI requires huge amounts of data to be used and stored. The collection and protection of all this data is challenging. There are many examples these days of hackers gaining access to very confidential data. Whatever new encryption technologies or advanced cyber-security techniques are used, hackers will always find ways to access the data. In fact, data security can really be considered a big challenge, because there will never be 100% sure protection of this data. This is a major problem with strong ethical implications. For example, what will happen when hackers access private health data, or offer to sell confidential company data to competitors?
Please note that in this report we are more interested in the algorithms than in the security of the data (a full report could be written on cyber-security topics alone, and we believe that topic is already better covered, not only for AI but in all fields of computer science using big data).
Challenges with autonomous systems
With AI, the notion of an autonomous agent is important. The autonomous agent, which can be a machine, a robot, a piece of software, a car, etc., makes decisions on its own. The degree of autonomy an agent can reach depends on the complexity of the environment and of the tasks. The complexity of the environment depends on the amount of data to analyze, whereas the complexity of the tasks depends on the decision process and the algorithms.
In most AI systems, full autonomy is not yet possible, so a human has to remain in the loop. In other words, the autonomous agent makes decisions with the help of a human. For example, with the drone weapons used by the army, the human has to control some tasks, like take-off, while other tasks, like following a target, are done autonomously.
Hence, with autonomous systems (including autonomous vehicles), the machine and the human will always share the decision process. One important challenge is deciding how to share this decision process, considering that machines and humans have different characteristics.
Both the machine and the human have strengths and weaknesses. For example, the machine can act quickly (it processes data faster than a human brain) but it has no emotions at all. Also, a stressed human can make big mistakes, which is not the case for a machine, which will always act the same way.
Having autonomous systems share the decision process with humans is called human-machine interaction, and it implies ethical issues. As explained in (Chatila, 2017), the most common ethical implications are:
- Communication problems
- Automation bias due to excessive confidence in the machine
- Surprises due to ignorance of the machine state
- Robot or human responsibility.
Challenges with ethics theory
It is very difficult to decide which ethics theory an AI-based system should use. For example, we will see in the next chapter the trolley problem, which illustrates how difficult the choice can be. Different ethics theories have been developed in the past (Chatila, 2017), including the following (a toy sketch after the list shows how each theory can lead to a different decision):
- Deontological ethics: this theory from Kant (1786) states that moral rules must be respected whatever the situation. Therefore, all decisions must be based on moral rules such as “do not kill” or “always tell the truth”.
- Consequentialism: this theory from Jeremy Bentham (1789) and John Stuart Mill (1861) states that decisions should be judged by their moral consequences. For example, “kill one person instead of five”.
- Situation ethics: this theory, also called casuistry, is based on pragmatism. All decisions depend on the given situation and must adapt to the context.
- Rawlsian theory: this theory is based on justice and law. In this case the system does not care about the consequences; the priority is to follow the rules.
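As announced above, the following toy sketch (entirely hypothetical, not taken from the cited sources) shows how these theories could translate into different decision functions for the same dilemma, e.g., an autonomous vehicle choosing between two maneuvers that each harm a different number of people:

    # A toy comparison of the ethics theories above as decision functions
    # over the same set of options.

    def deontological(options, forbidden=("harm_human",)):
        # Deontological ethics: never pick an action that violates a moral
        # rule, whatever the consequences; refuse if every option does.
        allowed = [o for o in options
                   if not any(r in o["violates"] for r in forbidden)]
        return allowed[0] if allowed else None

    def consequentialist(options):
        # Consequentialism: pick the option with the least total harm.
        return min(options, key=lambda o: o["harmed"])

    def rule_based(options):
        # Rawlsian-style sketch: follow the (traffic) rules regardless
        # of the outcome.
        legal = [o for o in options if o["legal"]]
        return legal[0] if legal else None

    # Two maneuvers for an autonomous vehicle: both harm someone.
    options = [
        {"name": "swerve", "harmed": 1, "legal": False, "violates": ["harm_human"]},
        {"name": "stay",   "harmed": 5, "legal": True,  "violates": ["harm_human"]},
    ]
    print(consequentialist(options)["name"])  # "swerve": minimizes harm
    print(rule_based(options)["name"])        # "stay": follows the rules
    print(deontological(options))             # None: every option breaks "do not kill"

The point is that the three functions return three different answers for the very same situation, which is exactly why choosing the ethics theory is so difficult.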
As this short list shows, deciding which kind of ethics AI should use is very difficult: it involves philosophy, personal opinions, religion, etc.
Challenges with responsibility
One other challenge with AI systems is to decide, when a problem occurs, who is responsible for it. The list of potentially responsible persons could be, for example:
- The researchers, i.e., the R&D people, who could be responsible for developing the idea and this way of making decisions.
- The manufacturer, i.e., the person who made the final product and who could have made mistakes or errors in the decision process.
- The seller, who puts on the market a product with ethical issues, which will be used by a huge number of people.
- The final user, who may use the product incorrectly.
- The product itself: since a robot is sometimes considered “intelligent”, it could be held responsible, as it is the one that made the decision.
In fact, it is much more complicated. Of course, a machine alone cannot be considered fully responsible. And of course, the seller of a product cannot be considered the only responsible person. In practice, responsibility is shared among all the players, and it is very complex to know how it is shared. It is also very challenging for the different players to do everything they can to ensure they will not be held as the main responsible party (which is usually what each player wants).
Challenges with social relations
Using robots that behave more and more like humans can have a strong negative impact on society. For example, in Japan a lot of effort is put into developing robots that not only behave but also look like humans. Making robots as human-like as possible is called biomimetics. Robots could become like perfect humans, and humans might prefer to interact with their robots instead of interacting with other humans. This can have very strong ethical implications, because the frontier between human and machine could disappear.
The CERNA (Devillers, 2017) has issued alerts regarding this issue. Indeed, AI could affect human relationships, since humans may prefer to stay with, communicate with, interact with, and why not marry, their robots. Robots can also have strong implications for the development of children and young people who spend too much time with them.
In (Yang, 2018), the dangers of robotics are explained, one of them being the overuse of AI. This article suggests that we should never forget that there will always be a difference between humans and robots. Using too much AI can lead to a poorly understood confrontation between “over-intelligent” machines and “under-intelligent” humans.
Challenges with human integrity
AI can be used not only to replace humans but also to enhance them. There is a lot of R&D these days on augmented humans, where robotic parts are added to the human body to make it stronger or more efficient. Of course, augmented-human AI can be very positive, for example to facilitate the life of people with a handicap. But modifying a normal body to improve it can also be seen as very negative, since we are playing with human integrity. This goes back to the question of what a human is and whether we have the right to modify it, which is of course a major ethical problem. Please note that if augmented humans seem to be science fiction, this is not the case at all. For example, the army already has working products with augmented parts to be added to soldiers to make them stronger. In fact, the R&D investments in AI for the army are huge. For example, there is a lot of research on AI for autonomous weapons: this kind of weapon can decide by itself which targets to reach. The Villani report (Villani, 2018) contains a full section about the important concerns related to autonomous weapons.
References

Anderson, M., & Anderson, S. L. (Eds.) (2011). Machine Ethics. Cambridge University Press.
Asimov, I. (1950). “Runaround”. In I, Robot (hardcover). New York City: The Isaac Asimov Collection ed.
Chatila, R. (2017). Questionnements éthiques sur la robotique et l’intelligence artificielle. Journée « Philosophie des sciences et intelligence artificielle ».
Devillers, L. (2017). Éthique de la recherche en apprentissage machine. Paris: Allistene.
Dick, P. K. (1963). The Game-Players of Titan. Ace Books.
Fleetwood, J. (2017). Public Health, Ethics, and Autonomous Vehicles. American Journal of Public Health, Vol 107, 532–537.
Foot, P. (1967). The Problem of Abortion and the Doctrine of the Double Effect. Oxford Review, No. 5, 1967; reprinted in Virtues and Vices.
Goodall, N. J. (2014). Ethical Decision Making During Automated Vehicle Crashes. Transportation Research Record: Journal of the Transportation Research Board of the National Academies, Washington D.C., No. 2424, 58–65.
Goodall, N. J. (2016). Can You Program Ethics Into a Self-Driving Car? IEEE Spectrum, Issue on Transportation.
Goodall, N. J. (2017). From Trolleys to Risk: Models for Ethical Autonomous Driving. American Journal of Public Health, 107(4), 496.
Jacobsen, B. (2018). 5 Countries Leading the Way in AI. Futures Platform, 4.
Lavergnée, T. de (2015). La voiture intelligente et connectée. Menace et opportunités pour les acteurs traditionnels de l’automobile. Étude PRECEPTA, Feb 2015.
Lavergnée, T. de (2017). Le véhicule autonome et connecté. Comment se préparer efficacement aux mutations stratégiques à venir ? Étude PRECEPTA, Jul 2017.
Marshall, A. (2018). The Lose-Lose Ethics of Testing Self-Driving Cars in Public. Wired.com, Transportation, March 2018.
Ogien, R. (2011). L’influence de l’odeur des croissants chauds sur la bonté humaine. Grasset.
Perrin, J. (2017). Pour une Politique Éthique du Véhicule Autonome. World Forum for a Responsible Economy, Lille.
Prakken, H. (2017). On Making Autonomous Vehicles Respect Traffic Law: a Case Study for Dutch Law. ICAIL ’17, London, United Kingdom.
Shariff, A., Bonnefon, J.-F., et al. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, Vol 1, Oct 2017, 694–696.
Villani, C., et al. (2018). Donner un sens à l’intelligence artificielle. Rapport de mission parlementaire, March 2018, 235.
Vottero, F., et al. (2016). Le marché de l’intelligence artificielle. Perspectives de croissance, forces en présence et opportunités de l’écosystème français de l’IA d’ici 2020. Rapport Xerfi-DGT, Dec 2016.
Yang, G.-Z. (2018). The grand challenges of Science Robotics. Science Robotics, Vol 3, Jan 2018, 14.