2018 Schedule
Lecture Report
Arisa Ema (Part-time lecturer, Graduate School of Arts and Sciences / Specially Appointed Lecturer, National Institute for Policy Studies)
Yasuo Kuniyoshi (Professor, Graduate School of Information Science and Technology / Director, Research Center for Next Generation Intelligence)
Tadashi Sakura (Professor, Interfaculty Initiative in Information Studies)
Hideaki Shiroyama (Professor, Graduate School of Public Policy)
The first lecture was a roundtable discussion that included all of the course’s teachers. First, Prof. Ema explained that the purpose of the class was to hold discussions between people from different fields in order to answer the question “what kind of society do we want, and what can technology and people do to achieve it?” (rather than taking the more passive approach of asking “how will technology change our society?”). To illustrate the necessity of having various perspectives and to encourage discussion across fields, she described an unfortunate incident that occurred overseas, in which a tourist who used a map application to search for the shortest route to his destination was led through an unsafe area that locals knew to avoid. This shows that even familiar technology can create unexpected problems in some contexts, which cannot be solved by engineers alone.
In response to this, the first discussion centered on engineers’ responsibility for introducing measures to protect against foreseeable accidents. However, as not all events are foreseeable, we also need to think more broadly about the ideal applications of technology within society. This “ideal” is something that should be conceived by the general public and realized through relationships with experts. Yet the general public comprises many kinds of people. For example, the behavioral patterns of people in the information field are completely different from those in the medical field, and even within the information field, data owners and data analysts often differ significantly. What, then, will happen when these different actors interact and their effects spread out on a large scale?
Currently, AI technology is being introduced into society. Dr. Kuniyoshi pointed out that this is the most difficult phase both technologically and socially, as AI is “half-baked” (i.e., it can only be used under limited conditions) and there is not enough discussion around it. Dr. Shiroyama responded by discussing how to deal with this “half-bakedness” amid excessive expectations for the technology. Black-boxing is a hot topic in artificial intelligence, but the same problem arises when we ask other people to do things. In response to Dr. Kuniyoshi’s question about whether tasks can be successfully entrusted to AI, Dr. Sakura referred to the design of a society in which roles are divided so that AI is entrusted with what it is good at right now. Dr. Shiroyama pointed out that it may be possible to leave certain patterned decisions to AI, although there are many factors at work in human judgment. However, can we trust these decisions? Knowing the mechanisms of decision-making and trusting AI are different issues. We first need to consider which kinds of technology we want to handle with care.
At the end of the session, the professors encouraged the students to broaden their horizons through discussions with people from other fields, and the students asked questions about the relationship between AI and employment, as well as the professors’ own definitions of AI.
(Text by Haruka Maeda, teaching assistant)
The second class was taught by Professor Toriumi, who talked about strong and weak AI. The lecture was divided into three main parts: (1) what strong AI and weak AI are; (2) an overview of the history of artificial intelligence; and (3) the challenges involved in establishing strong AI. Regarding (1), strong AI refers to artificial intelligence that has “consciousness” and can think like a human being, at a level where humans need not be involved. Weak AI, on the other hand, merely substitutes for a portion of human work.
The first artificial intelligence boom occurred in the 1960s, a few years after the term “artificial intelligence” was coined at the Dartmouth Conference in 1956. The second boom occurred in the 1980s, when attempts were made to build artificial intelligence on hand-written IF-THEN rules; in the end, these attempts did not lead to practical applications. In the third boom that we are experiencing today, artificial intelligence is being built from big data, and the success of deep learning, one of the methods of machine learning, formed the basis of this boom.
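To make the IF-THEN approach concrete, here is a minimal sketch of the kind of forward-chaining rule engine that second-boom expert systems were built on (the rules below are invented for illustration and are not from the lecture). Its brittleness is visible immediately: every rule must be written by hand, which is one reason such systems struggled to reach practical application.

```python
# A minimal forward-chaining rule engine in the style of second-boom
# expert systems. The rules are invented for illustration only.
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor_visit"),
]

def forward_chain(facts):
    """Repeatedly fire every rule whose conditions are all satisfied,
    until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# Derives 'suspect_flu' and then 'recommend_doctor_visit'.
```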
Regarding (3), two challenges were raised: A) the frame problem and B) the symbol grounding problem. In the case of A), it is difficult to create a robot that reasons only about what is relevant to the task at hand, while in the case of B), it is difficult to make AI recognize certain things as being those things. For example, it is difficult to ground the symbol “apple” in the concept of an actual apple (summary by Mr. K.H., School of Public Policy).
These topics raise questions about consciousness and strong AI. T.N. from the Graduate School of Interdisciplinary Information Studies stated that strong AI could be achieved by making it behave “as if” it were conscious (since we cannot confirm whether it actually is or not), but that doing so requires a strong motivation beyond mere entertainment. In order to make such strong AI cooperate with humans, it is necessary to set up its rewards and value criteria well.
In contrast, H.K. from the Graduate School of Frontier Sciences stated that humans already have strong motivations for building strong AI: “wanting to know more about others,” “clarifying the cognitive functions that determine our sense of value,” and “replacing human brain labor.” Strong AI will inevitably handle vast amounts of information, and because of this difference in cognitive ability, it may come to think very differently from humans. As such, it is very difficult to configure the environment and value judgments for such AI.
When it comes to setting rewards and values, directly installing human values can erase the strengths of AI; to put it crudely, the AI would simply be working under bosses who dictate its every move. We therefore need to develop a system that incorporates the strengths of both humans and machines.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
The third lecture was given by Professor Sakura, who focused on (1) the interaction between technology and society and (2) how society views robots. Regarding (1), when technologies such as the telephone, the radio, and the pager were developed, they ended up being used differently from how they were initially intended. The lecture showed that the merits and demerits of new technologies are determined by how they are used: technology does not necessarily make people happy, and replacing only a part of many work processes with machines may increase the overall workload and further widen social disparities.
Regarding (2), although people have created human-like artifacts throughout cultural history, they tend to feel threatened by robots that resemble humans too closely, as seen in the “uncanny valley.” The three major events that transformed the view of human beings in the history of science (1. Copernicus’ heliocentric theory; 2. Darwin’s theory of evolution; and 3. Freud’s psychoanalysis) fundamentally changed the anthropocentric worldview and the perception of human beings as rational. A popular opinion holds that we are now experiencing a fourth turning point, in which the boundary between humans and machines is disappearing (summary by Mr. R.T., M1, School of Public Policy).
After the lecture, a group discussion was held on the theme of the coexistence of humans and AI. The coexistence of humans and machines is not a theme limited to artificial intelligence, as seen in the Luddite movement during the Industrial Revolution. What, then, is the difference between the fear of replacement by so-called “technology” and the fear of replacement by robots and AI?
A.K., an M1 student at the Graduate School of Public Policy, sees the current changes brought by AI as similar to fundamental shifts in the way we view science, such as the historical Copernican turn. The image of AI currently spreading across the world is likely vague, in the same way that the geocentric theory was once vaguely believed.
On the other hand, Mr. A.U., a D1 student in the Graduate School of Interdisciplinary Information Studies, stated that as technology becomes more complex, uncertainty increases, and appropriate risk communication becomes necessary. This uncertainty currently manifests itself in the black-box nature of AI. From this, one could argue that AI-related technologies are qualitatively different from previous technologies. Can we really understand what is in the black box? Who are the “we” in this context? How can we unravel a black box?
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
The fourth lecture was provided by Professor Kuniyoshi. The growth of AI technology has been remarkable in recent years, and it is now being used in various fields; however, AI technology does not go beyond the conventional logical framework of mapping inputs to outputs. Fundamental issues such as data bias, the frame problem, and the symbol grounding problem are yet to be solved.
Transcending the limitations of existing AI and acquiring higher-order cognitive functions is an urgent issue in AI research. One approach to this goal is embodiment and emergence: by imposing strong constraints on AI and generating the vast number of patterns that satisfy those constraints, we can develop flexible behaviors that are otherwise unattainable with conventional AI. Another approach, based on developmental theory and neuroscience, is to give AI a “human mind.”
When AI with metacognition, common sense, and humanity (that is distinct from conventional AI) is realized, humans and society will either progress into a more ideal form or they will be shaken to their very foundations; in both scenarios, we will be forced to fundamentally rethink humans and society. For this purpose, cross-disciplinary dialogue and the redefinition of human nature are necessary to map out the future design of human society (summary by K.Y., M2, Graduate School of Public Policy).
In the discussion section, there was a debate about AI having a “human mind.” D.I., an M1 student at the School of Public Policy, said that the symbol grounding problem and the frame problem can only be solved if AI behaves like a human, and that for humans and robots to have an equal and friendly relationship, it is necessary to give robots a human mind. However, if the goal is simply to achieve a specific objective at the lowest possible cost, a human mind is not necessary, and a “lazy mind” may emerge instead.
T.O., an M1 student at the Graduate School of Public Policy, also wrote that a human mind is necessary for suppressing abnormal behavior and aiming for harmony. This perspective, however, forces us to examine what the human mind actually is: there are people with good hearts and people with bad hearts. Commenting on this, I came to think that the “human mind” here means a “human mind” in the conceptual sense; in other words, something in line with the general common sense needed to control AI, something in line with how humans ought to be.
Virtue and ethics are situation-dependent and vary greatly from person to person, and conflicts continue to occur even among human beings. What we need is a basis for such dialogue and an attitude of compromise, which I (Teaching Assistant Maeda) believe is what the “human mind” represents.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
In the fifth class, Prof. Shiroyama gave a lecture on the relationship between AI and social decision-making, covering two issues: the basics of technology assessment (TA) and literacy regarding the politics of AI, and the impact of AI on social decision-making and how to deal with it.
First, he explained what TA is and how it can contribute to society. Next, he touched on issues such as the diversity and balance of risks and benefits, the occurrence and total amount of risk events, regional differences and time lags in recognition, and transitional issues. Finally, he discussed how TA could be institutionalized within Japan and its decision-making bodies. Regarding the second point, AI and politics, he noted that delegating decision-making and communication is not necessarily a new issue, and that unexpected situations can occur whether we delegate to humans or to machines.
In the future, it will be necessary to examine the possibility of AI and humans sharing roles in various aspects of social decision-making. At the same time, in an AI-human complex society, humans will be required to have the skills of source criticism in order to deal with bias, as well as the ability to solve problems collaboratively (summary by Mr. U.S., M2, Graduate School of Public Policy).
In the comments from this class, the focus was not on how AI will be involved in politics, but on what it will base its value judgments and decisions on, and whether humans will be able to agree with these decisions.
Mr. S.T., an M2 student at the Graduate School of Public Policy, writes that it will become necessary to entrust part of politics to AI in light of the declining population. However, given that Japan is a democracy, we must consider what voters think and whether a social consensus can be obtained on these matters. He concludes that a phased introduction is necessary, balancing the need to respond to public sentiment against the desire for purely rational and neutral decisions.
K.O. from the School of Public Policy touches on the issue of AI and value standards, questioning whether we can call it happiness for humans if AI starts to assign values. People’s values are certainly diverse, but how will AI evaluate them? For example, will AI be able to grasp something like Plato’s ideal forms? Nevertheless, AI politics may be able to make fairer decisions than those currently made by humans.
Politics is not merely about finding the best solution. Policy making and decision making involve a wide variety of actors and competing interests, and are often more than just “optimization.” To choose the best answers, human beings need to be sufficiently prepared.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
In the sixth session, Prof. Shishido gave a lecture on AI and legal issues. In law, when a social relationship satisfies a set of requirements, a legal effect arises: a change in the relationship of rights and obligations. The specifics of those requirements and the relationships among multiple effects then become the subject of discussion, which is what is known as interpretation. Interpreting the law demands a balance and proportionality of interests, and a general solution that is not ad hoc is desired. The introduction of robots and AI into society needs to be considered on this premise.
First, the role of robots and AI in the future of Japanese society will center on substituting for the dwindling workforce caused by the declining birthrate and aging population. At the same time, they will be expected to augment human capabilities and create new value; a typical example is automated driving. Autonomy is a characteristic of robots and AI that has not been observed in previous technologies. It involves a trade-off between the benefits of dramatically increasing the quantity and quality of production and the risks of uncertainty and unpredictability, so laws related to artificial intelligence must strike a balance between maximizing benefits and minimizing risks. At the individual level, the question is how to gain an understanding of AI; at the corporate level, how to integrate humans and AI into productive activities; and at the government level, how to handle resource allocation and regulation (summary by K.N., M1, Graduate School of Public Policy).
Based on the idea that AI is a tool, we can think about responsibility in terms of both the users’ and the developers’ responsibility for that tool. In this context, individuals, companies, and governments will each need to take their own measures.
K.K., an M1 student at the Graduate School of Engineering, writes that it is irresponsible to attribute responsibility to personal use alone, since AI will eventually intervene in every aspect of human life. If development is left to the current GAFA, the level of AI will improve through the principle of competition and evolve to a level where it can coexist with humans.
In contrast, P.J., an M1 student at the Graduate School of Public Policy, said, based on his background in economics, that simply letting the free market work should not be the first option. Market failure, as seen in the example of Facebook, is one of the recognized challenges of economics. Beyond monopolies and oligopolies, platforms such as social networking sites are very difficult to switch away from because of their network effects, so legal measures must be taken. Moreover, in the age of AI and big data, information asymmetry between consumers and companies will become a major issue. As consumers are placed in an inferior position, education and the media need to work towards protecting consumers in a way that accounts for their specific characteristics.
Even prior to the advent of AI, information asymmetry was a problem in theories of the surveillance society, since tracking consumers is directly linked to profits. With AI, however, this problem has entered a new phase, as tracking can be performed more accurately, efficiently, and extensively. Those who develop platforms and products that incorporate AI of any kind, not just for marketing, need to consider their social impact.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
In the seventh session, Dr. Yanagawa spoke about technology and its social implementation. There are three main processes through which technology is implemented: (1) the technology is expected to be profitable; (2) the technology is not profitable, but the government deems it necessary and disseminates it; and (3) the technology is offered for free and disseminated in anticipation of long-term profitability. Process (2) applies particularly to military technology: the government led its development, which in turn led to the development of the Internet. Process (3) is an implementation process that was rarely seen before. The most common implementation process is (1), in which private companies disseminate technology with the expectation of profitability.
AI technology is expected to solve a variety of problems. Where it is expected to generate revenue, industry will work to turn it into products, which is how it will spread through the economy.
For example, there is a debate about the extent to which a technology can be monopolized. Even if a company spends a lot of money developing a technology, if there are other companies that develop similar technologies, the return will be small and there will be no advantage for the company to develop the technology. On the other hand, competition is beneficial for consumers because it lowers the price of the product and makes it easier to spread. Therefore, the real challenge is to strike a balance between the two (summary by Mr. A. M., M1, Graduate School of Engineering).
In the discussion, the penetration of AI was revisited, but the question that repeatedly came up was what AI can do; that is, to what extent can AI replace human capabilities?
D.I., an M1 student at the School of Public Policy, cites “sales” as a job that cannot be replaced by AI. For example, when we buy something while communicating with a sales clerk, we buy memories along with the product itself. Since human-to-human communication is more satisfying than communication with robots at the current level of technology, he argues that sales is a profession that uses strengths that AI does not have.
K.K., an M1 student at the Graduate School of Engineering, writes that when general-purpose AI appears, the focus will shift back to the parts of “humanity” that cannot be quantified, such as atmosphere and friendliness. More importantly, however, when other students pointed out specific benefits of AI, he questioned whether the discussion was merely speculative. Even those who are interested in AI, such as the students in this class, do not know exactly what AI can and cannot do.
Indeed, given the ever-evolving and highly specialized nature of AI technology, it is not easy to follow its inner workings. In fact, it is difficult to say that those who discuss the social impact of AI are necessarily tech-savvy, and since the intersection between AI and society is a new field, the academic landscape is somewhat complicated. In other words, people in humanities fields will need to know more about the technology, while people in science and engineering fields will be required to consider ethics and society; more effective communication between these different actors is an urgent issue.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
The eighth lecture was provided by Dr. Matsuo. At the beginning, he explained that AI, which is good at processing symbolic systems, has long faced the following problem: the things a small child can do easily are the most difficult for machines (Moravec’s paradox). In biological evolution, however, the birth of the eye brought a great leap forward in intelligence and motor systems, and the birth of the “machine with eyes” is likewise predicted to cause a Cambrian explosion in the world of machines and robots. To turn this event into an opportunity for Japan, we need to consider our own platform strategy.
Since continuous data collection is essential for “machines with eyes,” it is possible to charge for operating them. A good strategy would be to build a platform for an entire “field” starting from the machine with eyes, and then expand it globally. What is required of a platformer in the era of deep learning is (1) to quantify (score) work done by skilled “eyes,” (2) to automate core tasks, (3) to optimize and automate the flow of the entire field, (4) to revolutionize the supply chain, and (5) to create platforms (summary by S.M., M1, School of Public Policy).
In the discussion part of the class, there was a debate on which entrepreneurial Japanese companies could become the next Google in AI development.
T.O., an M1 student at the Graduate School of Public Policy, chose agriculture as one of several industries that require “eyes,” based on the lecture’s point that Japanese companies could reach the top if robots with “eyes” were introduced into industries built around tasks that require “eyes.” In developing countries, where the need to increase agricultural productivity seems greater, such industrial robots would be a powerful tool.
On the other hand, Mr. K.Y., an M2 student at the Graduate School of Public Policy, wondered whether Japanese companies and the Japanese government are aware that they now have such an opportunity, and whether they are planning to fully utilize it. Even if a robot with “eyes” that can be used in agriculture is developed, it will be meaningless if farmers cannot actually adopt it. He therefore concludes that the government urgently needs to create a healthy market environment in which market principles can work, so that farmers will increase their capital investment in AI technology to improve productivity.
Currently, the use of AI in agriculture is still extremely limited, even at the level of data use. It is not easy for companies to develop and sell robots, as the areas where Japan can show its strength are narrowing year by year; and on the farmers’ side, concerns include purchasing power, literacy, and technophobia. To address these, it is important for the government, companies, and other actors to provide active support.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
The ninth lecture was provided by Professor Sugiyama. The term “artificial intelligence” is widely used and, academically, spans multiple domains. Currently, it primarily refers to machine-learning techniques, whose goal is to give computers the ability to learn like humans; the main variations are supervised, unsupervised, and reinforcement learning. Although reinforcement learning has been studied for a long time, AlphaGo triggered a boom in the field. The number of participants in international machine-learning conferences is increasing drastically, but accepted papers are dominated by U.S. institutions, and Japanese institutions are therefore unable to make their presence felt.
RIKEN’s AIP Center is promoting the development of next-generation infrastructure over the next ten years and is conducting basic research to solve difficult problems using approaches other than deep learning. The latest machine-learning research includes real-world applications that leverage big data, which is already used for image recognition, speech recognition, and automatic translation. In some fields, however, labeled training data cannot be easily collected, and new learning methods are required. For example, it has been found that using unlabeled data, which contains a mixture of positive and negative examples, can also improve accuracy, making semi-supervised classification possible.
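As a rough illustration of how unlabeled data can help, the following Python sketch (using scikit-learn on toy data; the dataset and model choices are my own assumptions, not the AIP Center’s actual methods) compares a classifier trained on a small labeled subset with a self-training classifier that also exploits the unlabeled points:

```python
# A toy sketch of semi-supervised classification. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep only about 5% of the training labels; scikit-learn marks
# unlabeled samples with -1.
rng = np.random.RandomState(0)
unlabeled = rng.rand(len(y_train)) > 0.05
y_partial = y_train.copy()
y_partial[unlabeled] = -1

# Baseline: learn from the small labeled subset alone.
baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train[~unlabeled], y_train[~unlabeled])
print("labeled-only accuracy:", baseline.score(X_test, y_test))

# Self-training: iteratively pseudo-label high-confidence unlabeled points.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X_train, y_partial)
print("semi-supervised accuracy:", model.score(X_test, y_test))
```

Self-training is only one simple semi-supervised method; the point of the sketch is merely that the unlabeled points carry usable information about the data distribution.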
Furthermore, mathematics is becoming increasingly important in AI research; from the perspective of human resource development, it is necessary to develop mathematical skills in addition to programming knowledge (Summary by Mr. A.U., D1, Graduate School of Interdisciplinary Information Studies).
The theme discussed after the lecture was how to train AI personnel. Since Professor Sugiyama talked about mathematical skills, the students largely echoed this point.
For example, T.O., an M1 student at the Graduate School of Public Policy, wrote that looking at the education of junior and senior high school students today, it is difficult to see how they could acquire sufficient mathematical knowledge. He argued that there is a need to shift focus from rote memorization, symbolized by the slang term “examination mathematics,” towards education that allows students to understand theory, and to increase the number of people who can understand the concepts of mathematics.
On the other hand, K.K., an M1 student at the School of Public Policy, wrote that it may be too late for those who have already entered humanities departments at university: “I am concerned that in an age where we can access information by simply searching for it, AI education opportunities and resources will not be utilized by those who do not voluntarily try to learn about new technologies.” It would therefore be more realistic to focus on current high school students.
Both of the editors of this text are currently conducting research on artificial intelligence and social relations. The author (Maeda) cannot claim that he is good at mathematics, but he can understand some statistics with the help of software. Mathematical ability includes various factors, such as computational ability, logic, and spatial awareness. As such, there seems to be a considerable difference between mathematical abilities in education and mathematical requirements in the field of AI development and research.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
In the 10th class, Prof. Fujii spoke about how science fiction has evolved alongside the advancement of artificial intelligence technology, continually raising humanistic questions about the implications and nature of AI, as well as suggesting solutions to potential problems. For example, the science fiction novel “Frankenstein,” written by Mary Shelley in 1818, posed a question that is still debated today: “Will artificial intelligence (artificial life) rebel against humans?” Similarly, the 1920 play “R.U.R.” depicted biological machines in a very human-like manner and offered an interesting perspective on the essential differences between robots and humans, as well as on the physicality of artificial intelligence. Although not science fiction, Dawkins’ 1976 work “The Selfish Gene” proposed the concept of the “meme,” the idea that culture and technology, like genes, propagate and reproduce under the pressure of competitive selection.
In his own work “The Second Civil War,” Mr. Fujii suggested, in line with evolutionary theory, both the possibility of divergent evolution, in which multiple technologies come to be used in many places, and the possibility of convergent evolution, in which competing technologies are gradually eliminated and a single technology survives. Everything depicted in the book can be seen as a meme, each representing something.
Symbiosis between humans and memes is inevitable, but it is not inevitable that we should accept the tyranny of the self-replicating meme. This is because humans are the only species that can resist the tyranny of memes (summary by K.Y., M2, Graduate School of Public Policy).
As the theme of this class is artificial intelligence and society, some students viewed artificial intelligence technology itself as a “meme.” For example, Mr. H.K., a D2 student in the Graduate School of Frontier Sciences, examined the artificial intelligence boom from his vantage point in an engineering laboratory, and stated that it can be decomposed into technological factors, such as the shift from hardware to software, and environmental factors, such as the tendency toward overestimation. By recombining these decomposed factors, we can gain a much better outlook on the future.
On the other hand, A.M., an M1 student at the Graduate School of Engineering, wondered whether it is appropriate to apply the concept of the meme to artificial intelligence technology in the first place. He argued that while evolutionary selection is natural selection, artificial things such as technologies and products can be selected intentionally; in particular, closed organizations are prone to preserving the status quo, making it harder for natural selection to occur.
When considering the social impact of a new technology, the use of existing, well-known concepts and tools is extremely useful when there are no guidelines for consideration. However, as in this case, careful consideration must be given to whether it is appropriate to apply the concepts in question. As such, existing concepts may help us better understand new technologies, or existing frameworks may be extended to new technologies.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
The eleventh lecture was provided by Professor Yonemura. Laws related to medical care can be divided into two categories: medical administrative laws, which focus on administrative regulation in advance, and medical practice laws, which focus on judicial regulation after the fact. In addition, current AI in the medical field can be categorized into three types, according to whom it serves: medical professionals, patients, and caregivers.
AI for medical professionals includes both technology-assisting AI and decision-assisting AI, both of which are making remarkable progress and could yield programs that are more accurate than doctors in the future. However, developing the latter requires large-scale analysis of actual medical information, and collecting such information is currently difficult for reasons such as the protection of personal information and the low level of digitization of medical records. For example, in diagnostic imaging, where oversights and misdiagnoses occur frequently, diagnostic support programs have not yet reached sufficient accuracy because of slow progress in image collection and database creation. An example of AI for nursing care is HAL, which has already been put to practical use in nursing homes to reduce workload.
The current legal issue regarding AI medical devices is who should bear civil liability when AI intervenes in medical practice. The key question is whether the developer, manufacturer, or user was negligent in relation to the AI’s decision. In addition, under product liability law, the manufacturer may be held liable if there is a defect in the design, instructions, or warnings (summary by Mr. K.Y., M2, Graduate School of Public Policy).
The question of responsibility becomes an issue when medical decisions are left to AI: responsibility may fall on the physician as the user or on the company as the developer of the AI, and it is always entangled with the black-box problem.
Specifically, Mr. A.U., a D1 student in the Graduate School of Interdisciplinary Information Studies, discussed this black-box nature, first questioning how doctors can trust the decision-support algorithms of medical devices when they become black boxes and the basis and accuracy of their decisions are no longer reproducible. A diagnosis does not necessarily mean that physicians will correctly understand and judge the results, which can be influenced by bias and noise. Furthermore, when the probabilistic reliability of AI outweighs that of humans, should the AI’s output take precedence over the judgment of human doctors? It is likely that society would prioritize ethical and cultural values over probabilistic rationality.
On the other hand, A.H., an M1 student at the Graduate School of Public Policy, focused on the responsibility of the manufacturer. If the perspective of product liability is introduced, developers’ incentive to develop may decrease, whereas from the consumer’s side it is natural to want to hold a company responsible when a device causes damage. Manufacturing and sales companies would need to assume responsibility so that the burden does not fall on developers, but this would increase risk-management costs, and it may only be feasible for large companies.
There are many contexts in which responsibility is an ongoing issue. Medical care, the subject of this lecture, can literally be a matter of life and death, which is why responsibility is such a serious issue there, even for human doctors. At the same time, humans can legally bear responsibility for their own actions because they have free will and act as independent entities. However, when a machine like AI makes decisions, it is extremely difficult to determine who should bear responsibility and on what basis.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)
The last class was provided by Professor Ema. In discussions of Society 5.0, which frequently concern the ideal society and human resources, the need to develop AI and data specialists is often raised. In this context, an important concept is the interaction between technology and society: although we tend to focus only on how technology changes society, technology itself is shaped by its interactions with law, ethics, society, and the economy. For example, one of the most frequently discussed issues is whether AI will take away labor and employment, although in the short term it is not jobs that will be replaced, but tasks. What is important here is determining which tasks we want to replace and why. To decide what kind of society we want to create using technology, it is not enough to learn how the technology works.
In other words, in a society where AI and robotics are becoming increasingly widespread, many things need to be taken into consideration, one of which is fairness. Fairness has been discussed before, but it is expected to become a more complex and visible issue as technology continues to develop.
Finally, she discussed how AI and robots can be applied and utilized in specific fields, with examples from hotel services, predictive security, animal husbandry, and dairy farming. Based on this, she concluded that it is important for students to believe in their own values, to question the norm, and to work towards building bridges of connection (summary by K.N., M1, Graduate School of Public Policy).
The discussion after the lecture, on the theme of “Artificial Intelligence and Society,” centered on what theme should be chosen if one were to design or apply for a public call for research projects funded by private foundations rather than by government research funds.
D.I., an M1 student at the School of Public Policy, thought about taking on projects that for-profit companies would not touch, given the nature of a foundation. Education, in particular, takes a long time to produce results, but it is a very important activity that contributes to the future. He therefore believes it is important to focus on educational materials and education in order to create companies that can surpass GAFA and to develop human resources that can lead the world.
K.Y., an M2 student at the Graduate School of Public Policy, focused on private actors who are neither governmental nor corporate players: private foundations and associations, researchers, research and educational institutions, NPOs and NGOs, and financial institutions and investors. Private organizations inevitably tend to be numerous and diverse, and therefore they move in uncoordinated ways. Even though artificial intelligence technology is an important field, international cooperation there still seems to lag behind other international issues, such as environmental and economic ones. We need systems and human resources that can foster this cooperation.
Research on the subject of artificial intelligence and society is a field that is still in its infancy. A large number of people are aware of this technology’s potential to penetrate every aspect of our lives, as well as the threat it poses. In order to dissect the relationships between technology and our lives and understand the related problems, there is a need to share knowledge from diverse fields and coordinate cooperation.
(Editing/writing by Haruka Maeda and Takuya Mizukami, teaching assistants)