2020 Schedule
Lecture Report
In the second class, a roundtable discussion on the theme of “What is Trusted AI?” was held, hosted by Professor Ema, with Professor Kuniyoshi (Intelligent Systems), Professor Sakura (Social Theory of Science and Technology), and Professor Shiroyama (Science and Technology Policy) as panelists.
First, Dr. Ema presented the premise of the topic. In recent years, technological advances have expanded what artificial intelligence (in this case, machine learning) can achieve. Its capabilities can currently be broadly divided into recognition, prediction, and even the execution of simple tasks, but many challenges remain. Misrecognition, for example, poses a threat to safety and robustness. Furthermore, misrecognition can be induced artificially, and it has been pointed out that this could become a serious social problem.
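The artificially induced misrecognition mentioned above is usually demonstrated with adversarial perturbations, where a tiny, deliberate change to the input flips a model’s prediction. The following is a minimal sketch of the idea against a toy logistic classifier; the function names, weights, inputs, and step size are invented for illustration and are not from the lecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.15):
    """One fast-gradient-sign step: nudge each input feature in the
    direction that increases the model's loss on the true label."""
    p = sigmoid(w @ x + b)        # model's current probability of class 1
    grad_x = (p - y_true) * w     # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# A toy linear classifier that initially predicts class 1 for input x.
w, b = np.array([1.0, -1.0]), 0.0
x = np.array([0.3, 0.1])
x_adv = fgsm_perturb(x, w, b, y_true=1.0)
```

Even though no coordinate of `x_adv` moves by more than 0.15, the classifier’s prediction flips, which is exactly why such misrecognition is considered a safety and robustness threat.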
One way to address this is to have the system explain the basis for its decisions. If the cause of a misidentification is known, for example, the system can be modified accordingly. In addition to technical measures, broader efforts have been made, such as the creation of ethical guidelines by academic societies and companies, which can be summarized as a movement to create “trusted, high-quality AI.” The essential question here is: what is trusted AI?
In this regard, Dr. Kuniyoshi presented some elements of trust. Trust can be supported by explanations, and there are other factors, such as reliable operation and the absence of dangerous outcomes. The most important of these, however, is the empathy that humans feel when we draw robots closer to us. Understanding human trust is therefore necessary.
Dr. Sakura agreed with Dr. Kuniyoshi that physicality, the foundation of empathy and sympathy, is important, but he also noted the danger that merely changing a robot’s design can change its image of humanity, even if the robot retains similar functions. This opens up the possibility of robots deceiving humans, so both designers and users of the technology need to be careful.
Dr. Shiroyama spoke from the perspective of the subjects and objects of trust. The object of trust can be AI as a system, or the people and institutions behind it. When an advanced technology is introduced, it needs to be accompanied by insurance and safety regulations to compensate for its risks, just as with cars and nuclear technology. Ultimately, the ability to create an ideal control mechanism is also a factor of trust, though we cannot dismiss the question of who it is ideal for.
Based on the above, the discussion turned to empathy, which plays an important role in the relationship between humans and AI. One motivation for implementing empathy is that a robot’s empathy for humans could prevent it from carrying out harmful objectives in some fields, such as the military. Dr. Kuniyoshi expects that implementing humanity in robots will constrain their behavior in this way.
Dr. Sakura pointed out that depending on the field and culture, the actual use of AI and robots and how they should be used varies considerably. In addition, technology may be used differently from what the creator intended, and the technology itself may change in response. Therefore, it is also important to first determine what the user wants to do with the technology (i.e. their intended purpose).
Dr. Shiroyama focused on empathy. One must first consider whether empathy refers to humans empathizing with robots, or robots empathizing with humans. If it is the latter, does it not require a more complex model than the one currently in use?
Currently, the academic community is proposing complex models that mimic the human emotional system. Such models, which take into account the characteristics of each individual, could also make it possible to deceive humans. It has been pointed out that humans can become emotionally attached to machines on the basis of superficial cues, and that this tendency is hard to control. The dangers of producing such deep models exist, but given that the tendency has its limits, there is also the possibility of improving the technology.
At the end of the session, each professor gave the participants a closing message. Dr. Kuniyoshi spoke of the need for students responsible for technological development and students who think about social systems to discuss and work from a common understanding, because discussions about AI immediately lead to discussions about humans. Dr. Sakura said that some aspects of AI are the same as in older cases and some are different, but since that very discernment is influenced by culture, people from various fields need to contribute knowledge from multiple angles. Dr. Shiroyama pointed out that more in-depth AI technology may be needed given that the population is expected to continue declining, and his message encouraged students to think about forming new connections.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The third class was taught by Dr. Masashi Sakura of the Interfaculty Initiative in Information Studies. He talked about the relationship between artificial intelligence and culture, as well as the relationship between society and COVID-19, which is currently much discussed from the standpoint of social theories of science, technology, and society.
At the beginning of the lecture on the relationship between artificial intelligence and culture, he referred to the differing Japanese and Western images of mothers and children. After pointing out that Japanese mothers and children emphasize jointly attending to a third object, whereas Western mothers and children face each other one-on-one in a closed space, he applied this mother-child relationship to the relationship between robots and humans. In Japan, humans and robots relate as equal companions who can share a common third term, while in the West, robots are elements that form part of a closed world. He also said that in the West, the relative standing of robots and people is left undefined, so when robots become more capable they can easily be treated as a threat. This raised the question of whether attitudes toward science and technology are universal or differ from region to region.
After the lecture, the participants made the following comments:
Mr. R of the Graduate School of Information Science and Technology believed that cultural differences do not have much influence. This is because science and technology are developed for the common purpose of “benefiting humanity,” and there seems to be little room for individual cultural factors to intervene.
In addition, Mr. H from the Interdisciplinary Information Studies Department felt uneasy about whether, if he were to do research and development in the future, he would really be able to understand the thinking of people in Europe and the United States. Since it is difficult to overcome cultural differences and fundamentally understand subjective ideologies, it may be necessary to focus on advancing the penetration of technology in one’s own country; even this understanding and improvement, however, may prove very difficult.
In a lecture on the relationship between COVID-19 and society, it was pointed out that public health and individualism, which guarantees individual freedom, are incompatible. To thoroughly quarantine people, information technology must be used to monitor their behavioral history, in which case privacy protection cannot be guaranteed. In response to this, there was a discussion about public welfare and personal freedoms.
Mr. N. from the Graduate School of Information Science and Technology emphasized public welfare and stated that it would be a well-balanced policy to present information in a semi-compulsory manner after establishing a legal system, thereby limiting the purposes for which it can be used. Furthermore, Mr. I of the Graduate School of Arts and Sciences said that citizens need to become more IT literate in their daily lives in order to correctly understand what the information society is, its convenience and dangers, and protect their right to self-determination, which is the root of democracy. This would be an alternative approach to relying solely on policies for emergencies.
As far as COVID-19 is concerned, it is difficult to strike a balance since it simultaneously involves public welfare and individual lives. In addition, the daily collection of knowledge and the creation of a platform for this by the government may be one way to prepare for such an emergency.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
This lecture was provided by Professor Kawaguchi (Graduate School of Economics). The lecture was divided into two halves and focused on how employment could be affected by the introduction of new technologies, especially robots.
In the first half of the lecture, the professor introduced a study estimating that 49% of the future workforce could be replaced by artificial intelligence and robots, though how this will actually play out in practice still needs to be studied. The impact of introducing industrial robots differs from country to country. In eastern parts of the United States, the unemployed population increased with the introduction of industrial robots, especially in areas where the automobile industry is active. He added that people with lower levels of education are more likely than those with higher levels to lose their jobs, and also more likely to suffer wage declines. Germany’s manufacturing industry was also greatly affected by robots, but other industries, such as services, saw an increase in employment, so the overall employment situation was largely unchanged. Japan, on the other hand, has seen employment increase since the introduction of robots. Based on these results, we discussed why the introduction of robots affected employment differently in each country.

In the second half of the lecture, we discussed how predictive AI would affect employment, including whether prediction and decision making are separate (editor’s note: subjects), and how remote work due to the pandemic will affect the future of employment (summary by Mr. W. of the Graduate School of Engineering).
In the context of economics, the analysis is mainly done from the perspective of costs and benefits, but given that the results could be compared in different international contexts, some participants focused on culture. Mr. T, from the Graduate School of Information Science and Technology, mentioned that compared to Germany and the U.S., Japan’s social system was designed to be friendly to low- and middle-income earners. Therefore, he speculated that redistribution would be successful even if jobs are replaced, and he also emphasized the importance of having an appropriate social system in place.
Mr. I, from the School of Interdisciplinary Information Studies, pointed out the difference between correlation and causation, in addition to the argument above about there being different results in different countries. He also said that extracting causality from correlated data requires a higher level of interpretation, which is why humans are needed.
Mr. I of the Graduate School of Arts and Sciences also pointed out the need for humans. In particular, he said that humans should make decisions and AI should be a supporting device in fields that involve human autonomy (such as medical care and court cases) and fields with high potential for change (such as education). This is because there are areas where predictions can be left to AI and areas where it cannot.
While decisions made by machines are expected to eliminate irrational human decision-making, many people believe that machine decisions have their own drawbacks. The criteria for when to allow AI to make decisions will need to be decided by humans, not by AI.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The fifth class was taught by Dr. Tatsuya Harada of the Research Center for Advanced Science and Technology. He talked about understanding the real world through machine learning.
First, he explained the research steps and difficulties of work on real-world cognitive intelligence dating back more than 10 years. He also explained the basics of converting two-dimensional information into a three-dimensional interpretation.
The focus of the lecture was improving prediction accuracy, since more heavily processed information usually yields less accuracy. In addition, the original input needs to be provided by humans, and providing it in large quantities is expensive. The challenge, therefore, is to secure high-quality input, particularly in light of the multilabel PU (positive-unlabeled) problem: human annotators cannot label everything, but the labels they do provide are not necessarily wrong. It is also necessary to understand that the perception of the same object may change depending on the function we focus on, and the results may also change depending on the level of attention (i.e., there is a “context that is not a sentence”).
In the case of datasets with additive properties (e.g., sound sources), BC (between-class) learning makes it relatively easy to improve accuracy even when the amount of data is limited (the summary is modified and quoted from a comment by Mr. N of the School of Public Policy).
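The between-class mixing idea for additive data can be sketched roughly as follows. This is a simplified illustration, not the published method (which, for sound, additionally compensates for sound pressure when blending); the function name and toy data are assumptions:

```python
import numpy as np

def bc_mix(x1, y1, x2, y2, rng):
    """Blend two training examples of different classes with a random
    ratio r; the label becomes the same ratio of the two one-hot labels,
    so the model is trained to predict the mixing ratio between classes."""
    r = rng.uniform(0.0, 1.0)
    x_mixed = r * x1 + (1.0 - r) * x2   # valid for additive data such as waveforms
    y_mixed = r * y1 + (1.0 - r) * y2   # soft "between-class" label
    return x_mixed, y_mixed

rng = np.random.default_rng(0)
x1, y1 = np.zeros(4), np.array([1.0, 0.0])   # toy example of class 0
x2, y2 = np.ones(4),  np.array([0.0, 1.0])   # toy example of class 1
x_mixed, y_mixed = bc_mix(x1, y1, x2, y2, rng)
```

Because each pair of real examples yields a continuum of blended examples, this kind of mixing effectively enlarges a limited dataset, which is why accuracy can improve without collecting more data.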
In other words, humans seem to have high generalization ability: they can learn from little or limited data. This raises the question of what algorithms, theories, and concepts we need in order to teach computers such abilities.
Mr. M of the Graduate School of Engineering learned that methods such as inverting information and training on inputs to which random noise has been intentionally added have achieved relatively high accuracy. He shared his thoughts from the technical side and his interest in actual events, while emphasizing his desire to learn more about these processes.
Furthermore, Mr. K from the School of Interdisciplinary Information Studies said that compared to recognition and learning by AI/robots, humans use “physicality” and their “five senses” when they recognize and learn, which are key advantages. As such, he believes that we can design algorithms based on the processes through which infants learn to recognize things. In addition, he expressed his doubts as to whether it is possible to incorporate human conceptualization and abstraction into the algorithm, and if so, how.
Mr. W of the Graduate School of Arts and Sciences pointed out that when we assume that the ideas of AI can also be conceived by humans, the future issue becomes centered around how humans can use AI. This may also be a part of our quest to “master AI.”
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The sixth lecture was given by Dr. Kaori Karasawa (Graduate School of Humanities and Sociology), who spoke about the relationship between AI and humans from the perspective of psychology.
Of course, AI/robots do not have a “mind” of their own, but it is not uncommon for humans to perceive them as having one. Mind perception has two dimensions: (1) agency, the capacity to act (i.e., behavior), and (2) experience, the capacity to feel. In the case of human-shaped artifacts, the more human-like the perceived object (e.g., the robot’s appearance), the easier it is for humans to perceive it as having a mind. As an example of how creators use this tendency positively, applications in the medical and welfare fields were introduced. In particular, she introduced AI/robots that promote mind perception, such as Paro, a care robot for the elderly, which has been effective in reducing agitation and depression in dementia patients. The latter half of the lecture discussed examples such as AIBO. In addition, drawing on the idea that AI/robots have moral status, she mentioned that humans feel distress when they see harm done to robots, for example in memory-erasure and hammering experiments. This suggests that humans are not quite able to treat AI/robots as mere objects, and that through social interaction these machines could become more human-like to us. Some studies suggest this trend is likely to increase, and that AI/robots may become objects of friendship and romance (summary by Ms. M., School of Public Policy).
The fact that robots could become objects of friendship and love seemed to leave a strong impression on the students, and there were many comments on this point. Mr. W of the Graduate School of Information Science and Technology wrote, “Personally, I find it very frightening. I am worried about the loss of humanity by associating with robots that behave in a certain way.”
However, the validity of such concerns was itself called into question. Mr. T of the Graduate School of Information Science and Technology stated that humans have the right to choose their own friends and lovers, and that it is not for others to tell them what to do, much less to discriminate against them for dating a robot. He pointed out that discrimination stems from physiological and belief-based aversion to what is different; such feelings may form the basis of discrimination against people who date androids in the future.
How should humans deal with such problems? Mr. N of the Graduate School of Engineering suggests that we should deal with AI “as if we were raising a child.” Long-term observation, it could be said, is required to properly deal with unexpected problems that arise during implementation.
Recently, the news of a person who held a wedding ceremony with Hatsune Miku, a character based on speech-synthesis software, became a popular topic. What can be learned from this is the diversity of marital relationships. When we dismiss someone or something as “inherently unmarriable,” what are we basing that belief on? Are all the people who have married so far “the ones who could or should be married”? If so, who or what made that decision?
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The seventh class was taught by Professor Hideaki Shiroyama of the Graduate School of Public Policy, who talked about AI governance and its challenges.
The first half of the lecture dealt with how to create governance for AI. Technology needs to be analyzed and discussed not only at the implementation stage, but also at the research and policy stages, to determine how it will affect society. AI governance has both soft-law and hard-law aspects; the former includes R&D and utilization principles, while the latter includes risk management. On the soft-law side, approaches differ: for example, whereas Japan’s principles emphasize “human-centeredness,” China’s emphasize “harmony and human friendliness.” In addition, discussions are being held not only domestically but also internationally, in forums such as the OECD and G20. The class discussion accordingly focused on the role of international organizations and private organizations in AI governance.
The second half of the session dealt with the role of AI in governance and public policy. While there are issues such as biases and algorithmic decision control, there have also been attempts to compensate for these issues. In the discussion, we talked about what kind of public services AI and robots could play a role in. It was pointed out that AI can play a role in many tasks, such as replacing the routine work of civil servants (summary by S.K., School of Public Policy).
Mr. S.N. from the School of Public Policy said that he was impressed by China’s principle of “harmony and human friendliness” and its advocacy of cooperation between AI and humans. Since AI is a technology created by humans for their own use, he had thought it natural for principles to be written in a human-centered way, as Japan’s are, and was surprised to learn that other views exist.
Some students made implementation proposals. Mr. K from the Graduate School of Information Science and Technology pointed out that the Super City Law, passed by the Diet in 2020, sought to establish experimental legislation for utilizing big data in the activities of private companies and in public services. Mr. K thinks it would be a good idea to first try living with the technology in a sandboxed environment, and then develop it based on the positive and negative findings. Furthermore, the current trend is to convert existing municipalities into “super cities,” but since it is difficult to get existing residents to agree, the ideal would be to build a new city and have people move in.
Finally, I would like to introduce those who focused on responsibility. Mr. Q of the Graduate School of Public Policy pointed out that if private institutions such as corporations are to be the main actors in AI governance, they must give sufficient consideration to “fairness,” “ethics,” “accountability,” and “transparency,” issues common to all countries, especially since the characteristics of AI technology have a significant impact on governance.
Various different actors are involved in the process from developing AI to implementing it in society. To make it easier for them to progress and achieve more effective AI implementation, it may be important for various stakeholders to commit to co-creation.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The eighth lecture was given by Mr. Dai Goto, a lawyer. He talked about legal issues surrounding the use of artificial intelligence.
The lecture covered legal issues surrounding AI and regulatory approaches, as well as the process of generating trained models, patents, AI development contracts, and data contracts. In the first half of the session, democracy and the use of AI came up in relation to the Cambridge Analytica case, in which data was manipulated to deprive people of access to differing opinions. He pointed out that such applications of AI risk undermining the legitimacy of democracy. He also raised the issue of data being able to expose things that we would normally want to keep hidden in our minds.

In the latter half, he explained policy and ethical guidelines for AI, accident and liability issues in automated driving, and an examination of the trolley problem through automated driving. In Germany, when an unavoidable accident occurs during automated driving, it is strictly prohibited to weigh outcomes based on personal characteristics, such as the age or gender of those involved; he pointed out that this issue has not yet been fully discussed in Japan. As an example, we discussed the pros and cons of introducing AI-based emotion analysis in public high school entrance examinations: whether emotion-analysis information is necessary in the first place, and whether, since both humans and AI make judgments and even humans have biases, it should instead serve as a prompt to change interviewers (summary by Ms. N, Graduate School of Arts and Sciences).
The students were particularly interested in the idea of introducing emotion analysis in interviews. Mr. K from the School of Public Policy stated that it could reduce the risk of students lying to the interviewer and remove the interviewer’s biases. On the other hand, he mentioned individual differences between students, the risk of the AI’s evaluation criteria being known to only a few people, and the possibility that students’ “right to silence” would not be guaranteed. Many risks thus remain in implementing this technology.
Do users give sufficient consent when such technology is introduced? Currently, terms of use serve as the means of consent, but Mr. N of the Graduate School of Information Science and Technology stated that it is not realistic to expect users to read all the terms of use (in other words, to leave it up to them). It may instead be necessary to regulate companies by requiring them to clearly state the purpose and method of use, or by requiring third-party audits.
The law can be an important tool for such regulation. However, according to Mr. S of the Graduate School of Engineering, since AI is a new technology, it may be difficult to enact laws and regulations and incorporate them into the judicial system. To speed up the process of regulation, it would be more realistic to revise existing laws and use soft law rather than enacting hard law.
Technology has no borders, but laws are formed on the basis of a country’s culture, so the challenge may become how to ensure that laws and ethics are consistent across different countries.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The tenth class was taught by Professor Hideki Koizumi of the Graduate School of Engineering. The themes were smart cities and regions, and artificial intelligence.
In the first half of the session, we considered “Expectations and Challenges for Smart Cities and Regions and the Use of Artificial Intelligence.” First, smart cities can be classified into four types: 1.0, e-government and smart government; 2.0, smart grids and energy management; 3.0, innovation through public-private partnerships in cities and local governments; and 4.0, cities with a concentration of smart and innovative human resources. To develop from type 2.0 to 3.0, it is necessary to utilize IoT and AI, as is being done in the EU, the US, China, and elsewhere, and to understand the movement of people, money, and things through sensing, which is a challenge for the development of smart cities in Japan. In addition, many case studies have surfaced various issues related to smart cities: the future vision of the tangible city; citizen participation, decision-making, and governance; and monetization and business models.
In the second half of the session, we considered the image of cities and regions in the pre- and post-COVID-19 eras and how they can be made smarter. First, the urban lockdowns caused by the pandemic gave us an opportunity to reconsider the essential values of cities, such as “place” and “link.” In addition, we considered the “Conditions for Urban Innovation” and “Requirements for Ensuring Urban Diversity” based on Jacobs’s urban theory (summary by Mr. I of the School of Interdisciplinary Information Studies).
The following are some comments from the class. Mr. I from the School of Interdisciplinary Information Studies said that when he heard about place and link as smart-city values, he saw an overlap with the Internet. Opinions raised in the discussion, such as “those who gather online become a limited community” and “there is no chance of discovering a good place by accident,” resemble the filter-bubble phenomenon on the Internet. Life during the pandemic and the form of the smart city may thus be similar to a filter bubble.
Mr. S from the Graduate School of Interdisciplinary Information Studies recalled that in a previous lecture it had been said that COVID-19 forcibly warped us into a world 10 years in the future. Although our lives have not become much smarter through data, he said he felt that our lifestyle has become more like a smart society through things like remote classes and remote work.
An important point was also made about life and social stratification. Mr. Y of the Graduate School of Engineering believes that one way to protect oneself in a disaster is to integrate data on the damage at one’s location, congestion in transportation systems, and shelter capacity, and to design the most rational evacuation route for each individual through simulation. However, he also pointed out that people who do not have mobile devices, or cannot use them properly, would become socially vulnerable, a problem that needs to be addressed.
Devising groundbreaking plans requires the qualities of a romantic. You will be able to create something better if you keep that perspective in mind while also keeping an eye on practical realities and on those left behind.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The eleventh lecture was given by Dr. Yusuke Inoue of the Institute of Medical Science on the topic of AI and ethical issues in medicine.
The first thing he talked about was how AI is being introduced in the medical field and what issues it currently faces. Expectations for AI in medicine are high, and in truth there have always been doctors who wanted machines to help them diagnose. What matters is the role of doctors and the position of AI as a medical device requiring government approval. Under Japan’s current laws, the doctor must be the one to make a diagnosis. Moreover, Japan’s device-approval system makes it difficult to keep using software whose performance changes drastically after approval, so it is important to keep track of the changes introduced through learning. Thus, while making use of the possibilities and advantages of AI, many issues beyond the level of the technology remain, such as how people should be involved and how to understand the behavior of the devices.
He then presented the results of a survey on perceptions of AI conducted by a research group of the Ministry of Health, Labour and Welfare. According to the results, doctors and patients disagree considerably. Some doctors believe that AI should be used for the benefit of patients even if the patients do not understand the process leading to the results, a view he described as paternalistic. Citizens, on the other hand, think that the wishes of patients should be prioritized in deciding whether or not to use such AI. As for whether AI should be used to watch over the elderly, not many people would prefer to be watched by non-humans; it also emerged that some elderly people choose AI because they trust its functioning more than people, while others choose it because they do not want to burden or bother anyone. The professor asked the audience to consider whether technology, intended to expand the range of choices, is actually narrowing them (summary by Ms. T of the School of Public Policy).
When the practical use of AI is discussed, the focus is generally on whether implementation is feasible. Before that, however, there are situations where we have no choice but to rely on AI, as Mr. O of the School of Public Policy pointed out. It is of course important to consider the impact of AI, but the quality of medical care that patients demand from their doctors ranges from technique and treatment to informed consent. We therefore need to work out what doctors can do and what patients want, and then use AI to bridge the gap.
What do patients want? If patients want care, it may be difficult to introduce AI as a tool, according to Mr. K of the School of Public Policy. Above all, there is a fear that the cost of healthcare could become enormous due to the risk of litigation. Depending on whether the patient is result-oriented or process-oriented, as well as the type of medical condition and treatment, the adoption of AI may or may not be successful.
To deal with such risks, Dr. S of the School of Public Policy proposed an insurance system in which a surcharge is set on medical treatment that uses AI, a portion of the surcharge is set aside as an insurance fund, and benefits are paid from the fund in the event of a medical accident caused by AI. The difficulty here is that responsibility for a misdiagnosis by AI is unclear; rather than trying to solve this problem head-on, he suggested creating an insurance system that distributes the risk appropriately.
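As an illustration only, the proposed scheme reduces to simple arithmetic: surcharges flow into a fund, and expected benefit payments flow out. None of the figures below come from the lecture; all parameters are hypothetical.

```python
def fund_balance(n_treatments, surcharge, accident_rate, payout):
    """Toy balance of the proposed AI-malpractice insurance fund.

    All parameters are hypothetical: a surcharge collected on each
    AI-assisted treatment is pooled, and benefits are paid out for
    the expected number of AI-caused accidents.
    """
    premiums = n_treatments * surcharge               # money flowing into the fund
    benefits = n_treatments * accident_rate * payout  # expected payouts
    return premiums - benefits

# e.g. 100,000 treatments with a 1,000-yen surcharge, a 1-in-50,000
# accident rate, and a 20,000,000-yen benefit per accident:
balance = fund_balance(100_000, 1_000, 1 / 50_000, 20_000_000)
```

The point of such a sketch is only that the surcharge can be tuned against the assumed accident rate and benefit level so that the fund stays solvent without resolving the question of legal responsibility.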
In Japan, where the birthrate is declining and the population is aging rapidly, the medical situation is expected to become even more serious in the future, which is why there are high hopes that technology can reduce the burden. It would be desirable to introduce such technology as soon as possible, while also ensuring sufficient discussion.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
The 12th class was given by Prof. Yasuo Kuniyoshi of the Graduate School of Information Science and Technology, who lectured on the pandemic ‘New Normal’ built on AI/IT, and on human qualities and their developmental origins in the age of AI (physicality and sociality, emotion and creativity, and humanity).
The class discussed the human qualities and developmental origins that will be challenged in the age of AI, as well as the impact of going online and of technological advancement on society. Technological innovation can cause social problems such as unemployment and widening income disparity. To survive such a situation, he stated, we should focus on jobs that leave room for flexibility in planning and designing the work. In addition, AI technically consists of algorithms for search, inference, planning, and optimization, and the quality of the data and the clarity of the defined tasks are important for obtaining good results; however, problems with reliability and security also exist. He also drew on examples from biological evolution and neural circuits to introduce the basic principles of a human-like robot that can use its body and evoke emotions. Finally, the class discussed the directions in which humans and society may change through the use of AI and robots (summary by Mr. K, School of Public Policy).
Some of the students took this lecture as an opportunity to ask the essential questions of what human beings are and what intelligence is. Mr. T of the Graduate School of Information Science and Technology asked whether this decision-making entity that feels emotions and makes rational judgments really exists in the first place. It is true that human reason and consciousness are chemical reactions in specific parts of the brain, but many people may be unwilling to dismiss them as merely that. The more one thinks about it, the more one is forced to ask what the self really is.
From the standpoint of a creator, some researchers have focused on physicality. Mr. W of the Graduate School of Information Science and Technology said that the idea of bottom-up emergence, on which Dr. Kuniyoshi focuses, could be very useful and applicable for researchers who build software. More specifically, he commented that adversarial examples and explainability remain major issues in deep learning, and that these could be improved by incorporating perspectives from physical cognition, such as affordances, into modeling.
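To make the adversarial-example problem concrete, here is a minimal sketch of my own (not from the lecture) using a toy logistic-regression classifier: a small, deliberately chosen perturbation of the input flips the model's decision, in the spirit of the fast gradient sign method. The weights and numbers are made up for illustration.

```python
import numpy as np

# Toy logistic-regression classifier on 2-D inputs (weights are made up).
w = np.array([2.0, -3.0])
b = 0.5

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.2])
p_clean = predict_proba(x)   # confidently class 1 for this input

# Fast-gradient-sign-style perturbation: for logistic regression the
# gradient of the logit with respect to the input is simply w, so we
# step against its sign to reduce the class-1 score.
eps = 0.4
x_adv = x - eps * np.sign(w)
p_adv = predict_proba(x_adv)  # the small perturbation flips the decision
```

Each input coordinate moves by at most 0.4, yet the classification changes; this is the fragility that physically grounded representations such as affordances are hoped to mitigate.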
This year’s lectures encompassed a wide range of topics from the philosophical to the technical, and the students’ thoughts and impressions were accordingly diverse. Lastly, I would like to introduce the opinion of Mr. M from the School of Public Policy.
The professor expressed the view that there is a degree of freedom in the design of AI, and that while setting up and designing better solutions to problems, all of us should think, discuss, and decide for ourselves by asking questions such as “what ways of being human should not be taken away?”, “what should be done?”, and “in what direction do we want the technology to go?”. Mr. M said that he personally agreed on the importance of everyone thinking together, but wondered how this could be achieved in practice. Since such efforts are often difficult to coordinate across national boundaries and between governments with complex interests, he pointed out, this is an area where industry and academia can accomplish a great deal.
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)
For the 13th and final lecture, we invited Dr. Yushiku Inoue of NHK, who was involved in the development of AI Misora Hibari, which appeared in the 2019 Kohaku Uta Gassen, to talk about the project.
The development of AI Misora Hibari was triggered by a colleague of Dr. Inoue who wanted to have an AI ring the judging bell on NHK’s amateur singing contest. This inspired the idea of incorporating AI into entertainment, and the project proceeded with the aim of showing the ambiguity and versatility of AI. On December 31st, 2019, AI Misora Hibari appeared in the NHK Kohaku Uta Gassen (Red and White Song Contest), which was watched by a large audience. Its appearance met with much debate and many negative reactions, particularly from those who had not been present at its unveiling in September 2019. With this background in mind, the first half of the discussion focused on the reasons for the controversy. The negative views included the impossibility of obtaining the singer’s own consent regarding her lines and portrait rights, as well as antipathy on an emotional level. It was suggested that negative views could be mitigated through coordination and legislation, such as discussions among stakeholders, and by telling the story behind the project and conveying the creators’ enthusiasm. In the latter half, there was a debate on what kind of exhibition could promote the social acceptance of AI, with suggestions such as conveying comments from the creators’ side, visualizing comments from the audience’s side, and creating a sense of unity through duets with other artists (summary by Ms. S, School of Public Policy).
Since NHK’s release of AI Misora Hibari, much attention has focused on the public’s reaction. Referring to speculative design, for example, Mr. S of the Graduate School of Public Policy proposed a design approach that presents the possibility that “the future could be like this”: showing visitors the discussion that took place around Kohaku and asking them, “What do you think?”. Taking advantage of the online format, it would also be possible to let visitors see one another’s comments and to hold workshops on the topic.
Mr. O of the School of Public Policy believes it is not acceptable to ignore viewers’ vague feelings of dislike, or to push for social acceptance in spite of such opinions. As a public broadcaster, NHK is in a different position from media run by for-profit corporations, and can therefore take a unique stance amid the controversy. Presenting both sides of the argument can change reactions from “I kind of hate it” to “I dislike it for this reason.” He concluded that rather than suppressing these ambiguous feelings, it would be beneficial to give viewers an opportunity to think about why they disliked it and what form it should take.
Mr. W of the Graduate School of Information Science and Technology pointed out that the discussion so far seems to rest on one assumption: that AI Misora Hibari is not blasphemous, and that its social acceptance should therefore be promoted. It would not make much sense to tell those who regard it as blasphemy how wonderful AI Misora Hibari is. There may of course be some overlap, but their resistance may be stronger and deeper than that of people who merely find it hard to accept. It is therefore worth thinking about what a “good exhibition” would be for these people.
Unlike the academic analyses and perspectives of the earlier lectures, this one offered a valuable opportunity to hear the voices of people actually working in society. The broadcast itself became a topic of discussion on social networking sites, receiving both support and criticism. I believe that these discussions allowed participants to feel closer to the topic, rather than seeing it as “something from a distant world.”
(Editing/writing by Haruka Maeda and Chihiro Nishi, teaching assistants)