2019 Schedule
Lecture Report
Arisa Ema (Part-time Lecturer, Graduate School of Arts and Sciences / Specially Appointed Lecturer, Institute for Future Initiatives)
Osamu Sakura (Professor, Interfaculty Initiative in Information Studies)
Hideaki Shiroyama (Professor, Graduate School of Public Policy)
In the first class, Dr. Ema opened with a lecture on artificial intelligence technology and society. Comparing the market capitalization rankings of the 1990s with those of recent years shows that the industrial structure has changed dramatically. Dr. Ema pointed out that Japan has lagged in flexibly tackling new interdisciplinary and cross-field research, because much of its research has been conducted within the traditional academic system.
A major application of artificial intelligence technology is image recognition, but it has recently become clear that learned image recognition systems can be "tricked." Other problems have already emerged as machine learning permeates society, such as the Microsoft "Tay" incident, Google's "gorilla problem," and deepfakes, which enable the falsification of videos. These cases involve problems both on the algorithm side and with the quality of the data on which the algorithm is trained. Dealing with them requires collaboration among industry, academia, government, and the public, drawing not only on technology but also on knowledge from the humanities and social sciences and on policy.
In response to these issues, a roundtable discussion was held. Dr. Shiroyama commented on the necessity of "co-evolution" that includes institutional and legal aspects, without leaning toward technological determinism. "Framing" is also important, that is, how to position a technology that could be used in many different situations. Dr. Sakura, drawing on history, commented on how little the human side changes: even before machine learning, photo collages and fabrication existed. While paying attention to the characteristics of new technologies, it is important to distinguish which problems are genuinely new and which are not. Technology is like a mirror that reflects our humanity, and in the end we have to face our own problems.
Another point of discussion was cultural differences in robotics and artificial intelligence technology, especially what is unique to Japan. While Dr. Sakura cautioned against easy generalizations, he pointed out that the traditional "animism" of the Japanese may be deeply involved. For example, the "tripod" relationship between humans and machines, typified by the "weak robots" of Prof. Michio Okada of Toyohashi University of Technology, may be unique to Japan. Dr. Shiroyama commented that no matter what new artificial intelligence and robotics technologies are developed, attitudes toward technology reflect the philosophies inherent in each culture. In addition to cultural differences at the national level, cultural differences between industries also matter: currently, there is a need to bridge the gap between the culture peculiar to the IT industry and the decision-making processes of the medical and transportation fields, which demand a high level of safety.
Dr. Ema concluded by saying that she hoped the class would help students think about the perspectives and methodologies needed in a society where cooperation among people with different standpoints, spanning individual values, industry, academia, and the state, is becoming ever more important.
(Editing/Writing: Takuya Mizukami, teaching assistant)
The second class was taught by Prof. Yasuo Kuniyoshi of the Graduate School of Information Science and Technology. Under the title "Future of AI, Human, and Society," he lectured on the technical and social problems of machine learning from an engineering perspective.
Machine learning is at the center of artificial intelligence today, with deep learning at its core. Examples of its problems include adversarial examples, which distort the results of deep-learning-based image recognition by adding noise invisible to the human eye, and the difficulty of reward design in reinforcement learning. These are mainly technical problems, but they have also produced social impacts, such as Google Image Search showing only white women when searching for "grandma."
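To make the adversarial-example problem concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way of generating such imperceptible noise. The lecture does not name a specific attack, so the method, the `model` classifier, and the parameter values are illustrative assumptions.

```python
# Minimal FGSM sketch (illustrative; the lecture does not name a specific
# attack). Given a trained classifier, it adds a small, human-imperceptible
# perturbation in the direction that most increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image`, perturbed by at most
    +/-epsilon per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
    loss.backward()                              # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixel values valid
```

Because the perturbation is bounded by a small epsilon per pixel, the altered image looks unchanged to a human viewer, yet it can flip the classifier's prediction, which is exactly the failure mode described above.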
Based on the above, Dr. Kuniyoshi pointed out that modern artificial intelligence lacks the ability to cope with the unexpected, as well as the so-called human mind (common sense, empathy, ethics, etc.), and argued that this is precisely why we should "dare" to give robots and artificial intelligence a human-like mind. Achieving this requires combining the approach of describing human knowledge with the approach of understanding and implementing the roots of human intelligence, and Dr. Kuniyoshi himself is developing robots whose behavior emerges from their embodiment rather than from explicitly specified actions.
The discussion in the second half focused on the pros and cons of the claim, made in the first half, that artificial intelligence and robots should have human-like minds. There was an opinion that giving artificial intelligence and robots a mind and intellectual abilities equivalent to humans' would improve the performance of automated driving systems and contribute to the treatment of intractable diseases in medicine (O. S., first-year student, School of Public Policy). On the other hand, there was an opinion that a system could be made to respond appropriately to the workings of the human mind, and thus function in a human-like way, without actually understanding the mind (F. R., first-year student, Graduate School of Engineering). It was also suggested that an artificial intelligence with a "human mind" might make the same mistakes as humans, or might fail to achieve its original purpose precisely because it has a "mind" (e.g., it might dislike the work it is given) (T. L., first-year student, Graduate School of Public Policy).
We must also devote our efforts to this fundamental debate in artificial intelligence research, because it is not necessarily the case that society will be better off if computers have the same mental abilities as humans.
(Editing/Writing: Takuya Mizukami and Haruka Maeda, teaching assistants)
The third class was taught by Professor Shigeto Yonemura of the Graduate School of Law and Politics, who lectured on legal issues related to artificial intelligence. Specifically, he discussed liability for accidents and damages arising from the use of artificial intelligence from the standpoint of law.
In Japan, general tort liability is stipulated in Article 709 of the Civil Code. It is characterized by the requirement of "negligence," so the presence or absence of fault in some sense is an important factor in determining liability. In addition, under the Product Liability Act there are three types of product "defect": manufacturing defects, design defects, and defects in instructions and warnings. For either type of liability, when the liability of multiple parties is in question, the liability of each party is in principle judged independently.
According to Professor Yonemura, in the case of artificial intelligence, if the design assumes that a human will make a further judgment afterward, a malfunction does not immediately amount to a "defect," and manufacturer liability under the Product Liability Act is unlikely to be established. However, if the operation of the artificial intelligence is extremely dangerous, or if its judgments are extremely inappropriate and risk misleading the user, a defect may be recognized. Even if a defect is affirmed, liability may be denied under the "development risk" defense of Article 4 of the Product Liability Act, which applies when the defect could not have been recognized with the scientific knowledge available at the time of manufacture; in Japan, however, this defense is rarely accepted. If product liability is not established, the question becomes whether the user of the device incorporating the artificial intelligence is liable.
Lastly, Professor Yonemura touched on the "black box" nature often cited as peculiar to artificial intelligence. According to him, this characteristic also appears in damage compensation cases such as medical accidents and nuclear accidents, so in terms of the need for accident countermeasures, artificial intelligence is not unique.
In the latter half of the class and the discussion that followed, various opinions were exchanged about the nature of responsibility for the risks of artificial intelligence technology and how to deal with those risks. Regarding responsibility, some felt uncomfortable holding manufacturers of highly black-boxed artificial intelligence liable on the same level as in medical accidents or nuclear power plant damage, and there was an opinion that R&D activities would be suppressed if compensation could be claimed for unforeseeable damage (Y. M., second-year student, Graduate School of Public Policy). As ways to deal with the risks posed by artificial intelligence technology, it was suggested that a regulatory committee be set up for each topic so that the general public can delegate decisions to it (Y. T., first-year student, Graduate School of Interdisciplinary Information Studies), and that laws be developed through demonstration experiments that go beyond scenario planning (G. L., first-year student, School of Public Policy).
(Editing/Writing: Takuya Mizukami and Haruka Maeda, teaching assistants)
The fourth class was taught by Prof. Junpei Komiyama of the Institute of Industrial Science. He talked about fairness in decision making using machine learning.
According to Dr. Komiyama, questions of fairness arise in situations involving human beings, such as university entrance exams and employment. What matters here is that our perception of the world is linked to "sensitive attributes" such as gender and religion. When a system learns from data carrying such associations, it also learns the human perceptions inherent in the data. This problem can occur even if the algorithm contains no explicit treatment of sensitive attributes. For example, when word representations are learned from news articles, some words (e.g., occupations) are known to become associated with gender or race. Consequently, even a decision that does not explicitly use sensitive attributes may produce the same outcome as one based on them: basing a loan decision on information such as occupation may yield decisions extremely unfavorable to a particular race or gender. The more data we have, the more apparent this problem becomes.
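As a concrete illustration of the word-association point, the following sketch measures how strongly occupation words align with a gender direction in a learned embedding space. The tiny hand-written vectors are hypothetical stand-ins; real analyses use embeddings trained on large corpora such as news text.

```python
# Sketch of measuring gender association in word embeddings. The 3-d vectors
# below are hypothetical; real studies use embeddings learned from corpora.
import numpy as np

emb = {
    "he":       np.array([ 1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([ 0.7, 0.5, 0.2]),
    "nurse":    np.array([-0.6, 0.6, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Project occupation words onto the "he minus she" direction: a positive
# score means the word sits closer to "he", a negative score closer to "she".
gender_axis = emb["he"] - emb["she"]
for word in ("engineer", "nurse"):
    print(word, round(cosine(emb[word], gender_axis), 3))
```

A decision rule that consumes such embeddings, or any feature correlated with them, can therefore reproduce the association even though no sensitive attribute appears in its inputs.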
Decision making can be biased whether it is done by machine learning or by humans. According to Dr. Komiyama, statistical machine learning can eliminate bias with respect to an explicitly defined criterion, and in this sense it may even be "fairer" than humans, who tend to rely on intuition. A challenge for the future is therefore to design mechanisms that define the "fairness criteria" by which statistical machine learning is to be made fair. There is no clear academic answer as to what the fairness criterion should be; social rules, such as shared social understanding and the legal system, need to provide it.
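To show what one candidate criterion looks like in practice, the sketch below checks demographic parity: whether positive decisions are issued at the same rate across groups defined by a sensitive attribute. This is only one of several competing definitions, and choosing among them is exactly the social question raised above; the decisions and group labels here are hypothetical.

```python
# Check of demographic parity, one possible "fairness criterion".
# The decisions and group labels are hypothetical illustration data.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    rate_0 = decisions[groups == 0].mean()
    rate_1 = decisions[groups == 1].mean()
    return abs(rate_0 - rate_1)

decisions = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = loan approved
groups    = [0, 0, 0, 0, 1, 1, 1, 1]   # sensitive attribute (two groups)
print(demographic_parity_gap(decisions, groups))  # 0.5: a large disparity
```

Other criteria (for example, requiring equal error rates across groups) can conflict with this one, which is why the choice has to come from social agreement rather than from the mathematics alone.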
In the latter half, we discussed the circumstances under which decision making by artificial intelligence is acceptable, and the participating students' positions varied. For example, H. C. from the Graduate School of Information Science and Technology argued that the criterion should be the mental suffering of the person being evaluated: if a decision made by humans risks causing mental distress, there may be merit in having artificial intelligence take it over. On the other hand, there was the opinion that leaving such decision making to artificial intelligence would lead to the neglect of serious problems that humans ought to face, so even where there is merit (regardless of fairness), humans should decide (S. Y., M1, Graduate School of Interdisciplinary Information Studies). K. K. from the Graduate School of Interdisciplinary Information Studies suggested that the same information might be more convincing if conveyed by a humanoid robot rather than simply displayed on a computer screen, pointing out that how decisions by artificial intelligence are conveyed also matters.
In order for artificial intelligence decision making to permeate society, we will have to tackle not only the technological aspects, but also the social side of accepting it.
(Editing/Writing: Takuya Mizukami and Haruka Maeda, teaching assistants)
The fifth class was taught by Professor Osamu Sakura of the Interfaculty Initiative in Information Studies, who spoke about the relationship between artificial intelligence and society from the standpoint of the social studies of science and technology.
Technologies that are now in wide use, such as the telephone and radio, did not have their uses determined at the outset of development; their uses took shape in the process of adoption by large numbers of people. This way of thinking, in which society determines how a technology is used, is called the social shaping of technology. For example, the advent of the washing machine made the washing itself easier, but it also increased how often laundry was done (along with the drying, folding, and other steps that go with it), which in turn increased the total workload. Technology is a kind of "ecosystem," and its changes are difficult to predict.
From Western automata and Japanese karakuri dolls, we can see that humankind has long pursued technologies resembling artificial intelligence and robots. Science fiction (SF) works also tell us what exactly people sought in robots and what they found in them. Dr. Sakura said that people "see what they want to see there," and the current "view of artificial intelligence" may have been formed in the same way.
Finally, Dr. Sakura noted that values emphasizing the dichotomy between humans and nature are typical of the West (e.g., deriving from Christianity), and suggested that "Eastern" values may be meaningful for discourses on artificial intelligence such as the Singularity. Going forward, Dr. Sakura will continue to explore Eastern values related to artificial intelligence, for example by comparing the joint gaze of mothers and children in ukiyo-e prints with the gaze of figures in European religious paintings.
In the latter half of the session, students discussed how cultural differences should be taken into account when considering the relationship between artificial intelligence, robots, and society. Many students agreed on the importance of doing so. For example, as T. K. (M1, Graduate School of Engineering) pointed out, technologies born in countries with different cultures may not be directly transferable, and in some cultures may not be accepted at all.
On the other hand, we also need to pay attention to what exactly the "culture" we invoke consists of. For example, as Y. T. (M1, School of Interdisciplinary Information Studies) pointed out, the West and Japan of the past may have had completely different perspectives from the West and Japan of today, and in this sense, how society changed in the past may be instructive. It is often said that Japan's values toward artificial intelligence and robots are unique, but as H. T. (M2, School of Public Policy) pointed out, it is important to consider carefully whether this stems from Japan's ancient culture or from the relatively recent historical background of "defeat" and subsequent "pacifism."
(Editing/Writing: Takuya Mizukami and Haruka Maeda, teaching assistants)
The sixth class was taught by Professor Hideaki Shiroyama of the Graduate School of Law and Politics and the Graduate School of Public Policy, who lectured on the themes of artificial intelligence and politics. Specifically, he discussed how to design institutions related to artificial intelligence and how to handle political decision making using artificial intelligence.
Technology assessment (TA) is the analysis and evaluation of the impact of scientific and technological developments on society, and the practices developed there for organizing social impacts, designing institutions, and promoting knowledge exchange can be applied to artificial intelligence technologies. In Japan, TA has not been institutionalized, but TA-like activities have been conducted in the Diet, the executive branch, the Science Council of Japan, and research and development institutions. The Fifth Science and Technology Basic Plan also refers to the ethical and legal issues of science and technology and to TA. With regard to artificial intelligence in Japan, the Cabinet Office's "Advisory Board on Artificial Intelligence and Human Society" and study groups of various ministries serve as venues for TA activities, while research and development bodies such as the RIKEN Center for Advanced Intelligence Project and JST's Research Institute of Science and Technology for Society (RISTEX) are also engaged in them. Specific issues in risk assessment include risk categorization, framing decisions in assessments, and feedback into policy.
In the second half of the session, the impact of artificial intelligence on social decision making was examined. The key question is whether decision making by artificial intelligence poses genuinely new challenges, since unexpected situations can arise even when decision making is delegated to humans. Professor Shiroyama suggested the possibility of dividing roles across the phases of social decision making. For example, the "ruthlessness" of artificial intelligence is often discussed in military and nursing-care contexts, yet some tasks are better performed by an emotionless artificial intelligence. Actually using artificial intelligence for decision making requires dealing with embedded value judgments, biases, and liability issues. For concrete risk management, it is important to weigh reductions in total risk against the new risks introduced, and to establish arrangements that allow the experiments necessary for risk assessment.
In the student discussion in the latter half, we talked about the future division of roles between humans and artificial intelligence technology: what kind of discussions should be held, in what forums, and where the results should be fed back. As C. S. of the School of Public Policy pointed out, for example, simple and routine tasks in fields such as accounting, industry, and agriculture may be entrusted to artificial intelligence, while in areas that require new ideas, such as technology development, pharmaceuticals, and inventions, and in fields that involve human emotions, such as customer service, childcare, and communication, there remains the possibility that only humans can do the work. As a concrete example, the students discussed whether artificial intelligence should be allowed to set tax rates. Under the current system, these decisions are made by the Diet, and since only a minority of the people are familiar with the process, some argued that decision making by artificial intelligence would make no difference. Others countered that what matters is that the public consents to a system in which elected Diet members debate and decide (T. K., M1, Graduate School of Engineering). As for determining the actual division of roles, it was pointed out that because the needs of experts and citizens, and the associated risks, differ by field, discussions should first be held field by field, in collaboration between those who design institutions and those who conduct research and development (K. K., D3, Graduate School of Interdisciplinary Information Studies).
(Editing/Writing: Takuya Mizukami and Haruka Maeda, teaching assistants)
The seventh class was taught by Prof. Tatsuya Harada of the Graduate School of Information Science and Technology, who lectured on how machine learning understands the real world, using image recognition as an example, under the theme "Understanding the Real World with Machine Learning."
With the development of machine learning methods such as deep neural networks, it has become possible to recognize dog breeds in images, a task difficult even for humans. One example is visual question answering, in which a machine is given an image and a natural-language question about it and outputs a natural-language answer. Although reasonably good accuracy has been achieved on this task, issues remain, and they stem from differences in how humans and machines perceive the world. Humans estimate three-dimensional shape from two-dimensional information when looking at the real world, and the development of differentiable renderers has made it easier for computers to acquire this ability to estimate three-dimensional shape. Another problem is the creation of training data. For example, because real data can be read in multiple ways, the labels given by humans are ambiguous, which can have negative consequences when a computer learns from them. There are several ways to improve learning accuracy from a limited amount of training data, including augmenting the training data by mixing the classes to be distinguished in varying proportions. An important application of image recognition is the medical field, where it is expected to assist in image-based diagnosis, which is difficult even for humans (summary: T. K., M1, Graduate School of Engineering).
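The class-mixing augmentation mentioned above exists in published form as mixup and between-class learning; the lecture does not specify the exact method, so the following sketch is an illustrative assumption rather than Prof. Harada's own implementation.

```python
# Sketch of mixing-based data augmentation (in the spirit of mixup /
# between-class learning; the exact method used in the lecture is unstated).
# New training pairs are made by blending two examples and their labels.
import numpy as np

def mix_examples(x1, y1, x2, y2, alpha=0.2):
    """Blend two (input, one-hot label) pairs with a Beta-sampled ratio."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# Hypothetical usage: two toy "images" with one-hot labels for 3 classes.
x_dog, y_dog = np.random.rand(32, 32), np.array([1.0, 0.0, 0.0])
x_cat, y_cat = np.random.rand(32, 32), np.array([0.0, 1.0, 0.0])
x_new, y_new = mix_examples(x_dog, y_dog, x_cat, y_cat)
```

Because the mixed label reflects the mixing ratio, the model is trained on smooth intermediate cases rather than on hard, possibly ambiguous labels, which helps when labeled data are scarce.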
In the latter half of the session, students discussed what algorithms, theories, and concepts would be needed to give computers the human-like ability to generalize from small amounts of data. Students suggested, for example, giving the computer information from different domains, such as captions, so that it can learn more efficiently, and using the learned model to move back and forth between domains (Y. H., D3, computer science and engineering). On the other hand, there was an opinion that it would be difficult for a machine to have such abilities at a human level: when humans answer a question, we shape our answer by estimating the level of abstraction or concreteness the other person requires, and this rests on an emotional capacity, such as concern for others, that can be considered uniquely human (N. K., M2, Graduate School of Public Policy). In the lecture, Prof. Harada also mentioned methods of learning that incorporate human preconceptions, but he pointed out that the real world contains both situations that demand rationality free of emotions and preconceptions and situations that do not, and that we need to consider the boundary between them when applying machine learning.
(Editing/Writing: Takuya Mizukami and Haruka Maeda, teaching assistants)
The eighth class was given by manga artist Kyuri Yamada, known for the work "AI no Idenshi (AI Gene)," and the theme of the lecture was "How to find the future, as practiced by an SF manga artist." He said that he thinks about the future through his own experiences and through various kinds of information, such as internet news. Manga artists cannot sell their works without gaining the understanding and sympathy of the public, so they need to include elements that speak to the values, desires, and dreams of ordinary people living in today's world. To portray a world with values distant from ours, he works under the constraint that he must carefully explain why people hold such values, so as to win the reader's sympathy. There are two ways of clearing this constraint and attracting sympathy for a work. One is to cast new technology as the "enemy," as in the dystopian format: the bad aspects of a new technology or system are emphasized, people with old-fashioned values resist them, and readers of today can sympathize with that resistance. The other is to depict things "half a step ahead," premised on today's society and sentiment. An example of the latter often praised in Mr. Yamada's manga is the depiction of a car that is not covered by insurance if its automated driving is switched off and it is driven manually (summary: H. K., M1, School of Public Policy).
In the second half of the class, the students were asked to think, in story form, about "things that are considered good (permitted) now but will eventually become bad (not permitted)." In the short time available, the students seriously discussed what kind of story could be set in a world with surrogate-mother robots or a perfect translation program, for example, and received high praise from Mr. Yamada. The discussion began with individuals sharing ideas in small groups, but as T. T. of the School of Public Policy noted, it could be deepened and broadened by connecting the shared ideas to SF works that deal with similar themes. In this sense, when thinking about and discussing topics related to artificial intelligence, drawing on the many existing SF works would make the discussion more interesting and fruitful. As G. L. (M1, School of Public Policy) pointed out, the ability to construct concrete scenarios that can win sympathy, as required when drawing SF manga, is also important in public policy and business. Moreover, SF manga, which depend on public sympathy, reflect the impressions and thoughts that people consciously or unconsciously hold about the future and new technologies, so analyzing such works may reveal our expectations of, and concerns about, artificial intelligence (N. K., M2, School of Public Policy).
(Editing/Writing: Takuya Mizukami, teaching assistant)
The ninth class was taught by Dr. Ken Imai of the Graduate School of Medicine, who lectured on the application of artificial intelligence technology from the standpoint of medical informatics. The field has aimed to make big data usable and to develop and clinically apply medical artificial intelligence systems. The history of medical applications of artificial intelligence began in the 1970s with systems such as antibiotic recommendation and diagnostic support for internal medicine. Artificial intelligence then declined for a while, and the era of big data began in the 2000s; with the advent of deep learning in the 2010s, medical applications of artificial intelligence began in earnest. At present, such applications are being pursued in every area, including diagnostic imaging.
The challenges in applying artificial intelligence technology to medicine are that the scope of knowledge covered by deep learning is very narrow and that it lacks explanatory power. Medical knowledge also needs to be available in a form that machines can process. In response, there is a movement to combine ontology-based knowledge representation with deep learning to overcome these issues and to promote medical applications of artificial intelligence in a variety of settings. Discussion of the relationship between medicine and artificial intelligence still centers mostly on diagnostic imaging, and it will need to be deepened in order to build social consensus (summary: S. Y., M1, School of Interdisciplinary Information Studies).
In the student discussion in the second half of the class, we talked about the future division of roles between humans and artificial intelligence: if medical artificial intelligence that outperforms human doctors appears, how would the role of doctors change? For example, as O. N. from the School of Interdisciplinary Information Studies said, clinicians might take on a stronger role as counselors, ensuring humane medical care that incorporates patients' views through dialogue. In addition, the analysis of vast amounts of data using artificial intelligence will bring new insights to basic medical research. Even so, the "problem awareness" that initiates research must come from humans, and the energies of research physicians should be invested in identifying problems and thinking about where the technology can be applied (O. N., M1, School of Interdisciplinary Information Studies).
One of the many comments in the discussion was the idea of operating medical artificial intelligence as a "second opinion" supplementing human doctors. However, since a second opinion is meaningful only if each doctor diagnoses after making their best effort, having artificial intelligence diagnose after a human doctor has already performed a conventional examination risks duplicating effort. In addition, knowledge of artificial intelligence will be needed to make responsible judgments about the results of its diagnoses (H. T., M2, School of Public Policy).
Considering that medical practice requires a relationship of trust between humans, difficulties stand in the way of having medical artificial intelligence replace human doctors or serve as a "second opinion." However, streamlining doctors' work through artificial intelligence would allow doctors to focus on their relationships with patients and provide more effective treatment. Seen in this light, the development of artificial intelligence is not about completely replacing doctors but about improving the efficiency and quality of healthcare (G. L., M1, School of Public Policy).
The 10th class was taught by Prof. Kaori Karasawa of the Graduate School of Humanities and Sociology, and the theme of the lecture was “On Reading the Mind of Artificial Intelligence.” Specifically, based on the findings of social psychology, she examined why we perceive artificial intelligence (and robots, etc.) as if they have a “mind.”
According to a study known as the "Mind Survey," the perception of mental capacities consists of two dimensions, experience (the "feeling" mind) and agency (the "doing" mind), and these two dimensions can be used to map the "mind" that humans find in various entities. These dimensions of perceived "mind" are also related to moral judgments and actions, such as protecting, and assigning responsibility to, those perceived to have such a "mind."
Next, Prof. Karasawa discussed what happens when humans find a "mind" in artificial intelligence. Perceiving a "mind" in an object that has none of its own, such as artificial intelligence, has much in common with the phenomenon of anthropomorphism. Robots with human-like features (not only visual features such as facial elements, but also behavioral features such as the magnitude of uncertainty) are easily anthropomorphized, and there are many practical examples, such as Paro in elderly-care facilities. Numerous empirical studies have shown that the rules of judgment and behavior we apply to humans are also applied to robots in which we perceive a "mind." In other words, artificial intelligence and robots are not mere objects: they can be recognized as social agents that "have something like a mind" and can become partners in social interaction (summary: T. Y., M1, School of Public Policy).
In the student discussion in the second half of the class, building on the first half, we discussed the need to develop robots and artificial intelligence that facilitate mind perception. As C. S. from the School of Public Policy pointed out, there will be areas where such robots should be introduced and areas where they should not. For example, it would be inappropriate to use robots that promote mind perception in military situations that involve their destruction. On the other hand, mind perception is needed in areas such as medical care and the service industries, where communication with people is essential and human needs must be met. Even there, however, we must weigh the risk of undermining the human-to-human communication that would otherwise take place (C. S., M1, School of Public Policy).
Given the limits of what robots and artificial intelligence can do, humans will still desire communication with humans, and communication with robots and artificial intelligence as they currently stand can do no more than partially ease that loneliness. It is therefore necessary, while conducting research on robots and artificial intelligence, to examine what fundamentally distinguishes communication with them from communication with humans (K. K., D3, School of Interdisciplinary Information Studies). Furthermore, considering that people have come to seek more approval with the advent of social networking services, it is possible that we will become even more "individualized" as communication with robots spreads. Rethinking the basic question of "what do we want to do with artificial intelligence?" may therefore reduce or avoid the negative consequences artificial intelligence can cause in society (A. E., M1, Graduate School of Interdisciplinary Information Studies).
(Editing/Writing: Takuya Mizukami, teaching assistant)
The 11th class was taught by Dr. Daiji Kawaguchi of the Graduate School of Public Policy and the Graduate School of Economics, who lectured on the impact of technology adoption on the labor market from the perspective of economics.
Dr. Kawaguchi first introduced the well-known analysis by Frey and Osborne, and then NRI's analysis based on it, which estimated that 49% of Japan's working population could be replaced by artificial intelligence and robots. However, Frey and Osborne's analysis considers the impact of artificial intelligence on employment occupation by occupation, without assuming effects on other occupations. Labor substitution is therefore not necessarily intense: the introduction of a new technology can raise productivity and thereby increase the demand for labor, or create synergies that raise labor demand in other industries. Moreover, the existence of a technology does not mean it will be adopted; it is adopted only when the cost of adoption, such as the price and maintenance cost of the machines, falls below the wages of the workers it would replace. It was also pointed out that it is superficial to read a declining employment rate amid growing robot adoption as a sign that labor substitution is progressing. Since the employment rate is the working population divided by the total population, one can equally hypothesize that the population is growing faster than the working population, so a declining rate does not by itself establish a negative impact on the labor market (summary: M. K., M1, School of Public Policy).
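The employment-rate point can be shown with a one-line calculation; the figures below are hypothetical and serve only to illustrate the arithmetic.

```python
# Hypothetical figures: the employment rate (working population / population)
# can fall even while employment grows, if the population grows faster.
working_before, population_before = 60.0, 100.0   # millions
working_after,  population_after  = 63.0, 110.0   # +5% employment, +10% pop.

print(working_before / population_before)  # 0.600
print(working_after / population_after)    # ~0.573: the rate fell anyway
```

Here employment actually rose, yet the measured employment rate declined, which is why the rate alone cannot establish that robots are displacing workers.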
In the student discussion in the second half of the class, we talked about how local and national governments should respond to the impact of the introduction of new technologies, given that the impact varies depending on the actors involved.
In his lecture, Dr. Kawaguchi mentioned that there are differences between Japan and the US in the way artificial intelligence technology is introduced. When considering the impact of introducing artificial intelligence technology and taking countermeasures, it is important for each country to formulate a strategy that fully considers factors such as its industrial structure and labor market conditions (S. X., M1, Graduate School of Public Policy). In this light, Japan can treat the introduction of artificial intelligence technology as a response to its declining birthrate and aging population. Specific options for national and local governments include, for example, expanding the capabilities of the elderly, increasing the labor productivity of the young, and increasing the country's financial resources (K. H., M1, School of Interdisciplinary Information Studies). On the other hand, as new technologies enter daily life, the abilities required of workers may change rapidly, and workers may not be able to maintain productivity with their traditional skill sets. It will be necessary to review the conventional education system, from primary and secondary education through university, in order to develop the qualities and abilities suited to a society facing the so-called Fourth Industrial Revolution (L. N., M1, School of Public Policy).
(Editing/Writing: Takuya Mizukami, teaching assistant)
The 12th class was taught by Professor Noriyuki Yanagawa of the Graduate School of Economics, who talked about two issues surrounding artificial intelligence technology in Japan: the challenge of social implementation and the challenge of human resource development.
As for social implementation, there are issues with the policy-making process and with the degree of government intervention. Regarding the former, no single ministry is in charge of artificial intelligence, and policies are decided bottom-up by each ministry. As a result, discussions within ministries are followed by repeated cross-ministry discussions, which lacks agility and makes it difficult to present an overall vision; on the other hand, it makes it possible to create policies that reflect conditions on the ground. Regarding the latter, the division of roles between the private sector and the government has not been settled and is still under discussion: some say the government should approve rules proposed by the private sector, while others say the government itself must hold the rules in order to take part in international discussions among states.
As for human resource development, although there is a national plan to develop 250,000 artificial intelligence human resources, the problem is that "artificial intelligence human resources" has no agreed definition. Not everyone needs to become a data scientist, but people who can maintain the training data for machine learning are expected to be in demand. Prof. Yanagawa's view was that the education system itself will not shift drastically toward artificial intelligence, because maintaining machine-learning training data requires expertise in each individual field (summary: T. K., M1, Graduate School of Engineering).
In the second half of the class, we discussed how far the government should intervene as artificial intelligence technology spreads. As T. L. of the School of Public Policy pointed out, when rules are made bottom-up, questions arise as to whether they are really appropriate and whether the private sector will abide by them. To avoid this, government-led rule-making is a valid option. However, there is also concern about whether non-specialists unfamiliar with the technology can create appropriate rules. This problem might be eased to some extent by hiring information-technology graduates into government or by appointing artificial intelligence personnel from the private sector (T. L., M1, Graduate School of Public Policy).
For the development of artificial intelligence human resources, one option is to have information science faculty cooperate in teaching artificial intelligence classes in other departments, but this raises problems of adjusting the difficulty of the classes and of faculty shortages. Here, online courses such as Coursera's machine learning course and Aidemy's programming courses could be used (M. H., M2, Graduate School of Interdisciplinary Information Studies). However, as earlier classes have shown, it is also important to be able to avoid inappropriate uses of artificial intelligence technology and to respond appropriately when problems arise. In this sense, "artificial intelligence human resource development" should aim not only at surface-level skills such as programming and using machine learning libraries, but also at understanding how artificial intelligence works and what the underlying data mean (O. N., M1, Graduate School of Interdisciplinary Information Studies).
(Editing/Writing: Takuya Mizukami, teaching assistant)