2021 Schedule
Lecture Report
The first lecture, moderated by Prof. Ema, was a roundtable discussion on “Thinking about a society permeated by artificial intelligence,” held as an introduction by the faculty members in charge.
First, Prof. Sakura presented the framework and results of a survey investigating whether humans perceive allegedly discriminatory decisions made via artificial intelligence (AI) as actually discriminatory. Ms. Maeda, a student in the Sakura Lab and a TA for this class, conducted the survey on her own initiative. Prof. Sakura and Ms. Maeda, who have been studying discrimination by AI, expected that humans would also feel that these allegedly discriminatory judgments were discriminatory, though the discussion offered a different perspective.
Prof. Kuniyoshi pointed out, from an engineering perspective, that many factors come into play in the phenomenon of discrimination by AI. For example, a system’s inability to make accurate judgments after its official release can be viewed as just a bug that needs to be corrected, and there are numerous possibilities for what happens in the laboratory before a product is released. Notably, in his view, the critical factor in discriminatory AI results is human intention.
Prof. Shiroyama agreed on the importance of intention; however, his premise differed from Prof. Kuniyoshi’s. He pointed out that if a bug appears in a product after its release, prior testing was likely inadequate. In other words, releasing the product despite inadequate preliminary testing could itself be interpreted as an intentional act of “leaving the bug untested”.
Prof. Sakura then mentioned that the case used here had been reported in the news as a real incident. He described the difference between being presented with a possibly discriminatory case on a survey form and hearing about it in the news. Both convey the phenomenon with some degree of detachment from its real context, yet where they detach it varies considerably. He argued that the difference is that the former eliminates the fact that there is a person (i.e., an intention) behind the decision.
Human perception is not monolithic. What one perceives as discrimination, and what one considers important in perceiving discrimination, differ from person to person. Incidentally, it is interesting to note that the intentions mentioned above are not considered essential requirements in philosophical theories of discrimination. While one purpose of this survey was to learn about such differences in perception, we also got a sense of them, not surprisingly, from the words of the professors themselves.
Finally, each professor commented on what they wanted the students to learn through the class. Prof. Kuniyoshi, after noting that the class would involve lively discussion, said that it is important for science and engineering students to think about social and ethical aspects, and that he expects active participation. He also pointed out the significance, for those outside the information field, of discussing AI based on knowledge of its technical mechanisms and the kinds of applications that are emerging; in other words, it is important to be clear about what AI can and cannot do. Prof. Sakura commented that in this day and age there are no barriers between the social sciences and the natural sciences; hence, students should make an effort to broaden their knowledge rather than writing off unfamiliar research fields as not their concern. Regarding the diverse demographics of the class, Prof. Shiroyama commented that he hopes the students will enjoy that diversity, as well as the variety of instructors, and enjoy discussing the problems that arise with the emergence of new technologies even as their underlying social causes persist; he hopes the students will observe how these issues transform over time. Finally, Prof. Ema concluded the discussion by encouraging the students to extend their discussions to their families and friends, rather than keeping them within the classroom, based on what they learn from the various themes and discussions prepared for the class.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The second lecture was given by Prof. Sakura of the Interfaculty Initiative in Information Studies. The title of his lecture was “Are robots friends or foes?”.
The two reading materials for this lecture dealt with Eastern and Western differences in attitudes toward robots and with the mother-child relationship (the paper focused on the phenomenon of co-viewing, in which mother and child look in the same direction). It was difficult at first to imagine how the papers would relate to the class. However, it was interesting to learn that there are differences in the characteristics of images, such as whether or not they assume a world outside the frame, that impact our perception of robots. During the discussion, I shared the idea that the differences between Eastern and Western ideals of robots may be influenced by actual differences between the East and the West: social aspects, such as how much robots are used in the society to which a person belongs, may be strongly involved in forming one’s ideal of robots. (Summary and impressions modified and quoted from Student A of the Graduate School of Information Science and Technology)
During the lecture, Student A felt that there might be other factors at play. Student C of the Graduate School of Public Policy said he was not convinced that the phenomenon of co-viewing, a behavior observed between human parents and their children, should be immediately applied to human-robot relationships just because it also appears in images of robots and humans in Japan. Student H of the Graduate School of Interdisciplinary Information Studies pointed out that rather than cultural differences in the acceptance of robots in general, there may be cultural differences in the form of robots as outputs: in contrast to typical robots in Japan, Boston Dynamics’ increasingly popular Atlas and Spot in the US do not have heads that could physically perform co-viewing in the first place. Others found significance in the fact that many compositions in Europe and the US face the robot head-on. Student I from the School of Interdisciplinary Information Studies interpreted this as people monitoring robots to see whether they are doing anything strange, rather than seeing them as partners.
The second part of the class discussed the conflict between the individual and public health. It was suggested that the COVID-19 measures are totalitarian and incompatible with individualism, and that now might be a great opportunity for those with ambitions to make major changes in society. This made the students realize once again that this pandemic is a critical phase for human society and human history (above summary and impressions by Student A).
The conflict between the individual and society is particularly evident in perceptions of freedom, but Student C of the School of Public Policy pointed out that anxiety about information leaks and distrust of the government can also be cited. Another barrier is the fear of entrusting personal information to unknown AI entities and robots.
In addition, Student H of the Graduate School of Interdisciplinary Information Studies thought that the balance between public welfare and the individual differs across cultures: the sense of uneasiness about public surveillance must be completely different in China, where various surveillance methods have been introduced, and in Japan.
Student I, from the School of Interdisciplinary Information Studies, thought that the fields in which AI will be used are so diverse that there is no way forward other than explaining to the public and making rules as we go along.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The third lecture was given by Prof. Yasuo Kuniyoshi of the Graduate School of Information Science and Technology. He lectured on the theme of “The Future of Artificial Intelligence and Humans and Society.”
First, he explained the basic mechanisms of neural networks and deep learning, which are varieties of AI; he then presented examples of AI-based products such as IBM Watson and discussed ethical issues and future challenges facing AI development. Concerning AI, there are two positions: “strong AI” and “weak AI.” The former holds that AI should have a genuinely human-like mind, while the latter holds that it only needs to perform limited tasks (e.g., chess) with machine learning, and whether it has a mind is irrelevant. The philosopher Searle, using the example of the Chinese room, pointed out that machines, unlike humans, do not understand their actions; combining functional elements does not result in a human-like brain and mind, and there has been some controversy about this understanding. The problems with current AI were discussed, including the fact that it learns from real-world historical data and therefore reproduces real-world sexism and racism. Moreover, it cannot respond to unexpected scenarios and may ignore human intentions, taking actions that appear insane from a human perspective. It is therefore considered important to ground the processing of input information in a physical form, an idea known as embodiment. One approach to giving robots a human-like mind is to learn from human fetal development and apply those findings to robots. (Summary by Student D, Interfaculty Initiative in Information Studies, Interdisciplinary Graduate School of Information Studies)
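To make concrete the point that AI trained on historical data reproduces the discrimination in that data, here is a minimal, self-contained sketch. It is illustrative only and not from the lecture: the “hiring” records are synthetic, the bias rates are arbitrary, and the learner is the simplest possible frequency counter.

```python
# A toy demonstration (hypothetical data) of bias reproduction:
# equally qualified candidates, but group B was historically hired
# less often. A model that merely learns P(hired | group, qualified)
# from these records reproduces the same disparity.
import random

random.seed(0)
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5          # same rate in both groups
    # Biased past decisions: qualified B candidates hired less often.
    hired = qualified and (random.random() < (0.9 if group == "A" else 0.5))
    history.append((group, qualified, hired))

def hire_rate(group, qualified):
    """Frequency-count 'learner': estimate P(hired | group, qualified)."""
    rows = [h for g, q, h in history if g == group and q == qualified]
    return sum(rows) / len(rows)

print("P(hire | A, qualified) =", round(hire_rate("A", True), 2))  # ~0.9
print("P(hire | B, qualified) =", round(hire_rate("B", True), 2))  # ~0.5
```

No matter how accurately the model fits the records, “accurate” here means faithful to the biased decisions, which is exactly the problem the lecture raised.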
In the lecture, after the explanation of AI, students commented on Prof. Kuniyoshi’s opinion that robots should also have a mind.
Student O, a graduate student in the Graduate School of Information Science and Technology, said that he now partially agrees with this opinion, but only insofar as “mind” means something that can explain the relationship between the input and output of information. After defining the scope of what a mind encompasses and comparing it with biological and animal perspectives, he stated that complete mimicry may be difficult to explain: on the one hand, such processing resembles numerical computation, while on the other hand, the “mind” may arise from fluctuations of excitation thresholds caused by the secretion of chemical substances (adrenaline, etc.).
Student O of the Graduate School of Public Policy wondered why what counts is a mind rather than common sense. She wondered whether humans want AI to be resourceful in practical society in the first place, whether a human-centered AI society is feasible even if AI has a human-like mind, and whether humanity itself is reliable in the first place. She stated that careful, responsible discussion on whether AI should have a mind should be held in the future, and she wanted to know what other students think, as it is a divisive topic.
Student M of the Graduate School of Information Science and Technology said that even if AI has a mind, an AI whose surrounding environment feeds it biased and malicious data will pose a different danger than an AI without a mind. As in the developmental process of human intelligence, the surrounding environment shapes what is learned; since an AI’s environment is easier to control, the danger of it imitating unpleasant human behavior would be higher. He said that we may not be able to say which is more dangerous, an AI without a human mind or an AI with a wrong one; still, since the ethics of the humans who develop AI are so important, some regulation may be necessary in the future.
Certainly, cultivating and building awareness of human conscience and ethics is necessary for AI development. Even though Student O of the School of Public Policy pointed out that the human mind can itself be evil, implementing such practices would help keep AI from mimicking that evil.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The fourth lecture was given by Ms. Hioki, an attorney, on the topic of “legal issues related to artificial intelligence”.
In this class, recent policy and legislative trends regarding AI and corporate practices for the social implementation of AI were introduced. She presented policy and legislative trends related to AI by region. In Europe, under the principle of “human-centered AI,” reports have been published on AI ethical principles for establishing a framework for the ethical aspects of AI, robotics, and related technologies; on a civil liability regime for AI; and on the intellectual property of AI. Accordingly, this April, the “Draft AI Regulation” was released. It classifies AI systems into four categories based on the magnitude and importance of their risk, taking into account the impact on human life and fundamental rights, and requires measures according to each category. The US has not moved to enact laws and regulations at the federal level; instead, several published documents are under consideration by individual technology and public agencies. In China, since “Made in China 2025” in 2015, the government has been promoting human resource development, strengthening basic research, and rapidly industrializing research results, aiming to become the world’s most advanced AI country by around 2030. At the same time, a certain level of consideration for AI ethics has been observed, such as the publication of the “Next Generation AI Governance Principles.” These trends, she noted, reflect the differences in technological capabilities, industrial competitiveness, and culture regarding AI in each country. In Japan, the relevant ministries and agencies have published guidelines and other documents under the “principles for a human-centered AI society”. For example, the Ministry of Internal Affairs and Communications (MIC) has issued the “AI Utilization Guidelines” for developers and businesses, and the “AI Utilization Handbook” for consumers, which presents checkpoints organized by the type of service used. In addition, the Ministry of Economy, Trade, and Industry (METI) has published the “contract guidelines for AI and data utilization” covering intellectual property and contracts during development and business implementation; among other features, these adopt a step-by-step contract format. As for other Japanese efforts, she shared that Japan is trying to shape the rules by lobbying the Organization for Economic Co-operation and Development (OECD) and the G20 and through bilateral and regional dialogues.
Next, she explained corporate practices related to the social implementation of AI. The main entities involved were summarized as the vendors who develop the technology, the providers who offer services using it (often the two are integrated), and the users of the services. Research, development, implementation, and utilization of AI technologies in services and products are promoted by these entities, and data on the use of services and products are fed back to improve the AI technologies and the services in which they are implemented. Within this series of social implementations of AI, she introduced approaches and current practices for solving issues such as the handling of rights, deterrence, responses to troubles and incidents, and the distribution of responsibility. Such approaches and practices are implemented by enacting soft law and by handling matters between parties by contract. As an example of a corporate trend, Facebook’s policy was mentioned as an initiative by a single company that asks users to provide information and make decisions. She explained a clause in which the company gives users a choice in the use of the technology for some services and uses, assuming that the data used for facial recognition functions are protected under the laws of the user’s country. At the end of the class, specific examples of effective approaches for avoiding and responding to real-world problems were discussed, anticipating the further social implementation of AI. (Writing Credit: L, Graduate School of Information Science and Technology)
Will the countermeasures work in the first place? Student M from the Graduate School of Engineering raised this question. He stated that because each country has its own way of promoting industrial development, it may be difficult to regulate a supranational company (e.g., Google) based in the US, which places particular emphasis on growth. Hence the argument for placing another actor between providers and users so that users do not have to sue providers directly.
Besides providers and users, engineers and designers could be considered important actors. Student S of the Interdisciplinary Information Studies Department believes, as an engineer, that while in an ideal world AI implementation would be grounded in ethics and cause no problems afterward, it is difficult to guarantee perpetual safety for a technology that is outgrowing our oversight. She claimed that providers need to take on some responsibility, but that end users also need to become more AI-literate.
Similarly, Student N of the Graduate School of Arts and Sciences claimed that ethical education for users is needed before trying to prevent individual problems one by one. School education alone cannot keep up with ever-evolving technology, and citizens must proactively express their opinions, since ethical issues cannot be judged by law alone. Only by educating people to judge these issues will they be able to decide whether or not to use the technology.
Even if we take a single service and discuss it, various actors are involved. Users who only “lightly” use the service are becoming more involved, and their presence naturally looms larger for the engineers as well.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The fifth lecture was given by Prof. Koizumi, who is part of the University of Tokyo’s Collaborative Community Design Lab and Research Center for Advanced Science and Technology. His lecture was titled “Considering Smart Cities and Regions”.
Students were able to deepen their understanding of the definition of a smart city, the relevant efforts by various countries, the future of smart cities, and the value of space and cities. We learned that the development of information and communication technology (ICT) in recent years has given concrete form to the concept of a “smart city.” The idea centers on AI and the Internet of Things (IoT), together with the concentration of human resources, and spans everything from improving the efficiency of government administration to the introduction of efficient energy systems. Efforts are underway in cities around the world, from New York to Lyon and Toronto (with a few setbacks), to improve urban infrastructure, utilize various kinds of big data in the city, and plan and implement projects for public-private co-creation. In Japan, there are also projects such as the development of community watch cameras, Mobility as a Service (MaaS), Toyota’s Woven City, and Kashiwa-no-ha. In general, smart cities are expected to improve the safety and convenience of citizens’ lives and to create further innovation through diverse and open cultures and people. In the latter half of the class, the concept of the rural city was introduced: a city where the places people live and work are separated; specifically, people work in the city center while living in suburbs where they can enjoy the richness of nature. Although there are concerns that virtual reality (VR) will reduce the value of the physical city, I thought the example of “Virtual Shibuya,” where VR is incorporated into an urban space as added value and utilized in the real space, was very interesting. (Writing Credit: Student Y, Public Policy)
When such efforts are underway globally, differences in political systems and cultures can affect their acceptance. Student M of the School of Public Policy pointed out that in Japan there is distrust of the government’s IT literacy on top of distrust of government itself, so a large backlash can be expected at the time of introduction. By contrast, he believes it would be possible to introduce such systems smoothly in a non-democratic country such as China. Notably, autocratic countries such as China would have an advantage in development due to the richness of the data possessed by the government.
Some pointed out that IT-based urban planning is not widespread in Japan. IT and urban planning in Japan are like “oil and water,” according to Student M of the Graduate School of Information Science and Technology. While Student M of the Graduate School of Public Policy cited distrust of the government’s IT literacy, Student M of the Graduate School of Information Science and Technology said that the Japanese themselves may have an excessive distrust of IT in the first place. He also said that most Japanese people are unfamiliar with the smart city concept, which creates an even greater sense of fear.
If IT is used in urban planning, what will its impact be? Student T of the Interdisciplinary Graduate School of Informatics voiced the view that a city’s individuality is constructed by the people who live there today together with its history, and that smart city planning would instead make each city homogeneous. He is concerned about this, and hopes that if the city itself is used as the foundation of smart city projects, material aspects of the city such as streets and roads can serve as guideposts for its development.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The sixth lecture was given by Prof. Shiroyama of the University of Tokyo Graduate Schools for Law and Politics and the School of Public Policy. His presentation covered AI and politics.
In the opening half of the lecture, regarding the politics of AI, he explained technology assessment (TA) and the institutional design and implementation of technology based on TA. He touched on an overview of TA in Japan and the issue of risk management; for example, he mentioned the lack of a clear process for determining driving conditions when defining the limited operational domains of automated vehicles, and the challenges of transition management. The participants also compared institutional designs in other countries and identified commonalities and differences among them.
In the second half of his presentation, he discussed the impact of AI on social decision-making and how to deal with it. When using AI in decision-making, it is desirable to incorporate AI based on the characteristics of both parties, such as whether humans or AI are more prudent and dispassionate; we should also be aware of emerging issues such as privacy and data bias. At the end of the class, there was a discussion about the roles of various institutions as AI governance actors and about the introduction of AI into public services. (Writing Credit: Student U, Interfaculty Initiative in Information Studies and Graduate School of Interdisciplinary Information Studies)
Student Y, from the School of Public Policy, described what AI can do in public service. She believes there is ample room for AI to intervene, even at this stage, in tasks that do not involve personal information and that merely substitute for roles previously performed by humans. On the other hand, she feels that services involving the handling of personal information should face greater scrutiny upon introduction; in particular, information connected to personal “beliefs” could become sensitive. Therefore, regulation is necessary for AI that handles personal information. Judging from what she learned in the previous lectures, she insisted that progress is being made but there is still much room for improvement in regulation.
Student T of the Graduate School of Information Science and Technology gave examples of the benefits of AI and of what should be entrusted to it; the survey results can be summarized as follows: in terms of interpersonal services, it is difficult to fully adopt AI at this point, since AI is not good at dealing with diversity.
Finally, Student T from the School of Interdisciplinary Information Studies stated that this lecture dealt with an exceptionally complex and difficult issue. In terms of risk assessment, he was astonished to find that the Ministry of Land, Infrastructure, Transport, and Tourism’s safety technology guideline, “aiming to realize a society with zero human injuries,” can, depending on interpretation, amount to saying almost nothing. The field of “AI” is so broad that it is necessary to consider the meta-question of who is responsible for governance, but only a limited number of people have the materials and methodologies to think across multiple fields of expertise. That is why he felt the key is how well we can separate the issues and think them through.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The seventh lecture was given by Prof. Nakano of the Institute of Industrial Science on the topic of the social implementation of automated driving.
This lecture dealt with the current status of technology development related to automated driving and the ethical issues associated with its social implementation. In the first half of the lecture, the participants learned that, of the five levels of automated driving, practical application is currently limited to Level 3, and not all evaluation criteria have achieved the Level 3 benchmark. For example, with low-level automated vehicles, humans must understand the characteristics of the machine before driving. Level 4 automated driving technology is expected to help solve the problem of driver shortages, for example by enabling trucks to drive in formation. In the latter half of the seminar, we discussed ethical issues related to automated driving. Notably, while the high rate of accidents involving the elderly has been pointed out as a social problem, features such as the automatic braking functions currently installed in many cars have certainly reduced the number of traffic accidents. As an extension of this discussion, it was pointed out that while the improved performance of machines, including automated driving, is expected to reduce the accident rate significantly compared to human driving, whether automated driving is socially acceptable is another matter. In addition, we learned that there are hurdles to the introduction of automated driving besides the technology itself: it is not yet clear where responsibility lies in the event of an accident involving an automated vehicle, and safety standards must be created by weighing the risks and social benefits of introducing automated driving. (Writing Summary: Student A of the Graduate School of Information Science and Technology)
Participants commented on various methods to promote the acceptance of automated driving. For example, Student M of the Interdisciplinary Graduate School of Informatics pointed out that the risk assessment and risk management introduced in the previous class may be useful in promoting acceptance: it is important to clarify what level of risk people are willing to accept. At the same time, we also need to comprehend the far-reaching, peripheral impacts of technology adoption. Automated driving may indeed lower accident rates, but people’s overreliance on it and the waste of resources such as electricity could arise as new problems.
What are the peripheral impacts for the companies that provide the services and produce the vehicles? Student K of the Graduate School of Interdisciplinary Information Studies thinks that automated driving may cost manufacturers future customers who love cars, because those customers would no longer feel that they are “driving.” He states that it is becoming difficult for manufacturers to strike a balance, as the situation is constantly shifting with the introduction of other new safety devices around the world.
Student K from the School of Public Policy suggested that public transportation in rural areas should be used as a stepping stone. He commented that if Level 4 automated driving means driving without a transfer of authority within a specific area, public transportation running on a fixed course would be optimal for tryouts. The demand for automated driving is greater in rural areas and the risk of accidents is lower; moreover, monitoring is easier, since the users themselves are local, and it is easier to assign responsibility when testing the technology.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The 8th lecture was given by Prof. Kaori Hayashi of the Interfaculty Initiative in Information Studies on the topic of realizing a gender-equal society in the AI age.
First, the class opened with an introduction to concerns surrounding AI and gender. The issues include discrimination against women by AI recruitment systems, which tends to emerge from distorted data; deepfakes used to sexually exploit women; the risk of unemployment for women engaged in low-wage work, much of it part-time; and the imposition of sexual stereotypes on women. Addressing these issues requires a multifaceted approach at multiple levels: individual, organizational, governmental, and transnational. For example, government-level actions include the “Guidelines for AI Utilization” by the MIC’s AI Network Society Promotion Council and the Cabinet Office’s “Fifth Basic Plan for Gender Equality.” Yet even with all of these efforts, the MIC guidelines in particular lack a gender perspective. In addition, the issue of masculinity, which has been embedded in the development of science over many years, remains.
Because AI reflects human society, it must also confront the stereotypes and prejudices that remain deeply rooted in the human mind. Some surveys have shown that while gender awareness is low in Japan by global standards, many people are optimistic about the use of AI. Therefore, if the use of AI is promoted while ignoring the social issues that exist at present, discrimination will be amplified and technological development will proceed without clarifying where the problems lie. Hence, a multifaceted approach involving education, media, and research will be required. (Writing Credit: Student N, School of Public Policy)
The discussion was followed by a lively Q&A session on empowerment. Student I of the Interfaculty Initiative in Information Studies and Graduate School of Interdisciplinary Information Studies raised the issue of whether to allow AI to evaluate whether a student passes or fails a test based on the student’s gender, educational background, and other factors. Since the gender aspect was not raised much in the discussion (which is itself a problem), the discussion flowed along the lines of “AI may eventually penetrate society in the future, but we will not introduce it at present”. He added that there were more ways to discuss the issue. If AI is a “mirror of our society”, one solution that corrects both worlds is to establish a rule mandating equal gender proportions in all organizations. He expressed that while there may be significant backlash, both the quality and the quantity of gender diversity should be changed, since most principles and norms (including the values surrounding journalism) are based on toxic masculinity.
Student L, from the Interfaculty Initiative in Information Studies and Interdisciplinary Information Studies, discussed the comparison with China. Empowerment is more prevalent in Japan than in China, and Student L was concerned about the current situation in China. She conveyed that the key is to face the issue of sexual discrimination directly in order to overcome the current situation in which women are discriminated against, and that supporting minorities (in this case, women) does not imply discriminating against men. Clearly stating that the problem exists and calling society’s attention to it is vital for achieving diversity in AI and in society.
Student Y, a graduate student in the School of Public Policy, felt that this topic was new and an issue that is easily overlooked in a male-dominated industry. She learned a lot from Prof. Hayashi’s explanation of how AI extracts discriminatory views of women from society. She insisted that AI sexism is formed from real human life, and that we must reflect on it in our daily lives.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
In the 9th class, Prof. Ohashi gave a lecture on the topic of competition policy perspectives on digital platforms.
In this lecture, an introduction was given to the advantages and problems currently arising from a competition policy perspective on digital platforms. To examine this, he first introduced the theory of industrial organization, a field that clarifies how large companies came into being, organized themselves, and came to dominate the economy, and an essential perspective for examining digital platforms in this lecture. First, an explanation was given as to why digital platforms came to be so large. According to Prof. Ohashi, a digital platform grows in size due to a synergistic increase in the number of users and stores (called the indirect network effect). He also pointed out that the size of a network must exceed a certain threshold for the network to survive; therefore, various activities such as predatory pricing tend to be observed in the early stages of a digital platform. He introduced the concepts of lock-in and excess inertia, using computer keyboards as an example of phenomena likely to occur in a monopolistic market with these digital platform characteristics. Such monopolies have the advantage of adding value and increasing efficiency through the linkage (aggregation) of data; yet, at the same time, they have the disadvantage that the information may be used purely for commercial purposes and may lead to invasions of privacy. Based on the above, at the end of this lecture we discussed various regulatory methods to minimize the demerits of digital platforms without destroying innovation. (Writing Credit: Student M, Graduate School of Information Science and Technology)
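The threshold behavior behind the indirect network effect can be illustrated with a toy dynamic. The sketch below is not from the lecture; it simply assumes that each side of a two-sided platform grows in proportion to the size of the other side, net of an arbitrary participation cost of 0.3, so a platform seeded below the critical mass collapses while one seeded above it snowballs.

```python
# Toy two-sided adoption dynamic (all parameters illustrative).
# users/stores are fractions of their potential populations in [0, 1].
def simulate(users, stores, steps=50):
    for _ in range(steps):
        # Each side joins in proportion to the other side's size,
        # net of an assumed fixed participation cost of 0.3.
        users = min(1.0, max(0.0, users + 0.1 * (stores - 0.3)))
        stores = min(1.0, max(0.0, stores + 0.1 * (users - 0.3)))
    return users, stores

print(simulate(0.2, 0.2))  # seeded below threshold: collapses to (0.0, 0.0)
print(simulate(0.4, 0.4))  # seeded above threshold: grows to (1.0, 1.0)
```

The unstable fixed point at 0.3 plays the role of the survival threshold mentioned above; the collapse below it is one way to see why tactics such as predatory pricing, which buy early users, tend to appear in a platform’s early stages.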
Student Y of the School of Public Policy, while acknowledging its effectiveness as a means of platform regulation, pointed out the problems with co-regulation, an approach that leaves the details of regulation to the self-regulation of the operators. For example, if a platformer and the government collude, it would be difficult for a third party to monitor or deter them, because of the centralized nature of both. “Platform and government ethics” is thus an essential condition for co-regulation to be feasible; how to guarantee that “ethics,” especially for platforms that hold personal information, is a question to be assessed in the future.
Student A, a graduate student in the Graduate School of Information Science and Technology, said that he had previously recognized many of the benefits of digital platforms, but now he was beginning to see the bad points, such as information exploitation by digital platform companies, discrimination against stores, and the imitation of products. Student A’s opinion in the discussion was that digital platforms exceeding a certain size should be publicly operated, but other members suggested that pressure from the media and checks and balances with other companies would lead digital platform companies to moderate their behavior. Others wrote that a mechanism to amplify the voice of the general public and set it against the companies, relying on neither the digital platforms nor the government, might be effective.
Finally, Student T of the Interfaculty Initiative in Information Studies and Interdisciplinary Information Studies pointed out that IT platform companies have the face of revolutionaries: they are expanding digital platforms and collecting data, and consequently increasing their control over the economy and politics (e.g., indirectly influencing election results). It is difficult to see what the problem is with a digital platform becoming large in itself, and we cannot regulate legitimate competition simply because a company has grown too big. The unity of citizens proposed in the class offers hope, but unity may not come easily to citizens who have difficulty understanding how they are being exploited.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The 12th lecture was given by Prof. Tanaka of the Research Center for Advanced Science and Technology. She explained “what AI is not good at from a language perspective.”
There are various subgenres of linguistics: some are microscopic, such as phonology, which focuses on phonemes; some are macroscopic, such as typology, which works across corpora of many languages. Structuralism in linguistics, as represented by Saussure, viewed language as a holistic system and insisted that the whole, with its organic connections, rather than individual terms, should be the starting point for elucidating the linguistic system. In particular, language has been studied within the framework of complex systems science, which targets systems whose overall behavior is difficult to predict or understand from the individual elements; several laws have been found that largely explain the openness of the lexicon and the clumping of word occurrences in a series (Zipf’s law and Taylor’s law). Ten years ago, natural language processing (NLP) was a major trend in linguistics, and the knowledge accumulated in linguistics contributed greatly to NLP at that time. However, the nature of NLP’s methods and obstacles today is a clear break from the natural language processing of a decade ago: language models built using advanced deep learning now satisfy several of these laws and are beginning to reach areas where AI has traditionally been considered unskilled. (Summary by Student T, School of Public Policy)
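To make the two laws concrete: Zipf’s law says that when word types are ranked by frequency, frequency falls off roughly as a power of rank (with an exponent near -1 for natural language), while Taylor’s law concerns how the variance of word counts scales with their mean across a text. The following is a minimal sketch, not from the lecture, of estimating the Zipf exponent for a text; “corpus.txt” is a hypothetical input file, and the whitespace tokenization is deliberately crude.

```python
# Estimate the Zipf exponent of a text: fit the slope of
# log(frequency) against log(rank) for the most frequent words.
from collections import Counter
import math

tokens = open("corpus.txt", encoding="utf-8").read().lower().split()
counts = Counter(tokens)

# (log rank, log frequency) pairs for the top-ranked words.
pairs = [(math.log(rank), math.log(freq))
         for rank, (_, freq) in enumerate(counts.most_common(1000), start=1)]

# Ordinary least-squares slope; a Zipfian corpus gives roughly -1.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
slope = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
print(f"Zipf exponent estimate: {slope:.2f}")
```

A language model whose generated text also shows an exponent near -1 (and analogous Taylor scaling) is, in this statistical sense, closer to natural language, which is precisely the question the participants debated below.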
Many participants took up the question, “If the Zipf and Taylor rules are found to be satisfied, can we say that sentences generated by AI such as deep learning models are close to natural language?” The question discussed in the group of Student T of the Graduate School of Interdisciplinary Information Studies was how to make generated sentences closer to human ones. One suggestion was to incorporate elements such as speaking style, speed, and rhythm that are coordinated with other people, a trait perhaps produced by the empathy that humans possess. Another suggestion was that evaluating mistakes would bring us closer to natural language: making AI learn from our mistakes could produce more human-like, “less perfect” sentences. One student even suggested that the necessary elements might vary depending on what kind of writing is the goal.
Student S, from the Interdisciplinary School of Informatics, developed this argument in his comments. In the entertainment field, for example, there seems to be no problem as long as the text can move people’s hearts, whereas text in which flaws matter will need to be fully verified and tested after generation. If the error rate can be minimized with such error detection included, it would be possible to almost entirely replace human writing. A further question is how the value of text will change after it is replaced: will we feel the same emotions in AI-generated texts as we do in human texts? If we cannot distinguish between the two, he points out, then the value of writing for people, and their emotional trust in it, will gradually decline.
Finally, we would like to highlight a suggestion by Student M of the Graduate School of Information Science and Technology that the black-box problem in language may be even more serious than in image recognition. Language is the primary means by which we shape our thought processes, communicate what is in our minds to the outside world, and take in information from the outside world; therefore, if something goes wrong with a language processing system, the impact may be greater than with image processing. He raised the concern that because many things about language remain unexplained and unexplainable, even to the people who use it, issues in NLP may go uncomprehended and eventually lead to irreparable damage.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)
The 11th lecture was given by Prof. Arisa Ema of the Institute for Future Initiatives at the University of Tokyo, and was titled “Challenges of AI Society and How to Draw a Future Vision.”
The lecture was divided into three parts, explaining issues in the relationship between AI and society and a vision of a future society that utilizes AI.
In the first part of the presentation, problems with AI were introduced: for example, the tendency of AI to reproduce discrimination and prejudice, and the difficulty of correcting the biases in the data behind such discrimination and prejudice. To eliminate such issues, systems need to be designed to ensure fairness and equality in society. She also mentioned that there is room for discussion on the position of AI in society’s decision-making processes, just as sports differ in whether machine judgment or human judgment is given more weight.
In the second part, she noted that there are geographical differences in perceptions and visions when science and technology are accepted by society, and that the social changes brought about by AI involve trade-offs around diversifying ways of working and living. She also insisted that it is up to those who manage tasks to make the most of machine-improved efficiency when tasks in the working environment are replaced.
In the final section of the class, we first discussed scenarios of future societies, using science fiction works and images created by various organizations as examples. We also reviewed the need to consider specifics in constructing a vision, including how it might be achieved. She pointed out that it is difficult to predict the impact of a technology before it diffuses, while it is difficult to control it once it has diffused, a tension known as the “Collingridge dilemma”. The importance of society and individuals having a vision and taking responsibility was discussed. (Writing Credit: Student U, Interfaculty Initiative in Information Studies and Graduate School of Interdisciplinary Information Studies)
Student E of the School of Public Policy referred to a tweet by the philosopher Luciano Floridi, who saw a video about Society 5.0 by the Japanese government and described it as “déjà vu”. Student E noted that no matter how advanced the AI depicted, such visions tend to incorporate existing Japanese values and stereotypes and lack the perspective of future citizens; there are many aspects that humans need to consider beforehand. He wrote that policymakers must be careful not to let AI and machines become goals in themselves rather than means to a greater good, and that ordinary users, too, must be aware of this daily from now on. This opinion is based on the professor’s argument that “what should be done by humans remains to be done by humans.”
Taking up the déjà vu mentioned by Prof. Floridi, Student S of the Interfaculty Initiative in Information Studies and Interdisciplinary Information Studies pointed out that we must consider how to create strategies that use Japan’s originality as a strength rather than simply following other leading countries. The idea of replacing tasks rather than jobs makes a lot of sense, and he believes the government should take the lead in creating guidelines and indicators for each job type and industry based on this idea. Those guidelines should then be presented to all parties concerned to accelerate the early adoption of AI.
In the context of nations, some students focused on democracy. Student M of the Graduate School of Information Science and Technology mentioned the need to take the initiative, indicating that AI is something that can be used in various ways by different people; to ensure that the benefits are shared, we must avoid a future in which a powerful few enjoy the benefits of AI exclusively. She also feels that even the current situation in Japan is dystopian, because the emphasis falls only on goals while the principles of why things are done, and why they are done in this particular way, are disregarded. That is why she has decided to keep the question of “why” in mind at all times.
(Writing Credit: Haruka Maeda and Chihiro Nishi, Teaching Assistant)