2017 Schedule
Lecture Report
The lecturer for the first class was Prof. Kuniyoshi, who is engaged in research on the emergence and development of cognition based on physicality, and on humanoid robots. First, he explained the mechanism of deep learning, an artificial intelligence technology. In deep learning for object recognition, for example, a large number of pairs (e.g., one million) of photos prepared by people and the names of the objects in them are used to train the machine. When the machine is then shown a photo it has not been trained on, it answers with the name of the object in the photo. Dramatic improvements in performance on this object recognition task triggered the current artificial intelligence boom. However, no matter how much the performance of the processing improves, without input data the machine will not produce the expected results. We therefore need machines that can obtain information from the real world and interact with it.
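The train-then-predict pattern described here can be sketched in miniature. In the following toy Python example, a simple nearest-centroid classifier stands in for a deep network; all data, feature vectors, and names are illustrative, not Prof. Kuniyoshi's actual system:

```python
# Toy illustration of supervised object recognition:
# train on (features, label) pairs, then predict labels for unseen inputs.
# A real system would train a deep neural network on raw pixels; here a
# nearest-centroid classifier stands in for the learned model.

def train(pairs):
    """Average the feature vectors seen for each label (the 'training')."""
    sums, counts = {}, {}
    for features, label in pairs:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Answer with the label whose centroid is closest to the input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# 'Photos' reduced to 2-D feature vectors for the sketch.
training_pairs = [
    ([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "dog"), ([0.2, 0.8], "dog"),
]
model = train(training_pairs)
print(predict(model, [0.85, 0.15]))  # an untrained 'photo' -> "cat"
```

The point of the sketch is the shape of the pipeline, not the model: whatever the learning algorithm, without input data from the real world there is nothing to train on or to predict from.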
Prof. Kuniyoshi described his own research as trying “to understand how human intelligence interacts with the real world during development and how it is learned by the brain and nervous system. We integrate medical data on human fetuses to create a body model on a computer and let it move spontaneously in an intrauterine environment. By doing so, we are studying how cognition is acquired.” People have historically created their own ethics, values, and personalities. He argued that in the future, for artificial intelligence to make judgments that win the conviction and trust of humans, “robots and artificial intelligence will have to have a mind like humans do.”
After the lecture, Prof. Sakura, Prof. Shiroyama, and Prof. Ema, the co-instructor of this class, held a panel discussion. In response to Prof. Sakura’s question about what a “robot with a human-like mind” means, Prof. Kuniyoshi said that “mind” has gradations, such as between animals and humans and across individual differences and individuality, and that it is very difficult to implement in a machine; he would like to discuss the issue with people from all fields. Prof. Shiroyama pointed out that it is the law that draws lines in such continuous situations; for example, the law draws a line between what is abortion and what is not. Moreover, new values and laws can be created through robots and artificial intelligence, and it is important for society as a whole to work on drawing these lines as technology and society interact with each other.
Finally, they all pointed to the importance of dialogue as a viewpoint that they wanted the students to learn in this class. Prof. Sakura mentioned “dialogue between the creator and the user,” Prof. Shiroyama mentioned “dialogue between the creators,” Prof. Kuniyoshi mentioned “dialogue on how to create the world of the future,” and Prof. Ema mentioned “intercultural and international dialogue.” Through a total of 13 classes, we will think about “the society that artificial intelligence permeates” together with the participants.
(Writing by Ema)
The lecturer for the second class was Prof. Sakura, who works across various fields related to informatics, such as the sociology of science and technology and science communication. In considering the question “Are robots friend or foe?”, he first gave examples of technologies that have been used in society for purposes other than those intended by their creators, confirming that technology and society are intricately intertwined: technology has changed society, and society has changed technology. He said that society tends to be averse to new technologies, but that it is important to find in robots the function of a “new window” that can change the way we look at humans. He also argued that, to this end, robots could serve as a medium bridging humans and machines, just as monkeys and chimpanzees have served as a medium bridging humans and animals.
After the lecture, the students held a discussion among themselves and submitted their comments afterwards. They exchanged opinions from various viewpoints: the meaning of the debate itself, the advantages and threats that arise because robots are close to humans in shape and voice, the causal relationship between social problems and robots, and the importance of stepping outside a uniquely Japanese way of thinking. Both the students who felt a gap between robot researchers and society and those who examined the background of the perceived threat of robots seemed to recognize that information has created a great gap between society and those of us who study robots.
Finally, because the question was broad, even unrealistically so, we were able to share the usefulness and the dangers that each of us felt in these technologies, which I believe enabled a lively discussion.
(Responsibility: Teaching Assistant Shiori Masaoka)
In the third session, Mr. Kitazawa, Director and COO of Money Design, Inc., gave a talk. Mr. Kitazawa spoke about the issues surrounding the progress of financial technology and the new trend called Fintech, and then introduced actual examples of asset management using robo-advisors.
The key phrase was “democratization of finance.” Finance and technology have a strong affinity, and today ultra-high-speed trading at the millisecond level, called high-frequency trading (HFT), is conducted using algorithms. However, some national bodies, including the U.S. Securities and Exchange Commission (SEC), have begun to regulate this type of trading by major investment banks and hedge funds, claiming that it is unfair. While such technology is monopolized by the “big players” who have the capital and knowledge, the financial literacy of individuals remains low. Mr. Kitazawa has high hopes for the latest information technology as a means of realizing the “democratization of finance.”
The term “Fintech,” a word coined by combining “finance” and “technology,” refers to financial services that make full use of modern IT. It is the financial version of the pattern in which IT-based newcomers provide unprecedented value and mechanisms, as Amazon did in retail, Airbnb in accommodation, and Uber in ride-hailing. Since last year, Money Design, Inc. has been offering a robo-advisor service as a way for anyone to start asset management. The service uses an algorithm to analyze the user’s asset management needs from a few simple questions, and then determines the allocation of a portfolio of mutual funds accordingly. The degree to which AI techniques are used varies across profiling, asset allocation, and trading, and although the practical application of AI is still in its infancy, the talk gave us a sense of its potential.
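The profiling-to-allocation step of a robo-advisor can be sketched as follows. The questions, scoring, and two-fund split here are hypothetical placeholders chosen for illustration, not Money Design's actual model, which is proprietary and uses far more inputs:

```python
# Hypothetical sketch of a robo-advisor's questionnaire-to-portfolio step:
# profile the user from a few answers, then derive an asset allocation.

def risk_score(answers):
    """Map questionnaire answers (each 1-5) to a 0-1 risk tolerance."""
    return (sum(answers) - len(answers)) / (4 * len(answers))

def allocate(score):
    """Split a portfolio between an equity fund and a bond fund
    by risk tolerance (20%-80% equities in this toy rule)."""
    equity = round(0.2 + 0.6 * score, 2)
    return {"equity_fund": equity, "bond_fund": round(1 - equity, 2)}

# Example answers: age band, investment horizon, loss tolerance (all 1-5).
answers = [4, 3, 5]
print(allocate(risk_score(answers)))
```

A real service would rebalance the resulting portfolio over time and trade automatically; the sketch shows only the deterministic mapping from a user profile to an allocation.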
During the discussion, a variety of issues were raised, including the risks and responsibilities of AI-driven transactions, how to deal with new situations that AI may bring about, and how to expand the uses of AI, for example to the evaluation of investment targets. There were also opinions about the financial industry itself and the broader social structure as a whole. Many of the participants had commented beforehand that they were not familiar with finance and fintech, but after hearing the concrete examples and the broader trend of the “democratization of finance,” they began to think again about how to use money. At the same time, many participants felt that it is necessary not only to develop the technology but also to educate the people who use it, that is, to improve financial education in secondary and higher education.
(Responsibility: Tomohiro Inoguchi, Teaching Assistant)
Prof. Shishido of the University of Tokyo’s Graduate School of Law and Politics spoke on the topic of “legal issues of artificial intelligence.” As AI permeates society, it is necessary, from the existing constitutional perspective of freedom of expression, to ensure the free distribution of information, while also ensuring that profiling is not abused and that social discrimination does not occur. Prof. Shishido studies how to strike this balance.
In balancing benefits and risks, we need to remember that there will always be a gap between rapid technological innovation and gradual social change. Policies to bridge this gap require a dialogue between the natural sciences and the humanities and social sciences, not just the legal profession. He said that the work of a lawyer is to interpret cases that fall into the exclusive regions of a Venn diagram so that they can be resolved proportionately, and that this work may resemble AI in that it analyzes features. In balancing social problems against expectations of AI, he pointed out that individuals need to take on learning about AI as a challenge, and that organizations need to rethink how they make decisions and to change their roles. The government, in turn, must work on effective redistribution and on appropriate regulation of AI. He noted that the existing legal infrastructure may not be adequate to resolve the problems AI raises.
Using labor law as an example, he then discussed how issues such as employment maintenance, income maintenance, the burden of job training, and environments suited to knowledge workers will become apparent as AI permeates society. The trolley problem, long discussed in moral philosophy, has likewise become concrete with the development of AI. In the transitional period, in which AIs communicate with each other over information and communication networks, some pedestrians are comfortable with automated vehicles and some are not, and automated vehicles share the road with conventional ones, a variety of cases are possible. For example, if an accident involving an automated vehicle is caused by a brake failure or a system error, it may seem that existing precedents can handle it; however, responsibility for the cause may fall differently on the seller, the manufacturer, the software provider, or the individual who was supposed to update the software appropriately.
From these discussions, students considered from various perspectives how the introduction of AI may unsettle self-determination, pointing out that it is a problem for users and, conversely, for creators, and debating whether it is meaningful to try to reduce the risks of AI at the development stage.
(Responsibility: Teaching Assistant Shiori Masaoka)
What are the expectations and fears that people have of “artificial intelligence”? From these questions, we conducted a “task division workshop” in Student Workshop 1 moderated by Professor Ema.
The process of the workshop is simple. A group of four or five people comes up with an everyday task (e.g., cooking) and classifies it into four categories along two axes: “I want to leave it to a machine / I don’t want to leave it to a machine” and “possible within 10 years / not likely within 10 years.” Suppose the group is divided between those who want to leave cooking to a machine and those who do not. In that case, instead of arguing about who is “right,” we try to see whether we can reach a consensus by dividing the task into smaller subtasks. For example, cooking can be divided into “preparing the food” and “cleaning up afterwards,” and we might be able to agree that we do not want to leave the former to a machine but do want to leave the latter. In this way, participants could come to understand which tasks the others wanted to leave to a machine.
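The workshop's two-axis procedure can be sketched as a small program. The tasks, votes, and category names below are illustrative, not the actual workshop data:

```python
# Sketch of the workshop's task-division method: each participant rates a
# task on two axes ("want to leave it to a machine" and "feasible within
# 10 years"); a task with no consensus is split into subtasks and re-voted.

def quadrant(delegate, feasible):
    """Place one vote into one of the four categories."""
    return ("delegate" if delegate else "keep",
            "within 10 years" if feasible else "beyond 10 years")

def classify(votes):
    """votes: list of (delegate, feasible) booleans, one per participant.
    Returns the agreed quadrant, or None when there is no consensus."""
    quadrants = {quadrant(d, f) for d, f in votes}
    return quadrants.pop() if len(quadrants) == 1 else None

# "Cooking" splits opinion, so it is divided into subtasks and re-voted.
print(classify([(True, True), (False, True)]))   # no agreement -> None
print(classify([(False, True), (False, True)]))  # "preparing the food"
print(classify([(True, True), (True, True)]))    # "cleaning up afterwards"
```

The interesting outcome of the workshop lies in exactly the cases where `classify` returns None: disagreement marks the tasks whose process people value differently.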
Since I am an engineer myself, and about half of the participants in this class were students of the humanities and social sciences, such as public policy, I was very much looking forward to seeing what kinds of opinions would come out, though also somewhat apprehensive. In the event, the results were very interesting. First, in every group, most tasks were classified both as ones we would like to leave to machines and as possible within 10 years. If all of these tasks are actually automated, what will humans do? And does the fact that so many were judged possible within 10 years mean that expectations for AI are too high, or that in daily life we simply do not perform many tasks that could not be automated within 10 years?
What surprised me the most was that many of the students took an extraordinary interest in what I considered relatively trivial tasks. For example, I brush my teeth not because I want to, but because I cannot stay healthy without it. I am not motivated to brush my teeth. This may be the case for many people. However, in this workshop, several students insisted that they were particular about brushing their teeth and did not want to automate the process. For many people, it is an annoying daily “task,” but there are people who are interested in it for its own sake. In addition to brushing teeth, there were also students who wanted to automate, for example, muscle training, and there was a difference between people who were interested in the result and those who were interested in the process as well.
There is a huge demand for automating everyday tasks, and if something could be developed with economic rationality, it would have a huge impact. However, automating tasks whose process people value for its own sake would not be very meaningful, because people do not easily let go of them. Knowing what kinds of everyday things people are particular about will probably lead to more effective use of technology.
Tasuku Jinnai (The University of Tokyo, 2nd year master’s student)
In this workshop, participants took the initiative to discuss artificial intelligence and daily life in several groups. At the beginning, participants classified tasks based on themes we had prepared, and then created their own tasks to discuss based on their daily lives and what they usually think about. Since few of the participants specialize in artificial intelligence research, the judgments of whether something is possible within 10 years were approximate and did not become a major focus of discussion. However, in every group there was a very active discussion over “I want to leave it to the machines” versus “I do not want to leave it to the machines.”
In the first session, I felt that tasks rarely ended up classified “without agreement.” The tasks prepared for the first session ranged from everyday life to social issues, yet as the discussion proceeded there was little disagreement even about “elections” or “excrement disposal for people who need care.” I think this is because the participants, similar in education, major, and social experience, tend to have similar ideas.
In the second session, we classified familiar daily tasks, this time created by the participants themselves based on their “everyday life.” Many of these tasks ended up classified “without agreement.” I felt this was because people’s ideas of “everyday life” are completely different even when they share the same cultural background. My group came up with the task of “brushing my teeth.” For me, this was a task I would like to mechanize; for others, however, brushing their teeth is a part of daily life that they do not want mechanized.
Looking at the “everyday” is a way to rediscover your “obsessions,” and it is these “obsessions” that motivate us to act. Is it really okay for machines to take them away? It is not often that we have the opportunity to think about our daily lives with people from different positions, and we find it difficult to understand each other’s differences. I think this was what appeared as “no agreement” in the task classification.
But in fact, I felt that it is in this area of no agreement that freedom of choice exists, and that there is something that allows us to make our own decisions. There is no room for choice in a completely mechanized world. I have no doubt that we will use mechanized things. For example, would a student educated in the smartphone generation use a less capable means of communication?
By looking at my “daily life” in this workshop, I was able to reconfirm the “obsessions” I want to keep as part of my daily life. It is these commitments that allow me to live as myself. No matter how mechanized we become, as long as we do not give up our obsessions, we can still live as human beings. At the same time, mechanization sows new seeds: new “jobs” and “activities” that have never existed before have the potential to expand what humans can gain.
When it comes to artificial intelligence and machines taking away our jobs, rather than thinking about grand ambitions, it might be better to think about what we want to do in our daily lives. I hope that by discussing this with others, we will be able to take things in a better direction. With this in mind, I hope that discussions among researchers will become even more active.
Takeyo Todo (Tokyo Institute of Technology, 1st year doctoral student)
This time, Prof. Ema from the Graduate School of Arts and Sciences, the University of Tokyo, spoke on the topic of “Interdisciplinary Research and Communication on Artificial Intelligence and Society,” focusing on the field of ethics.
First, she introduced a wide range of types of ethics, dividing them into two main categories: descriptive ethics, a social-scientific, historical, or anthropological approach that describes what is in fact valued, and normative ethics, which asks what ought to be valued. In addition to research ethics, which concerns how individual researchers should act and make decisions in their research, concepts such as AI ethics and ethical AI have been raised; AI ethics includes how to create legal policies and how to educate people about AI. The term “ethics” is thus discussed from various perspectives, and many reports and guidelines on AI and ethics have been published.
She then explained, against this historical background, what cases of technology assessment there have been and why people in various fields are now thinking about AI. The premise is the idea of the social responsibility of researchers. The Pugwash Conferences, international conferences of scientists calling for the abolition of nuclear weapons, and the Asilomar Conference of 1975, which discussed how to handle the risks of recombinant DNA technology, were both born out of researchers’ sense of social responsibility for their work. The international conferences currently being held around the world on the state of AI research must likewise involve a wide range of people and think in an interdisciplinary way within a broad framework, keeping in mind long-term concerns and control issues as well as short-term “disruptive” developments. This approach builds on the accumulated practice of “the social responsibility of researchers” and “discussing cutting-edge technologies from the budding stage” developed around nuclear power and the life sciences.
In the discussion, the IEEE Ethically Aligned Design report was taken up, and it was pointed out that, from the perspective of AI creators, regulations can be seen as constraining research and development. On the other hand, it was argued that if interdisciplinary guidelines are to be created, familiarity with AI should be fostered from primary education onward. The class itself thus became an interdisciplinary discussion much like the AI conferences described.
(Responsibility: Teaching Assistant Shiori Masaoka)
Prof. Shiroyama of the University of Tokyo’s School of Public Policy spoke on the topic of “Technology Assessment and Artificial Intelligence.” Since many of the students had been thinking about the comparison between the IEEE report introduced in the previous lecture and the Ministry of Internal Affairs and Communications (MIC) report, Prof. Shiroyama started by explaining why the MIC is making reports on AI. He also made us aware of the reality that in many countries outside Japan, companies and research institutes are the main producers of reports.
Technology assessment (TA), the theme of this lecture, is the practice of organizing information on the impacts and effects of scientific and technological developments on society from an independent standpoint, informing citizens and government of benefits and risks and supporting citizens’ decision-making and government policy making. Japan has a strong tendency to expect a total-system answer, that is, to have someone decide which option is better. The concept of TA has not been absent in Japan; it has been around since the 1960s. Unlike in other countries, however, it did not spread, owing to methodological and institutional problems: ad hoc practices, fragmented, stove-piped perspectives, and evaluation of individual parts rather than the whole. Today, various organizations, including the Diet, the executive branch, and research and development institutions, are moving toward institutionalization.
Risk assessment is an important part of TA. It focuses on the social impact of harm, which must be calculated from data with safety factors taken into account, while also considering how broadly “harm” is defined. At the same time, the multifaceted nature of both benefits and risks must be considered. As in the case of nuclear power, the evaluation of a technology often changes when international relations are involved or when society’s objectives change, making it a complex risk issue.
He summarized the issues in assessing the impacts and risks of AI as follows: how to determine the target of assessment so as to balance benefits and risks; how to categorize risks, such as risks to consumer rights and interests versus functional risks; how to consider long-term future scenarios rather than only short-term ones; and the importance of framing the assessment in terms of which field and which technology is at issue.
In the post-lecture discussion on AI and where it should be discussed, some students went back to first principles, asking whether further discussion is needed when the arguments are already clear and how AI should be defined in the first place. I felt that the students came to appreciate the difficulty of the “AI discussion.”
(Responsibility: Teaching Assistant Shiori Masaoka)
This time, Mr. Kabata of the Asahi Shimbun’s Science and Medical Department spoke on the theme of “Dual-Use Technology and Military Applications.” Mr. Kabata, who has a background in science and currently writes articles as a specialist reporter, was also involved, as a desk editor, in the coverage of the Fukushima Daiichi Nuclear Power Plant after the Great East Japan Earthquake.
The first topic was the background of the term “dual use.” Given the definition of “dual use” as usable for both military and civilian purposes, almost any technology can be used in a dual fashion. To maintain the military-industrial complex in the face of military spending cuts after the end of the Cold War, it became necessary to promote civilian use of military technology; the Defense Advanced Research Projects Agency (DARPA), which played a major role in the establishment of the Internet and the development of GPS, is a typical example. In Japan, attempts have been made to use missile seeker technology for radiation therapy. As artificial intelligence technology advances, its dual use is also becoming an issue, with attempts at military applications such as combat robots and corresponding moves toward regulation.
In 1950 and 1967, the Science Council of Japan issued statements to the effect that scientific research would not be conducted for war or military purposes, and the Diet passed resolutions for the peaceful use of space and nuclear energy. More recently, however, there has been movement in the other direction, typified by the Basic Act on Space Policy in 2008 and the revision of the Japan Aerospace Exploration Agency (JAXA) Act in 2012. Currently, military research funds flow not only to the defense industry but also to universities, through research grants from the U.S. military and the Ministry of Defense’s Research Promotion Program for Security Technology.
In response to these developments, the Science Council of Japan finally launched the “Review Committee on Security and Academia” in May 2016, and discussions centered on the following three points: (1) whether research for self-defense is allowed; (2) whether civilian research can be distinguished from military research; and (3) whether academic freedom can be protected under the Ministry of Defense system. The report and statement based on the deliberations up to February 2017 were also issued, but we were told that in addition to these issues, various other issues remain, such as how far it is possible to classify research funds according to their source, whether regulations will hinder the promotion of research, and what to do about the overall depletion of research funds in the first place.
The discussion then turned to a very specific topic: “After defining ‘military research,’ take the standpoint of the President of the University of Tokyo, state the pros and cons of applying both to the Ministry of Defense’s Research Promotion Program for Security Technology and for a U.S. military grant, and give the rationale for your position.” A variety of issues were raised, such as how to ensure the freedom to do, or not do, particular research within the relationship between faculty members and the students they supervise, and how to weigh the political nature of the University of Tokyo and the position of its president. How to connect these governance issues with specific artificial intelligence technologies gave each of us much to think about.
(Responsibility: Tomohiro Inoguchi, Teaching Assistant)
On this day, we brainstormed for the final project submission.
In this tenth session, novelist Toshiji Hase gave a talk titled “Imagination in the Age of Science Fiction Made Real.” Mr. Hase, who mainly writes science fiction (SF) and fantasy novels, emphasized that the imagination of SF is about “doubt.” Tracing the origins of science fiction back to the 19th century, we find a time when “myths” were being dismantled, as represented by Darwin’s theory of evolution. In science fiction, AI has appeared in many forms, from the robots of Isaac Asimov and Osamu Tezuka’s Astro Boy to the disembodied intelligence of 2001: A Space Odyssey, reflecting the character of each era while consistently raising the question of how intelligence relates to the world.
So, what exactly is imagination? According to Mr. Hase, who as a writer has repeatedly faced the need to “create something that doesn’t exist now, something new at any rate,” imagination is not a single intellectual ability; there are many different kinds, such as “explaining to the world at large,” “taking new approaches to well-known things,” “proposing visions,” and “filling in or extending relationships,” and they are all distinct. On the other hand, everyone has some kind of imagination, and a situation that “needs imagination” simply means that the imagination one has is not the kind needed to solve the problem at hand. Communication with others, such as interviewing and brainstorming, is a way to compensate for one’s own lack of imagination, but Mr. Hase pointed out that this area is no longer the exclusive domain of humans, and that AI may come to assist with it, or substitute for it, in the future.
The focus then shifted to the meaning of “doubt.” Mr. Hase emphasized that to doubt is to turn the tools of imagination on things we do not want to, or hesitate to, direct them at, and to draw inferences and answers from doing so. By doubting, we can reconstruct our thinking and thus the world; what is required of us is not only to have imagination but to use it responsibly and freely. He also told us that since imagination is a matter of subjectivity it is always biased, and that it is important to doubt our own imagination in order to become aware of that bias.
In the discussion, the theme was “How would we construct a roadmap for the research and development goals and industrialization of artificial intelligence?” based on materials from the Council for Strategic Studies on Artificial Intelligence, with the aim of using imagination and doubt. The students raised issues such as doubts about what the roadmap is for and whether technological development will proceed according to the roadmap, as well as a lack of perspective on user benefits and the issue of how to enhance imagination through cooperation with AI. The theme of this session was the use of imagination as a basic attitude in discussing advanced technologies, and the comments after the class indicated that it was a different and impressive experience from the discussions on various individual technologies.
(Responsibility: Tomohiro Inoguchi, Teaching Assistant)
This time, we invited Prof. Fujita from the University of Tokyo Hospital to speak on the theme of “ELSI and Policy on Medical Artificial Intelligence.” After graduating from the Faculty of Medicine at the University of Tokyo, Prof. Fujita studied at the Research Center for Advanced Science and Technology and the Graduate School of Law and Politics, and is now working at the Faculty of Medicine at Keio University and the Graduate School of Economics at Nagoya University. He is well versed in the social aspects of healthcare, including healthcare policy and healthcare economics.
The relationship between medicine and artificial intelligence goes back to an expert system developed at Stanford University in the 1970s. As a diagnostic system for infectious blood diseases it produced better results than non-specialists, although it was not put to practical use at the time. Today, skin cancer diagnostic software developed at Stanford can make diagnoses with accuracy comparable to that of specialists. Beyond diagnosis, AI is expected to be used in many other areas, such as treatment planning and delivery, drug discovery, preventive medicine, and the generation of new medical knowledge. The Ministry of Health, Labour and Welfare (MHLW) also has high expectations for AI as part of its use of ICT, and is trying to accelerate AI development in six priority areas: (1) genomic medicine, (2) diagnostic imaging support, (3) diagnosis and treatment support, (4) drug development, (5) nursing care and dementia, and (6) surgical support.
Following this background, Prof. Fujita introduced actual examples of research in the field of psychiatry in which he is currently involved. These include research on the development of social networks that enable elderly people with dementia to engage in economic activities safely and autonomously, research on the evaluation of mental illness through the collection and analysis of multimodal data, and research on the prevention and early detection of mental illness through natural language processing, along with the legal and policy issues involved in conducting such research.
He also covered the ethical, legal, and social issues surrounding medical research and how they are being discussed. In particular, he addressed the problem of personal information, including the fragmented state of data-protection laws and their amendments.
The discussion focused on the expectations for AI in medicine and medical care, the ELSI issues that must be addressed for AI to spread through society, and what each of us can do about them. Participants also raised the need to reconsider what a doctor is in the first place and what will be required of human doctors in the face of AI. On the issue of social discrimination based on medical information, one view was that, rather than being a problem specific to AI, it may be a problem of medical care itself that is simply surfacing through AI. While predictions for the future ranged from optimistic to pessimistic, the session made us realize that how we deal with the issues that emerge as technology advances could be the turning point that determines that future.
(Responsibility: Tomohiro Inoguchi, Teaching Assistant)
Mr. Shinya Kobayashi of Farmnote Co., Ltd. talked about the current status of AI applications in agriculture. The city of Obihiro and its surrounding area, where Farmnote is based, is one of the areas of Hokkaido where dairy farming is particularly active, but the number of farmers and the number of cows they raise are declining year by year; the number of dairy farmers in particular has fallen by about 40% over the last 10 years, raising concerns about unstable production and supply. We first learned about Farmnote's business, which aims to solve these problems by introducing IoT into agriculture.
For example, “Farmnote Color” is a system that measures cow behavior through a wearable device worn around the cow's neck, detects signs of estrus and disease through AI analysis, and sends notifications to the farmer's mobile device. One could say the system merely “displays cow information on a smartphone,” but for dairy farmers who manage reproduction through artificial insemination, the loss of conception opportunities from a missed estrus is considerable. By applying AI to what had been a manual checking process, the pregnancy rate has improved greatly and the burden on dairy farmers has been much reduced. Farmnote applies this same simple loop of data collection, analysis, and notification in other services as well, such as cattle behavior analysis using a low-power wide-area network (LPWAN) and GPS, and a system that collects and analyzes all kinds of agricultural data, including weather conditions, in an integrated manner and sends the results to smart devices. The introduction of such advanced technology into agriculture to improve productivity is called “AgriTech,” a field attracting attention worldwide, with more than $10 billion invested in US startups over the past three years.
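The collect, analyze, and notify loop described above can be sketched as a simple anomaly detector on daily activity counts, since estrus is typically accompanied by a spike in movement. The data format, window size, and z-score threshold below are purely illustrative assumptions, not Farmnote's actual algorithm.

```python
# Hypothetical sketch of the collect -> analyze -> notify loop.
# Data format and thresholds are illustrative, not Farmnote's real system.

from statistics import mean, stdev

def detect_estrus(activity_counts, window=7, z_threshold=2.0):
    """Flag days whose activity count spikes above the recent baseline.

    activity_counts: daily movement counts from a neck-worn sensor.
    Returns the indices of days that would trigger a notification.
    """
    alerts = []
    for i in range(window, len(activity_counts)):
        baseline = activity_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (activity_counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)  # e.g. push a message to the farmer's phone
    return alerts

counts = [510, 495, 505, 500, 498, 502, 507, 900]  # last day: estrus-like spike
print(detect_estrus(counts))  # -> [7]
```

A real service would of course use richer features and a learned model rather than a fixed threshold, but the pipeline shape, sensor data in, analysis, notification out, is the same.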
In the latter half of the session, Mr. Kobayashi talked about the vision Farmnote is aiming for, with “community” as a key word. The problems facing agriculture include farmers leaving the industry and the aging of the farming population in Japan, as well as a global decline in farmland area per capita driven by shrinking farmland and population growth. Farmnote's philosophy is to revitalize local communities through technology, and the company works with farmers, agricultural cooperatives, and other partners on a variety of community-building activities that go beyond technology development.
Discussions were held on two themes: “What kind of policies would improve agriculture?” and “Why do we need to maintain local communities?” What struck me was that several people pointed out how difficult it is for today's students to get a real sense of agriculture. There was a lively discussion on how to respond to young people who want to leave the region and how to make agriculture known to them. On the policy side, opinions touched on the balance between subsidies and other forms of support, and between supporting agricultural organizations and funding projects that only the government can provide. Mr. Kobayashi said that he aims to create collaboration by combining technologies such as robots and artificial intelligence with work that only people can do, and that his goal is to build a “Google of agriculture” that can be used globally.
(Responsibility: Tomohiro Inoguchi, Teaching Assistant)
Dr. Sugiyama of the University of Tokyo Graduate School of Frontier Sciences, who also directs the RIKEN Center for Advanced Intelligence Project (AIP), gave this lecture. Dr. Sugiyama specializes in machine learning theory and algorithms, grounded in information science and statistics, as well as real-world applications of machine learning in robotics, brain waves, medicine, and the life sciences, and he hopes to use theoretically sound technology to create something useful for society.
The AIP Center is an organization that promotes basic research over the next 10 years; Dr. Sugiyama believes that promoting basic research will lead to next-generation AI infrastructure technology. In the short to medium term, the Center will strengthen fields in which Japan excels, work on social problems, analyze and disseminate the ethical, legal, and social issues (ELSI) of AI, and apply current advanced AI technology. In the medium to long term, the AIP Center aims to develop next-generation basic AI technologies by tackling problems that deep learning cannot solve, such as learning from limited information and causal inference, and to create new industries based on advanced mathematics with the involvement of companies.
Regarding the machine learning behind artificial intelligence, he noted that while processing big data with deep learning has improved AI technology, some argue that other classification approaches work just as well, and that in the end even big data may not escape the curse of dimensionality. In other words, for practical use in the real world, machines must learn with high accuracy from limited information, so it is important to choose models carefully and train for generalizability. Because many cases cannot be handled by existing paradigms such as supervised, unsupervised, and semi-supervised classification, Prof. Sugiyama established a new method that achieves the same convergence rate as supervised learning using only positive and unlabeled data. He concluded that his role is to oversee the basic research that continues to produce such new learning methods, together with applied research and research on societal impact.
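The positive-unlabeled (PU) learning idea mentioned above can be sketched with the unbiased risk estimator associated with Sugiyama's group: the classifier's risk is estimated from positives and unlabeled data alone, using the class prior to correct for positives hidden in the unlabeled set. The toy Gaussian data, linear scorer, known class prior, and finite-difference optimizer below are all illustrative assumptions, not the lecture's actual formulation.

```python
# A minimal sketch of learning from positive and unlabeled (PU) data.
# Toy data, linear model, and known class prior are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: positives around (2, 2), negatives around (-2, -2).
n_p, n_u = 100, 400
pi = 0.5  # class prior P(y = +1), assumed known here
X_p = rng.normal(2.0, 1.0, (n_p, 2))                        # labeled positives
X_u = np.vstack([rng.normal(2.0, 1.0, (int(n_u * pi), 2)),  # unlabeled mix
                 rng.normal(-2.0, 1.0, (n_u - int(n_u * pi), 2))])

def sigmoid_loss(z):
    # Smooth surrogate loss: small when z is large (correct side of boundary).
    return 1.0 / (1.0 + np.exp(z))

def pu_risk(w, b):
    f_p = X_p @ w + b
    f_u = X_u @ w + b
    # Unbiased PU risk: pi*R_p^+(f) + R_u^-(f) - pi*R_p^-(f).
    # The last term removes the bias from positives inside the unlabeled set.
    return (pi * sigmoid_loss(f_p).mean()
            + sigmoid_loss(-f_u).mean()
            - pi * sigmoid_loss(-f_p).mean())

# Crude finite-difference gradient descent on a linear scorer f(x) = w.x + b.
w, b, eps, lr = np.zeros(2), 0.0, 1e-4, 1.0
for _ in range(500):
    grad_w = np.array([(pu_risk(w + eps * np.eye(2)[i], b) - pu_risk(w, b)) / eps
                       for i in range(2)])
    grad_b = (pu_risk(w, b + eps) - pu_risk(w, b)) / eps
    w -= lr * grad_w
    b -= lr * grad_b

# Evaluate on fresh data whose labels the learner never saw.
X_test = np.vstack([rng.normal(2.0, 1.0, (200, 2)), rng.normal(-2.0, 1.0, (200, 2))])
y_test = np.array([1] * 200 + [-1] * 200)
acc = np.mean(np.sign(X_test @ w + b) == y_test)
print(f"test accuracy: {acc:.2f}")
```

The point of the estimator is that no negative labels are ever used, yet the correction term keeps the risk estimate unbiased, which is what makes the supervised-rate convergence result possible.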
From this lecture, the students seemed to sense the depth of the artificial intelligence field. Many engineering students were frankly surprised by the gap between humanities and science students in understanding the lecture content, and many public policy students were surprised by the gap between how the media and engineers view the singularity. I feel these reactions are a microcosm of the current AI field. Beyond the question of how to communicate in an easy-to-understand way, there is the question of what the sender needs to convey and what the receiver needs to learn on their own; I thought each of us should take stock of the lectures so far and think about “AI and Society” for ourselves.
(Responsibility: Teaching Assistant Shiori Masaoka)