As artificial intelligence (AI) services and products are increasingly deployed in society, the reliability and transparency of AI have become pressing challenges. Governments, enterprises, and organizations in many countries are developing approaches such as guidelines and tools to mitigate these risks. In practice, however, these approaches are often underused because of the following problems: each AI service poses different significant risks; AI models alone may not be able to mitigate those risks sufficiently and continuously; and in some cases it is people (including users) who first recognize the risks. To address these issues, a practical framework is needed for considering which risks matter for a given AI service or product, who is responsible for those risks, and which assessment metrics and tools to use.
This study group is operated as a joint research project between the University of Tokyo and Deloitte Tohmatsu Risk Services Co., Ltd., and conducts investigation and research on specific cases using the Risk Chain Model (RCModel) developed by a research group at the University of Tokyo.
As various technologies are introduced to solve problems in the medical sector, AI and IT systems are being implemented in clinical settings. Not only physicians but also developers and policymakers are involved in the development and implementation of these systems.
The purpose of this study group is to promote the implementation of new technologies in the medical field by sharing information useful in development and clinical practice, drawing on the experience and knowledge of physicians, developers, and policymakers through seminars. The study group is organized by the Institute for Future Initiative of the University of Tokyo, the editorial team of M3, Inc., and the AI Medical Center of Keio University. This work is supported by JSPS KAKENHI Grant Number 17KT0157, JST-RISTEX Grant Number JPMJRX16H2, and Toyota Foundation Grant Number D18-ST-0008.