Program

July 11 and 12, 2021 (two days) @ Online

DAY 2 : July 12, 2021 (Monday) @ Zoom

Introduction and free talk

( Chair: Dr. Hailong LIU )

Part III: Invited presentations

( Chair: Prof. Keisuke Suzuki )

Invited presentation 4 (15min+5min)

July 12, 2021 (Monday) | JST: 14:16~14:36 | CST: 13:16~13:36 | CEST: 07:16~07:36 | BST: 06:16~06:36 | PDT: 22:16~22:36 (-1 day)

Trust, reliability, and limitation: Over-trust in fully reliable systems

Prof. Makoto Itoh

University of Tsukuba, Japan

Biography

Makoto Itoh received BSc, MSc, and PhD degrees from the University of Tsukuba, Japan, in 1993, 1995, and 1999, respectively. His main areas of research interest include shared control, adaptive automation, and the building of appropriate trust as well as the prevention of over-trust and distrust in automation. Itoh is a member of IEEE, HFES, ICE, HIS, JSAE, JSQC, and IFAC TC9.2.

Abstract:

I believe a fully reliable system could be overly trusted.

Trust in automation is not merely the subjective reliability of the automation in a probabilistic sense. Every machine has a set of conditions within which it is supposed to work as its designer intends; in the automated driving context, this set of conditions is called the Operational Design Domain (ODD). It is true that an automation may make a mistake even within the ODD, and one could call it overtrust when a person relies on an automation that makes such a mistake. In that case, however, the only remedy is to improve the quality of the automation: relying on the automation within the ODD is not inherently a problem. Rather, it could be said that the automation must be fully reliable within the ODD. On the other hand, people may rely on an automation even when the current situation is beyond the ODD, i.e., when the automation cannot work well in the situation. In this sense, the proposition “a human may overtrust a fully reliable system” can be true even when the automation works perfectly within the ODD.

About 20 years ago, I tried to develop a conceptual model to discuss this issue. In this talk, I would like to revisit it to obtain deeper insights into overtrust in automation.
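
The distinction the abstract draws can be made concrete in a minimal sketch (the ODD entries and labels below are invented for illustration; this is not the talk's model): overtrust is framed as reliance in a situation outside the ODD, while reliance within the ODD is treated as appropriate.

    from dataclasses import dataclass

    @dataclass
    class Situation:
        weather: str
        road_type: str

    # Hypothetical ODD: the conditions the automation is designed for.
    ODD = {("clear", "highway"), ("rain", "highway")}

    def within_odd(s: Situation) -> bool:
        return (s.weather, s.road_type) in ODD

    def classify_reliance(s: Situation, relied: bool) -> str:
        if not relied:
            return "no reliance"
        return "appropriate reliance" if within_odd(s) else "overtrust (beyond ODD)"

    print(classify_reliance(Situation("clear", "highway"), relied=True))  # appropriate reliance
    print(classify_reliance(Situation("snow", "urban"), relied=True))     # overtrust (beyond ODD)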

Invited presentation 5 (15min+5min)

July 12, 2021 (Monday) | JST: 14:37~14:57 | CST: 13:37~13:57 | CEST: 07:37~07:57 | BST: 06:37~06:57 | PDT: 22:37~22:57 (-1 day)

Toward Adaptive Trust Calibration for Level 2 Driving Automation

Dr. Kumar Akash

Honda Research Institute USA, Inc., USA

Biography

Kumar Akash received the B.Tech. degree in mechanical engineering from the Indian Institute of Technology Delhi, New Delhi, in 2015 and the M.S. and Ph.D. degrees from Purdue University, West Lafayette, Indiana, in 2018 and 2020, respectively, all in mechanical engineering. He is a scientist at Honda Research Institute USA, Inc., San Jose, California. His research interests include dynamic modeling and control of human behavior in human–machine interactions, brain–computer interfaces, and machine learning.

Dr. Teruhisa Misu

Honda Research Institute USA, Inc., USA

Biography

Teruhisa Misu received the B.E. degree in 2003, the M.E. degree in 2005, and the Ph.D. degree in 2008, all in information science, from Kyoto University, Kyoto, Japan. From 2005 to 2008, he was a Research Fellow (DC1) of the Japan Society for the Promotion of Science (JSPS). From 2008 to 2013, he was a researcher in the Spoken Language Communication Group at NICT. In 2013, he joined Honda Research Institute USA, Inc. From November 2011 to February 2012, he was a visiting researcher at USC/ICT.

Abstract:

Properly calibrated human trust is essential for successful interaction between humans and automation. Human trust calibration can be improved by increased automation transparency, but too much transparency can overload the human. To address this tradeoff, we present a probabilistic framework that uses a partially observable Markov decision process (POMDP) to model the coupled trust-workload dynamics of human behavior in interaction with automation. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We model the dynamics of human trust and workload using automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior. We demonstrate that our framework can appropriately vary automation transparency based on real-time belief estimates of human trust and workload to achieve trust calibration.
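
For readers unfamiliar with POMDPs, the following is a minimal sketch of the kind of belief update such a framework rests on, assuming a toy two-state trust model with invented transition and observation probabilities; the authors' actual model additionally couples workload, scene complexity, and eye-gaze observations.

    import numpy as np

    states = ["low_trust", "high_trust"]

    # T[a][s, s']: trust transition given the automation's transparency action a.
    T = {
        "low_transparency":  np.array([[0.9, 0.1], [0.2, 0.8]]),
        "high_transparency": np.array([[0.7, 0.3], [0.1, 0.9]]),
    }

    # O[s, o]: probability of observing takeover (o=0) vs. reliance (o=1) per state.
    O = np.array([[0.7, 0.3],   # low trust: mostly takes over
                  [0.2, 0.8]])  # high trust: mostly relies

    def belief_update(b, action, obs):
        """One Bayes-filter step: predict with T, correct with O, renormalize."""
        predicted = b @ T[action]
        corrected = predicted * O[:, obs]
        return corrected / corrected.sum()

    b = np.array([0.5, 0.5])                          # initial belief over trust states
    b = belief_update(b, "high_transparency", obs=1)  # driver relied on the automation
    print(dict(zip(states, b.round(3))))              # belief shifts toward high trust

A policy over such beliefs is what lets the system choose, at each intersection, how much transparency to display.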

Part IV: Paper presentations

( Chair: Prof. Toshiya Arakawa )

Paper presentation 5 (10min+5min)

July 12, 2021 (Monday) | JST: 15:10~15:25 | CST: 14:10~14:25 | CEST: 08:10~08:25 | BST: 07:10~07:25 | PDT: 23:10~23:25 (-1 day)

Human-Vehicle Cooperation on Prediction-Level: Enhancing Automated Driving with Human Foresight (Preprint Paper)

Chao Wang*, Thomas H. Weisswange, Matti Krüger, Christiane Wiebel-Herboth

Abstract:

To maximize safety and driving comfort, autonomous driving systems can benefit from implementing foresighted action choices that take different potential scenario developments into account. While artificial scene prediction methods are making fast progress, an attentive human driver may still be able to identify relevant contextual features that are not adequately considered by the system, or for which the human driver lacks trust in the system's capabilities to treat them appropriately. We implement an approach that lets a human driver quickly and intuitively supplement the scene predictions of an autonomous driving system by gaze. We illustrate the feasibility of this approach in an existing autonomous driving system running a variety of scenarios in a simulator. Furthermore, a Graphical User Interface (GUI) was designed and integrated to enhance the trust and explainability of the system. The use of such cooperatively augmented scenario predictions has the potential to improve a system's foresighted driving abilities and make autonomous driving more trustworthy, comfortable, and personalized.
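
One way gaze could supplement a prediction system, sketched here under invented assumptions (the dwell-time threshold and multiplicative boost are illustrative, not the paper's mechanism): the driver's fixation on a traffic participant upweights the scenario hypotheses involving that participant.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        description: str
        prior: float    # system's own probability estimate
        agent_id: str   # traffic participant the scenario concerns

    def gaze_augmented_weights(scenarios, dwell_ms, boost=2.0, threshold_ms=400):
        """Upweight scenarios whose agent the driver fixated for >= threshold_ms."""
        raw = [s.prior * (boost if dwell_ms.get(s.agent_id, 0) >= threshold_ms else 1.0)
               for s in scenarios]
        total = sum(raw)
        return [w / total for w in raw]

    scenarios = [
        Scenario("car_7 merges abruptly", prior=0.2, agent_id="car_7"),
        Scenario("nominal traffic flow",  prior=0.8, agent_id="scene"),
    ]
    # The driver stares at car_7, so the risky hypothesis gains weight.
    print(gaze_augmented_weights(scenarios, {"car_7": 650}))  # ~[0.33, 0.67]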

Paper presentation 6 (10min+5min)

July 12, 2021 (Monday) | JST: 15:26~15:41 | CST: 14:26~14:41 | CEST: 08:26~08:41 | BST: 07:26~07:41 | PDT: 23:26~23:41 (-1 day)

Exploration of increasing driver’s trust in a semi-autonomous vehicle through real-time visualizations of collaborative driving dynamic (Preprint Paper)

Alisa Koegel*, Charlotte Furet, Takaharu Suzuki, Yuri Klebanov, Jenny Hu, Tobias Kappeler, Daichi Okazaki, Kento Matsui, Toshihiro Hiraoka, Kimihiko Nakano, Kentaro Honma, Miles Pennington

Abstract:

The Thinking Wave is an ongoing development of visualization concepts showing the real-time effort and confidence of semi-autonomous vehicle (AV) systems. Offering drivers access to this information can inform their decision making and enable them to handle situations accordingly, taking over when necessary. Two different visualizations have been designed: concept one, “Tidal”, demonstrates the AV system's effort through the intensified activity of a simple graphic that fluctuates in speed and frequency; concept two, “Tandem”, displays the effort of the AV system as well as the handling dynamic and shared responsibility between the driver and the vehicle system. Working collaboratively with mobility research teams at the University of Tokyo, we are prototyping and refining the Thinking Wave and its embodiments as we work towards a testable version integrated into a driving simulator. The development of the Thinking Wave aims to calibrate trust by increasing the driver's knowledge and understanding of the vehicle's handling capacity. By enabling transparent communication of the AV system's capacity, we hope to empower AV-skeptic drivers and keep over-trusting drivers alert in the case of an emergency takeover situation, in order to create a safer autonomous driving experience.
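
As a toy illustration of the “Tidal” idea (all parameters invented, not the authors' implementation), a scalar effort signal can modulate the speed and amplitude of a wave-like graphic:

    import math

    def wave_sample(t, effort, base_freq=0.5, base_amp=1.0):
        """Higher effort -> faster, busier wave; lower effort -> calm wave."""
        freq = base_freq * (1.0 + 3.0 * effort)  # up to 4x base frequency
        amp = base_amp * (0.5 + 0.5 * effort)    # fuller amplitude under load
        return amp * math.sin(2.0 * math.pi * freq * t)

    # Calm driving vs. a demanding scene at the same time instant:
    print(wave_sample(t=0.25, effort=0.1))
    print(wave_sample(t=0.25, effort=0.9))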

Chair:

Group A: MSc. Chen Peng and MA. Jingyi Li

Group B: MA. Yang LI and MA. Ruolin Gao