Program

July 11 and 12, 2021 (two days) @ Online

DAY 1 : July 11, 2021 (Sunday) @ Zoom

Introduction

( Chair: Dr. Hailong LIU )

Part I: Invited presentations

( Chair: MSc. Chen Peng )

Invited presentation 1 (15min+5min)

July 11, 2021 (Sunday) | JST: 14:16~14:36 | CST: 13:16~13:36 | CEST: 07:16~07:36 | BST: 06:16~06:36 | PDT: 22:16~22:36 (-1 day)

Challenges in human factors for safe and trustworthy automated driving

Dr. Satoshi Kitazaki

National Institute of Advanced Industrial Science and Technology, Japan

Biography

Satoshi Kitazaki is Director of the Human-Centered Mobility Research Center, AIST. He has been leading the national research project on human factors in automated driving since 2016. He received his Bachelor’s and Master’s degrees from Kyoto University, Japan, and his Ph.D. from the University of Southampton, UK. Before joining AIST in 2015, he worked in the automotive industry at Nissan Motor and in academia at the University of Iowa.

Abstract:

Automated driving at levels 2 and 3 still requires the driver’s partial and occasional engagement in the driving task. The driver and the system are expected to work together to meet the task demand determined by the environment and the system. The total performance of the vehicle is therefore the sum of the system’s performance and the driver’s. When the driver does not fulfill the required role, the total performance does not reach the task demand and unsafe consequences can happen. Several factors can hinder the driver from fulfilling the required role; the driver’s over-trust in the system’s functions is one of the major ones. Over-trust can delay or obstruct the driver’s takeover actions. It can also induce drowsiness, looking away (for level 2), or engagement in illegal non-driving-related activities, all of which can expose the driver and the vehicle to unsafe consequences in critical situations. In this presentation, some findings on the effects of driver states on takeover performance will be shared. The project was part of the national project on automated driving, SIP-adus, funded by the Cabinet Office, Government of Japan.

Invited presentation 2 (15min+5min)

July 11, 2021 (Sunday) | JST: 14:37~14:57 | CST: 13:37~13:57 | CEST: 07:37~07:57 | BST: 06:37~06:57 | PDT: 22:37~22:57 (-1 day)

Automated Driving:

Towards Trustworthy and Safe Human-Machine Cooperation

Dr. Philipp Wintersberger

Vienna University of Technology, Austria

Biography

Philipp Wintersberger is a researcher at Vienna University of Technology (TU Wien). He obtained his doctorate in Engineering Science from Johannes Kepler University Linz, specializing in Human-Computer Interaction and Human-Machine Cooperation. He worked for 10 years as a software engineer/architect before joining the Human-Computer Interaction Group at the center for automotive safety CARISSMA in Ingolstadt to conduct research on Human Factors and Driving Ergonomics. His publications focus on trust in automation, attentive user interfaces, transparency of driving algorithms, and the UX/acceptance of automated vehicles, and have received several awards in the past years.

Abstract:

Trust is one of the core constructs determining how users interact with automation. Especially in the driving environment, falsely attributed trust can lead to safety-critical situations and even death. Consequently, trust is increasingly addressed in scientific experiments on driving automation. In this presentation, I will discuss a theoretical perspective of trust and relate the concept to known issues in the human factors literature. I will further explain why I believe terms such as “trust calibration” or “trust measurements” are misleading, and why researchers should be careful when integrating “trust in automation” in their works. I will finally present some high-level findings emerging from various trust-related experiments, which could become a basis for future research efforts in this area.

Invited presentation 3 (15min+5min)

July 11, 2021 (Sunday) | JST: 14:58~15:18 | CST: 13:58~14:18 | CEST: 07:58~08:18 | BST: 06:58~07:18 | PDT: 22:58~23:18 (-1 day)

Can I cross the road?

Understanding pedestrians' trust in an approaching AV

Dr. Yee Mun Lee

University of Leeds, UK

Biography

Yee Mun Lee obtained her BSc (Hons) in Psychology and her PhD in driving cognition from The University of Nottingham Malaysia in 2012 and 2016, respectively. She is currently a research fellow at the Institute for Transport Studies, University of Leeds. Her current research interests include investigating the interaction between automated vehicles and other road users using various methods, especially virtual reality experimental designs. Yee Mun was the leader of the 'Methodologies, Evaluation and Impact Assessment' Work Package of the EU-funded project interACT (www.interact-roadautomation.eu). She is now involved in another EU-funded project, L3Pilot (www.l3pilot.eu), where she investigates users' evaluation and experience of a Level 3 system. Finally, Yee Mun is also one of the SHAPE-IT project supervisors, where she continues her research on human interaction with AVs in urban scenarios (www.shape-it.eu).

Abstract:

In the future, Automated Vehicles (AVs) will need to interact with other road users, such as cyclists, pedestrians, and other vehicles. To enhance safety, improve traffic flow, and increase user acceptance and trust in AVs, pedestrians and other road users need to understand the AVs' intentions, communication, and behaviour. As AVs will no longer be controlled by onboard drivers, conventional forms of communication via the driver (e.g., hand gestures, head nodding) will be missing. Although there is mixed evidence as to the extent to which these types of explicit communication arise, new forms of external Human-Machine Interfaces (eHMIs) have been designed in an effort to replace human-human communication, and also to increase the acceptability and safety of AVs. This presentation will look into how implicit cues (i.e., vehicle movement), eHMIs, and a driver's presence affect pedestrians' crossing behaviour and subjective evaluations. The state-of-the-art literature and work completed as part of the interACT project will be presented.

Part II: Paper presentations

( Chair: Dr. Hao Cheng )

Paper presentation 1 (10min+5min)

July 11, 2021 (Sunday) | JST: 15:30~15:45 | CST: 14:30~14:45 | CEST: 08:30~08:45 | BST: 07:30~07:45 | PDT: 23:30~23:45 (-1 day)

In AVs we Trust: Conceptions to Overcome Trust Issues in Automated Vehicles (Preprint Paper)

Kai Holländer*

Abstract:

To take advantage of the full potential of highly automated vehicles (AVs), users need to trust the system enough to be willing to engage with the novel technology. In this context, users of automated vehicles face the challenge of understanding the system's capabilities. While interacting with an AV, users need to calibrate between overtrust (trusting the vehicle beyond its capabilities and underestimating the consequences if the system fails) and undertrust (not relying on the vehicle even though it is capable of handling the situation perfectly well). In this work we look at crucial aspects that should be considered for the calibration of trust, such as proper training, appropriate user interfaces, and how trust might be measured. We believe it is very important that users of AVs understand the system's capabilities and calibrate their trust accordingly. The findings and ideas of this work are mainly based on previous work, which has been revised with regard to trust in AVs. The most important takeaway is that users of AVs need different training and a different mental model than drivers of manually driven vehicles.

Paper presentation 2 (10min+5min)

July 11, 2021 (Sunday) | JST: 15:46~16:01 | CST: 14:46~15:01 | CEST: 08:46~09:01 | BST: 07:46~08:01 | PDT: 23:46~00:01 (-1 day)

Influences on Drivers’ Understandings of Systems by Presenting Image Recognition Results (Preprint Paper)

Bo Yang*, Koichiro Inoue, Satoshi Kitazaki, Kimihiko Nakano

Abstract:

Helping drivers form an appropriate understanding of level 2 automated driving systems is an essential issue. A human-machine interface (HMI) was proposed to present real-time image recognition results from the automated driving system to drivers. It was expected that drivers could better understand the capabilities of the system by observing the proposed HMI. Driving simulator experiments with 18 participants were performed to evaluate the effectiveness of the proposed system. Experimental results indicated that the proposed HMI could effectively and continuously inform drivers of potential risks and help drivers better understand the level 2 automated driving system.

Paper presentation 3 (10min+5min)

July 11, 2021 (Sunday) | JST: 16:05~16:20 | CST: 15:05~15:20 | CEST: 09:05~09:20 | BST: 08:05~08:20 | PDT: 00:05~00:20

How can design help enhance trust calibration in public autonomous vehicles? (Preprint Paper)

Yuri Klebanov*, Romi Mikulinsky, Tom Reznikov, Miles Pennington, Toshihiro Hiraoka, Yoshihiro Suda, Shoichi Kanzaki

Abstract:

Trust is a multilayered concept with critical relevance when it comes to introducing new technologies. Understanding how humans will interact with complex vehicle systems, and preparing for the functional, societal, and psychological aspects of autonomous vehicles' entry into our cities, is a pressing concern. Design tools can help calibrate the adequate and affordable level of trust needed for a safe and positive experience. This study focuses on passenger interactions capable of enhancing system trustworthiness and data accuracy in future shared public transportation.

Paper presentation 4 (10min+5min)

July 11, 2021 (Sunday) | JST: 16:21~16:36 | CST: 15:21~15:36 | CEST: 09:21~09:36 | BST: 08:21~08:36 | PDT: 00:21~00:36

Improving Take-over Situation by Active Communication (Preprint Paper)

Monika Sester*, Mark Vollrath, Hao Cheng

Abstract:

This short paper sketches an idea for supporting drivers of an autonomous vehicle in taking back control of the vehicle after a longer section of autonomous cruising. The hypothesis is that clear communication about the location and behavior of relevant objects in the environment will help the driver quickly grasp the situational context, and thus support safe manual handling of the ongoing driving situation after take-over. Based on this hypothesis, a research concept is outlined, covering the necessary components as well as the disciplines involved.