Why the Trust in Autonomy Paradigm Should Be Abandoned

Trust in Automation (TiA) has been the focus of human factors research as the key to understanding human-autonomy teaming (HAT). Early theories about the construct of TiA were developed from the psychological construct of interpersonal trust, and they posited that calibrated TiA was critical for successful HAT system performance (Sheridan, 1980; Sheridan and Hennessy, 1984; Muir, 1994; Muir and Moray, 1996), just as it is for successful human teams. There are aspects of interpersonal trust that are analogous to TiA, but it has been debated whether the two constructs are homologous (Madhavan and Wiegmann, 2007). The central position of TiA in research aimed at improving joint system performance is underlined by the sheer volume of literature on TiA and by the number of competing definitions of the construct. Yet, despite the extensive research on the role of TiA in HAT, human-autonomy teams have yet to reach their envisioned potential.

I argue that TiA exists and is an emergent property of a human working with an automated system, but that levels of TiA do not directly translate into how a human uses or interacts with an autonomy. In fact, I argue that, though TiA exists, its level at any one point in time is irrelevant to joint system performance. This position rests on several pertinent facts about TiA. First, it is not directly homologous to human-human trust (Madhavan and Wiegmann, 2007), so what we understand about the role of trust in human-human team performance cannot be directly applied to HATs. Second, TiA does not directly map onto autonomy use. For example, an operator may overuse an autonomy, that is, rely on it more than is warranted given its capabilities, and yet report not trusting it (Daly, 2002; Biros et al., 2004). Conversely, an operator may claim to fully trust an automated teammate and then proceed to under-use it or not use it at all (Lee and Moray, 1992). Finally, even if calibrated TiA levels did map onto automation use, it is not possible to measure TiA in real time, nor is it currently feasible to intentionally control it. One must therefore ask whether TiA should truly be the focus of research aimed at improving the performance of human-autonomy teams.

To conclude, I argue that TiA research has not fulfilled its intent of significantly improving joint system performance, and that alternate research paths ought to be pursued. Research focus ought instead to pivot toward understanding, predicting, and managing human decisions. While at CCDC (U.S. Army Research Laboratory) I led the experimental design for research of this kind. Our engineers built a control-theoretic human-in-the-loop automated driving task capable of predicting the decision to use or not use the automated driving assistant. They devised an algorithm to determine whether that decision was optimal given the current driving conditions and the individual's performance on the driving task measured at the start of the experiment. We then designed an actuator, a visual signal, that aimed to change the participant's decision if it was not optimal. This research indicated that when participants followed the actuator's recommendation, joint system performance increased and participant workload decreased.
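
To make this predict-evaluate-actuate loop concrete, the sketch below shows its basic logic in Python. Everything here is an illustrative assumption rather than the actual CCDC implementation: the condition variables, the skill parameters, and the rule that the assistant should be engaged whenever its expected performance under current conditions exceeds the operator's measured baseline.

```python
# Illustrative sketch only: names, parameters, and the optimality rule are
# assumptions for exposition, not the CCDC/ARL control-theoretic model.
from dataclasses import dataclass


@dataclass
class DrivingState:
    visibility: float       # 0 (none) to 1 (clear)
    traffic_density: float  # 0 (empty road) to 1 (congested)


@dataclass
class Operator:
    baseline_skill: float   # manual-driving score measured at experiment start, 0..1
    predicted_use: bool     # model's prediction: will this operator engage the assistant?


def optimal_to_use_assistant(state: DrivingState, op: Operator,
                             assistant_skill: float = 0.8) -> bool:
    """Assumed rule: engage the assistant when its expected performance under
    the current conditions exceeds the operator's expected manual performance."""
    # Assumption: the assistant degrades mainly with poor visibility (sensing),
    # while the human degrades mainly with dense traffic (workload).
    expected_assistant = assistant_skill * state.visibility
    expected_manual = op.baseline_skill * (1.0 - state.traffic_density)
    return expected_assistant > expected_manual


def actuate(state: DrivingState, op: Operator) -> str | None:
    """Fire the visual signal only when the predicted decision is suboptimal."""
    optimal = optimal_to_use_assistant(state, op)
    if op.predicted_use != optimal:
        return "ENGAGE assistant" if optimal else "RESUME manual control"
    return None  # predicted decision is already optimal; stay silent


if __name__ == "__main__":
    state = DrivingState(visibility=0.4, traffic_density=0.7)
    operator = Operator(baseline_skill=0.6, predicted_use=False)
    print(actuate(state, operator))  # -> ENGAGE assistant
```

The design point the experiment tested is visible in actuate: the signal fires only on a mismatch between the predicted decision and the computed optimal one, so an operator whose decision is already optimal is never interrupted.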