Myke C. Cohen, Erin K. Chiou, Matthew M. Willett, Jayci T. Landfair, Matthew A. Peel, Matthew J. Scalia, Jamie C. Gorman, and Nancy J. Cooke.
ABSTRACT: Designing robots for trustworthiness is often justified by its purported benefits to overall human-robot team performance. However, there is mixed empirical evidence on whether such design approaches truly support robust human-robot teaming by building trust. This may be due to the incongruence between interaction-focused robot trustworthiness designs and the proliferation of trust modeling techniques that are most appropriate for broader teaming and trusting timescales. We offer an appraisal of current methodological and analytical approaches for modeling trust within finer interaction timescales relative to emergent cognitive properties of human-robot teams. We then identify challenges that the trust research community must address to produce more precise frameworks for modeling trust, towards more effective human-robot interaction design paradigms.
ABSTRACT: Many measures of human-robot trust have proliferated across the HRI research literature, each attempting to capture the factors that impact trust across its many dimensions. None of the previous trust measures, however, address the systems of inequity and structures of power present in HRI research or attempt to counteract the systematic biases and potential harms caused by HRI systems. This position paper proposes a participatory and social justice-oriented approach for the design and evaluation of a trust measure. This proposed process would iteratively co-design the trust measure with the community for whom the HRI system is being created. The process would prioritize that community’s needs and unique circumstances to produce a trust measure that accurately reflects the factors that impact their trust in a robot.
Mengyao Li and Emanuel Rojas.
ABSTRACT: This paper explores trust contagion in Human-AI-Robot Teams (HART), focusing on interpersonal influences, especially in co-located scenarios. Unlike dyadic interactions, real-world HART situations involve multiple team members and robots, introducing the potential for trust contagion. The paper defines trust contagion as a process where individuals consciously or subconsciously influence each other’s trust attitudes and behaviors in robots through social interactions. It distinguishes this from trust transfer and transitivity, emphasizing interpersonal influences on trust in the same trustee. The underlying mechanisms involve analytic and affective processes, and the paper introduces mimicry and synchrony as potential measures of contingent behaviors in HART. The paper advocates for further exploration of trust contagion, offering insights into its occurrence and potential metrics for effective management in dynamic human-robot interactions.
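As a minimal, hedged sketch of how synchrony between teammates might be quantified (the abstract does not prescribe a specific metric; the windowed cross-correlation approach, window length, lag range, and synthetic reliance signals below are all assumptions for illustration):

```python
# Illustrative sketch only: one possible way to quantify synchrony between two
# teammates' trust-related behavior signals (e.g., moment-to-moment reliance
# on the robot). Window length and maximum lag are assumptions.
import numpy as np

def windowed_synchrony(sig_a, sig_b, window=30, max_lag=5):
    """Return the peak absolute lagged correlation between two signals per window."""
    peaks = []
    for start in range(0, len(sig_a) - window, window):
        a = sig_a[start:start + window]
        b = sig_b[start:start + window]
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag < 0:
                r = np.corrcoef(a[:lag], b[-lag:])[0, 1]
            elif lag > 0:
                r = np.corrcoef(a[lag:], b[:-lag])[0, 1]
            else:
                r = np.corrcoef(a, b)[0, 1]
            best = max(best, abs(r))
        peaks.append(best)
    return np.array(peaks)  # higher values suggest more synchronized behavior

# Example with synthetic reliance signals for two teammates:
rng = np.random.default_rng(0)
teammate_a = rng.random(300)
teammate_b = np.roll(teammate_a, 2) + 0.1 * rng.random(300)  # lagged copy of A
print(windowed_synchrony(teammate_a, teammate_b))
```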
Matthew A. Peel, Jayci T. Landfair, Matthew J. Scalia, Myke C. Cohen, Matthew M. Willett, Nancy J. Cooke, and Jamie C. Gorman.
ABSTRACT: Modeling humans and robots together as a team can alleviate many of the challenges of human supervisory control; however, the construct suffers from several technological limitations. In this paper, we argue that mutual adaptation through bi-directional influence is a necessary property of all team members and is required for true human-automation teaming to exist. Bi-directional influences require that all members of a team possess the ability to change as a result of team interaction (plasticity) and be capable of detecting changes in others and the environment and dynamically adapting as a result of them (responsiveness). Trust at the team level and an understanding of trust dynamics may provide a promising way forward for imbuing these technologies with the capability of mutual adaptation.
Zahra Rezaei Khavas, Monish Reddy Kotturu, S. Reza Ahmadzadeh, and Paul Robinette.
ABSTRACT: As members of society, humans rely on morality and ethics for constructive coexistence. With the increasing integration of robots into various societal roles, it becomes crucial to consider their behavioral standards in relation to humans. The primary objective of this project is to explore whether undesirable robot behavior, stemming from violations of performance trust, elicits different effects on human trust compared to similar undesirable behavior resulting from moral trust violations. In our prior research, we developed a collaborative search task involving humans and robots, enabling the differentiation between performance and moral trust violations by robots. The outcomes of our previous experiment indicated that moral trust violations by robots have a more pronounced impact on human trust compared to performance trust violations of equal magnitude and consequences. Furthermore, we discovered that these distinct trust violation types by robots can be independently gauged through subjective and objective human-robot trust measures, which encompass various trust dimensions. As a continuation of our investigation, we aim to explore whether the effects of performance trust violations and moral trust violations can be differentiated using alternative human-robot trust metrics, specifically physiological measurements.
Xiaoyun Yin, Shiwen Zhou, Matthew J. Scalia, Ruihao Zhang, and Jamie C. Gorman.
ABSTRACT: Over the years, trust in Human-Robot Interaction (HRI) has been extensively analyzed by researchers, not just as a static measure but also as a dynamic process. This paper proposes an approach using Recurrent Neural Networks (RNNs) to predict the dynamics of trust in human-robot interactions. To apply RNNs to HRI trust prediction, we propose segmenting time series into smaller windows and using Long Short-Term Memory (LSTM) cells to account for the temporal dynamics of trust.
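A minimal sketch of the kind of windowed LSTM pipeline this abstract describes, assuming PyTorch; it is not the authors' implementation, and the window length, feature set, and network sizes are placeholders:

```python
# Illustrative sketch (not the authors' implementation): a sliding-window
# LSTM that maps a window of past interaction features to a trust estimate.
# Feature names, window length, and network sizes are assumptions.
import torch
import torch.nn as nn

WINDOW = 20      # number of past time steps per window (assumed)
N_FEATURES = 6   # e.g., reliance, response latency, robot reliability (assumed)

class TrustLSTM(nn.Module):
    def __init__(self, n_features=N_FEATURES, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar trust estimate per window

    def forward(self, x):                 # x: (batch, WINDOW, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
        return self.head(h_n[-1])         # (batch, 1)

def make_windows(series, window=WINDOW):
    """Segment a (T, n_features) time series into overlapping windows."""
    return torch.stack([series[t:t + window] for t in range(len(series) - window)])

# Example usage with synthetic data:
model = TrustLSTM()
series = torch.randn(200, N_FEATURES)   # placeholder interaction log
windows = make_windows(series)          # (180, WINDOW, N_FEATURES)
trust_hat = model(windows)              # predicted trust per window
```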
Matthew J. Scalia, Shiwen Zhou, Ruihao Zhang, Xiaoyun Yin, Nathan J. McNeese, and Jamie C. Gorman.
ABSTRACT: As artificial intelligence (AI) advances from functional to integrative, the embodiment of AI in robotic systems is imminent. In human-robot teaming (HRT), the definition, conceptualization, and measurement of team (dis)trust has been inappropriately scaled, leading to misinterpreted results due to (1) not accurately capturing the emergence of team (dis)trust; and (2) hierarchical and temporal data aggregation. In this paper, we compare the emergence of team (dis)trust and its measurement approaches in both information processing theory and dynamical systems theory. We also identify important future research avenues in relation to temporal team (dis)trust measurement and the use of dynamical systems analysis (DSA) and inferential statistics.
ABSTRACT: Wearable robotic systems are a class of robots that have a tight coupling between human and robot movements. Similar to non-wearable robots, it is important to measure a person's trust that the robot can support achieving the desired goals. While some measures of trust may apply to all potential robotic roles, there are key distinctions between wearable and non-wearable robotic systems. In this paper, we considered the dimensions and subdimensions of trust, with example attributes defined for exoskeleton applications. As the research community comes together to discuss measures of trust, it will be important to consider how the selected measures support interpreting trust along different dimensions for the variety of robotic systems that are emerging in the field in a way that leads to actionable outcomes.
ABSTRACT: This paper delves into the cobra effect of transparency within the domain of trust calibration in automated systems. The cobra effect, as applied to trust calibration, is the phenomenon where solutions intended to moderate or repair trust lead to opposite effects in human subjects. It is our position that for users to create mental models that calibrate for appropriate trust, they must first have an adaptive transparency model that does not fall victim to the cobra effect. This paper explores using transparency for trust calibration and how strategies designed to foster appropriate levels of trust in automated systems can inadvertently lead to states of over-trust or under-trust. Special attention is given to the dual role of transparency in automated systems, discussing its potential to create both over-trust and under-trust in autonomous systems. Additionally, the paper discusses balancing trust calibration with transparency, offering insights into design choices that mitigate the cobra effect. Finally, we discuss the future of transparency in trust calibration and provide some recommendations for design.
Jonathan Skaggs, Michael Richards, Melissa Morris, Michael Goodrich, and Jacob Crandall.
ABSTRACT: Understanding trust dynamics in mixed-motive multi-human, multi-robot (M4) societies is critical for effective human-robot interaction. This paper introduces a test-bed for investigating trust within these complex social structures. We explore relational trust (including reputational and positional trust), offering a framework for analyzing trust dynamics. Through a detailed exploration of the JHG, involving human participants, we demonstrate the interplay between these trust aspects and their impact on social cohesion and conflict. Our findings reveal the nuanced role of relational trust in shaping interactions and outcomes in M4 societies. This work highlights the limitations of traditional trust models and presents an avenue for further research on trust in human-robot collaboration.
Nicholas Conlon, Daniel Szafir, and Nisar R. Ahmed.
ABSTRACT: Appropriate trust is critical to successful human-robot interaction and human-robot teaming. Decades of research have shown that miscalibrated trust can lead to disuse, misuse, or abuse of automation and autonomy. Robotic competency self-assessments enable intelligent robots to perform introspection to qualify their capabilities and limitations, and to communicate that information to human partners. We believe that robot competency self-assessment and communication is a key enabler for trust calibration within a human-robot team. Competency self-assessments communicated to human users have been shown to influence trust and decision-making, with downstream impacts on team and task performance. Additionally, these assessments can occur before, during, and after an interaction or mission, making them well suited to calibrating trust as it changes in dynamic and uncertain environments.