Putting AI in the Critical Loop:
Assured Trust and Autonomy in Human-Machine Teams
Stanford, CA USA, March 21-23, 2022
About. There will always be interactions between machines and humans. When the machine has a high level of autonomy and the human-machine relationship is close, there will be underpinning, implicit assumptions about behavior and mutual trust.
The performance of the Human-Machine team will be maximized when a partnership is formed that is based on providing mutual benefits. Designing systems that include human-machine partnerships requires an understanding of the rationale of any such relationship, the balance of control, and the nature of autonomy.
Essential first steps are to understand the nature of human-machine cooperation, to understand synergy, interdependence, and discord within such systems, and to understand the meaning and nature of “collective intelligence.”
The reasons why it can be hard to combine machines and humans, attributable to their distinctly different characteristics and features, are also central to why they have the potential to work so well together, each ideally overcoming the other's weaknesses.
Across the widest range of applications, these topics remain a persistent, major concern of system design and development. Intimately related to these topics are the issues of human-machine trust and "assured" performance and operation of these complex systems, the focal topics of this year's proposed Symposium.
Recent discussions on trust emphasize that, with regard to human-machine systems, trust is bidirectional and two-sided (as it is among humans); humans need to trust AI technology, but future AI technology may, at the very least, need to trust human inputs and guidance as well. In the absence of an adequately high level of autonomy that can be relied upon, substantial operator involvement is required, which not only severely limits operational gains but also creates significant new challenges in the areas of human-machine interaction and mixed-initiative control.
The meaning of assured operation of a human-machine system also needs considerable specification. Assurance has historically been approached through design processes, by following rigorous safety standards in development and by demonstrating compliance through system testing, but largely in systems of bounded capability where human roles were similarly bounded. As an example, DARPA's Assured Autonomy program seeks "continual assurance", in which the safety and functional correctness of the system is established provisionally at design time and then continually monitored, updated, and evaluated at operation time as the system performs and adapts to its environment.
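To make the idea of continual assurance concrete, the minimal Python sketch below shows one way an operation-time monitor could sit alongside a design-time safety envelope: bounds fixed during verification are re-checked on every step of operation, and a running confidence value degrades when observed behavior drifts outside those bounds. The envelope, the speed/clearance limits, and the update constants are hypothetical illustrations introduced here, not part of the DARPA program or any standard.

    from dataclasses import dataclass, field

    @dataclass
    class SafetyEnvelope:
        """Hypothetical design-time bounds the running system must respect."""
        max_speed: float       # m/s, fixed during design-time verification
        min_clearance: float   # m, fixed during design-time verification

    @dataclass
    class AssuranceMonitor:
        """Operation-time monitor: checks each observed state against the
        envelope and keeps a running confidence estimate."""
        envelope: SafetyEnvelope
        confidence: float = 1.0
        violations: list = field(default_factory=list)

        def evaluate(self, step: int, speed: float, clearance: float) -> bool:
            ok = (speed <= self.envelope.max_speed
                  and clearance >= self.envelope.min_clearance)
            if ok:
                # Slowly recover confidence while behavior stays in bounds.
                self.confidence = min(1.0, self.confidence + 0.01)
            else:
                # Record the violation and reduce confidence sharply.
                self.violations.append(step)
                self.confidence = max(0.0, self.confidence - 0.25)
            return ok

    if __name__ == "__main__":
        monitor = AssuranceMonitor(SafetyEnvelope(max_speed=5.0, min_clearance=2.0))
        # Simulated telemetry; the third step drifts outside the envelope.
        telemetry = [(0, 3.0, 4.0), (1, 4.5, 2.5), (2, 6.2, 1.5), (3, 4.0, 3.0)]
        for step, speed, clearance in telemetry:
            in_bounds = monitor.evaluate(step, speed, clearance)
            print(f"step {step}: in_bounds={in_bounds}, confidence={monitor.confidence:.2f}")

A fielded system would couple such a monitor to a fallback controller and to re-verification of the adapted behavior; the sketch only records violations and adjusts a confidence score.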
These intersecting themes of collective intelligence, bidirectional trust, and continual assurance form the challenging and extraordinarily interesting themes of this symposium.
Topics of Interest.
Trust and Emergence in Autonomous Human-Machine Teams
Ethics in Deploying Autonomous Human-Machine Teams
Societal Consequences of Autonomous Human-Machine Teams
Foundations of Autonomy and Collective Intelligence
Engineering Methods for Assured Autonomy
Testing Methods and Metrics for Trust/Bidirectional Trust Assessment
Machine Models of Human Trust Processes
Assessing Anthropomorphic Modeling for Trust and Ethics in AI
Realtime Autonomy Management
Shared Understanding in Human-Machine Systems
Bidirectional Explainability
Resilience in Human-Machine Teams
Democratic Model of AI
Additional Information. The 2020 AAAI Spring Symposium focused on research opportunities at the intersection of Systems Engineering and Artificial Intelligence for both conceptual and applied discussions on: 1) the exploration of the science of how humans and machines operate interdependently in a team, and 2) the effective and efficient engineering of such systems into well-functioning complex systems. Because interdependence in a team creates bi-stable effects(1) among humans that affect the design and performance of human-machine teams, one focal point for the 2020 symposium was a multidisciplinary discussion related to the underlying factors affecting interdependence.
In 2021, the Spring Symposium carried these discussions forward, remaining focused on leveraging Systems Engineering as a methodological foundation for achieving synergistic performance of AI-based Systems-of-Systems (henceforth "Autonomous Human-Machine Teams", AHMT), with the human role persisting in importance. Across the widest range of applications, these themes remain a major concern of system design and development. Other critical issues are those related to human-machine trust and the aspects of "assured" performance and operation of these complex systems, the focal topics of the proposed 2022 Symposium.
Understanding the mechanisms of trust is difficult; trust is affected by situational context and human factors, yet it is a central factor influencing the interaction between people and AHMTs. Misinformed representations of trust may cause misuse, abuse, or disuse of the system technology and its operational capability. The degree of system opaqueness or transparency further complicates understanding of and trust in these systems, and has led to a considerable surge in demand for "explainable" systems; see [1]. Recently, discussion of trust in AHMT systems as bidirectional and two-sided (as it is among humans) has emerged: humans need to trust AHMT technology, but future AHMT technology needs to trust human inputs and guidance as well. This imputes a need to engineer a trust-building process into the AHMT system; in [2], formalized notions of intrinsic trust, based on the AHMT system's observable reasoning process, and extrinsic trust, based on the AHMT system's external behavior, are developed. Notions of vulnerability, risk, and anticipation quickly enter the discussion of bidirectional trust.
Trust is essential in environments where one agent is vulnerable, because vulnerability imposes a measure of risk on that agent; in other words, trust is an attempt to anticipate the impact of behavior under risk, and distrust manifests in failed attempts to mitigate that risk. As the level of autonomy in AHMT systems increases, trust, verification, and validation become increasingly important and complex topics to address within a Systems Engineering process for operating a system autonomously in an open environment. The main factors that drive this complexity are internal and external to the system. Internally, increased levels of uncertainty in advanced autonomous processes need additional controls for assurance (e.g., imperfect information may feed the advanced autonomy and therefore require additional "checks"; see [3, 4]).
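As a toy illustration of this anticipation-under-risk view of trust, and of the intrinsic/extrinsic distinction drawn from [2], the Python sketch below updates a scalar trust value from two signals: whether the agent's observable reasoning checked out (intrinsic) and whether its external behavior matched what was anticipated (extrinsic), with the size of the update scaled by the risk carried by the vulnerable party. The weights and the update rule are illustrative assumptions made here, not a formalization taken from the cited work.

    def update_trust(trust: float,
                     reasoning_ok: bool,
                     outcome_matched: bool,
                     risk: float,
                     w_intrinsic: float = 0.4,
                     w_extrinsic: float = 0.6,
                     rate: float = 0.2) -> float:
        """Return a new trust value in [0, 1].

        reasoning_ok    -- intrinsic signal: the observable reasoning passed inspection
        outcome_matched -- extrinsic signal: behavior matched what was anticipated
        risk            -- exposure of the vulnerable agent in [0, 1]; higher risk
                           makes each interaction move trust further, up or down
        """
        evidence = w_intrinsic * float(reasoning_ok) + w_extrinsic * float(outcome_matched)
        # Scale the step by risk: high-stakes interactions move trust more.
        step = rate * (0.5 + 0.5 * risk)
        new_trust = trust + step * (evidence - trust)
        return max(0.0, min(1.0, new_trust))

    if __name__ == "__main__":
        trust = 0.5
        interactions = [
            (True, True, 0.2),    # low-risk success
            (True, False, 0.8),   # high-risk failure: outcome diverged from anticipation
            (False, True, 0.5),   # outcome fine, but reasoning was opaque or unsound
        ]
        for reasoning_ok, outcome_matched, risk in interactions:
            trust = update_trust(trust, reasoning_ok, outcome_matched, risk)
            print(f"trust -> {trust:.2f}")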
Our symposium will also focus on assurance and related topics including Test, Evaluation, Verification and Validation (T&E/V&V) of AHMT. We draw inspiration from [5], which describes the main challenges to T&E/V&V:
1) State-space explosion: autonomous systems typically have very large decision spaces and cannot be exhaustively searched, examined, or tested;
2) Unpredictable environments: AHMTs may make their own decisions in an environment, thereby producing a cognitive feedback loop that further explodes the state space;
3) Unpredictable behavior: Interactions between systems and system factors may induce unintended consequences; unexpected behavior may also result from local interactions between small, seemingly insignificant factors and is hard to predict and manage in open environments; and
4) Human-machine communication: Designing and testing “patterns-of-communication” that produce understandable and repeatable results has been challenging.
Several conclusions from the NSCAI's final report on AI [6] are also applicable to our proposed Symposium. The "ability of computer systems to solve problems" is rapidly improving and world-altering; e.g., China is attempting to claim AI leadership, damage from AI cyberattacks is increasing, and the Covid-19 pandemic has been destabilizing. However, the US government has not organized or invested to win this AI competition with China, leading the NSCAI to propose an integrated national strategy that counters cyber-espionage, defends against AI-enabled threats, and prepares the nation for the possibility of future warfare (p. 9). The risks from AI must be managed; national intelligence must be transformed to accommodate the world-altering effects of AI; digital talent must be promoted; and, of particular relevance to our proposed Symposium, confidence in AI systems must be established. The NSCAI report recommended that this confidence could be realized by building a democratic model of AI for use by national security (p. 11). Indeed, exploiting the bi-stability inherent in the state dependency [8] of interdependence is key to building a democratic model of AI [7].
References
[1] Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115
[2] Jacovi, A., et al., Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI, arXiv:2010.07487v3 [cs.AI] 20 Jan 2021
[3] Smithson, M., Chapter 10 Trusted Autonomy Under Uncertainty, in [Hussein A. Abbass, Jason Scholz, Darryn J. Reid, Editors], Foundations of Trusted Autonomy, Studies in Systems, Decision and Control Volume 117, Springer, 2018
[4] Dutta, R. G., Guo, X., and Jin, Y. (2016). Quantifying trust in autonomous system under uncertainties, 2016 29th IEEE International System-on-Chip Conference (SOCC), pp. 362–367
[5] Department of Defense Research & Engineering Autonomy Community of Interest (COI) Test and Evaluation, Verification and Validation (TEVV) Working Group Technology Investment Strategy 2015‐2018, https://apps.dtic.mil/dtic/tr/fulltext/u2/1010194.pdf
[6] NSCAI (2021). Final Report. National Security Commission on Artificial Intelligence. Retrieved 5/4/2021 from https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
[7] Davies, P. (2020), Does new physics lurk inside living matter? Physics Today, 73, 8, 34 https://doi.org/10.1063/PT.3.4546
[8] Wang, M., Arteaga, D. & He, B.J. (2013), Brain mechanisms for simple perception and bistable perception, PNAS, E3350–E3359, www.pnas.org/cgi/doi/10.1073/pnas.1221945110
[9] Eagleman, D.M. (2001), Visual illusions and neurobiology, Nature Reviews Neuroscience. 2 (12): 920–926. doi:10.1038/35104092
[10] Brincat et al. (2021), Interhemispheric transfer of working memories, Neuron, 109(6): 1055-1066.e4
Endnotes.
(1) Bi-stable refers to multiple interpretations; e.g., a bi-stable illusion is the faces-vase illusion (see p. E3351 in [8]). The brain "interprets" only one aspect of a bi-stable stimulus at a time [9]; however, the brain apparently stitches together the independent visual memories of the left and right hemifields from the two hemispheres [10].