Bi-directionality in Human-AI Collaborative Systems 

AAAI Spring Symposium Series

Stanford University, Stanford, CA, USA, March 25-27, 2024

The Symposium addresses the challenges of creating synergistic human and AI-based autonomous systems-of-systems. Recent advances in generative AI techniques, such as Large Language Models, have exacerbated growing concerns about AI, such as the risk, trust, and safety of using machines/AI in open situations. These concerns present major hurdles to the development of verified and validated engineered systems involving bi-directional pathways across the human-machine barrier; bi-directionality in this context means understanding the design and operational consequences of the human on the agent, and vice versa. Current discussions on human-AI interactions are fragmented, focusing either on the impact of AI on human stakeholders (and relevant human-factors considerations) or on potential ways of involving humans in computational interventions (e.g., data annotation, behavior interpretation). We believe the challenges associated with human-AI collaborative systems cannot be adequately addressed unless the underlying challenges associated with bi-directionality are taken into consideration.

Topics

We are interested in concepts associated with bi-directionality, including but not limited to:

Format of the Symposium

The symposium will include invited talks, presentations of accepted papers, panel discussions, and speed talks.

Submission

We welcome submissions as either a research paper (up to 8 pages) or an extended abstract (1-2 pages).

We invite submissions for two tracks:

Human-Centered Computing in the Age of AI

Computational intelligence has become more relevant than ever in society. From big data to LLMs and generative AI, computational intelligence has been the driving force underpinning technological development and application. Along with the excitement that accompanies this broad societal impact, computer scientists are often challenged to answer socio-ethical questions, e.g., how to safeguard generative AI. These questions typically concern the complex interplay between social desirability, design methods, and technical feasibility. They are difficult for computer scientists to answer, in part due to a lack of training and differences in language and mindset across disciplines, and in part due to the abstract, broad, and intricate nature of the relevant concepts, which are open to interpretation.

The goal of this track is to bring together computer scientists, designers, philosophers, and practitioners working on AI, ethics, and human-computer interaction to share and clarify perspectives on the responsible application of computational solutions, and to discuss methodologies for the conceptualization and operationalization of societal and human values in computational contexts. The track aims to create a supportive platform for discussions and explorations that are expected to ultimately contribute to the development of human-centered computing, together with a set of desiderata taxonomies representing 1) the hierarchies of societal and human values, 2) the relevant human-centered design methods, and 3) the computational requirements that can be translated into engineering practice.


Read more about this track on the Authors page.

Risk Perceptions and Determinations in Collaborative Human and AI-Based Systems

Building on research on interdependent human-machine teams that we presented at the Applied Human Factors and Ergonomics (AHFE) conference in 2023, this AAAI Symposium track focuses on the perceptions and determinations of risk for human-machine teams (Lawless, 2022a). We want to see these wide-ranging topics better defined, established, and understood across the diverse field of AI (including recent advances in generative AI), and we also want to explore their implications from systems engineering, social, ethical, and legal perspectives by recognizing the differences between low-risk and high-risk situations. For example, low-risk situations might include verbal and non-verbal communications among teammates performing intricate teamwork that depends on a shared, conscious, bidirectional recognition to perform these tasks (e.g., Sliwa, 2021; Butlin et al., 2023); in contrast, high-risk situations might include human-machine teams performing the same tasks but with suboptimal awareness of the context, created by their inability to substantiate each other's awareness (Lawless et al., 2019). How risky situations and risk perceptions are to be determined is crucial: by behavior, or by communication? What does it mean for a team member to be aware of what a teammate knows (Butlin et al., 2023)? And what considerations should be given to ethical and legal issues in low-risk versus high-risk situations?


Read more about this track on the Authors page.

Authors should follow the formatting guidelines in the AAAI-24 Author Kit and submit through EasyChair.

Important Dates

All times are in the Anywhere on Earth (AoE) time zone.

Announcements