Bi-directionality in Human-AI Collaborative Systems
Stanford University, Stanford, CA, USA, March 25-27, 2024
We invite submissions for two tracks:
Human-Centered Computing in the Age of AI
Risk Perceptions and Determinations in Collaborative Human and AI-Based Systems
Computational intelligence has become more relevant than ever to society. From big data to LLMs and generative AI, it has been the driving force behind technological development and application. Alongside the excitement generated by this broad societal impact, computer scientists are often challenged to answer socio-ethical questions, e.g., how to safeguard generative AI. These questions typically concern the complex interplay between social desirability, design methods, and technical feasibility. They are difficult for computer scientists to answer, in part due to a lack of training and the different languages and mindsets involved, and in part due to the abstract, broad, and intricate nature of the relevant concepts, which are subject to interpretation.
The goal of this track is to bring together computer scientists, designers, philosophers, and practitioners working on AI, ethics, and human-computer interaction to share and clarify perspectives on the responsible application of computational solutions, and to discuss methodologies for conceptualizing and operationalizing societal and human values in computational contexts. The track aims to create a supportive platform for discussions and explorations that will ultimately contribute to the development of human-centered computing, together with a set of desiderata taxonomies representing 1) the hierarchies of societal and human values, 2) the relevant human-centered design methods, and 3) the computational requirements that can be translated into engineering practice.
Read more about this track on the Authors page.
Building on research on interdependent human-machine teams presented at the 2023 annual conference of the Association for Human Factors and Ergonomics (AHFE), this AAAI Symposium track focuses on the perceptions and determinations of risks for human-machine teams (Lawless, 2022a). We want not only to see these wide-ranging topics better defined, established, and understood across the diverse field of AI (including recent advances in generative AI), but also to explore their implications for systems engineering and for social, ethical, and legal perspectives by recognizing the differences between low-risk and high-risk situations. For example, low-risk situations might include verbal and non-verbal communications among teammates performing intricate teamwork that depends on a shared, conscious, bidirectional recognition (e.g., Sliwa, 2021; Butlin et al., 2023) to perform these tasks; in contrast, high-risk situations might include human-machine teams performing the same tasks but with suboptimal awareness of the context, created by their inability to substantiate each other's awareness (Lawless et al., 2019). How risky situations and risk perceptions are to be determined is crucial: by behavior, or by communication? What does it mean in a team to be aware of what a teammate knows (Butlin et al., 2023)? And what considerations should be given to ethical and legal issues in low-risk versus high-risk situations?
Read more about this track on the Authors page.
All times are in the Anywhere on Earth (AoE) time zone.
Abstract registration deadline: Jan. 8th, 2024 (extended from Dec. 15th, 2023)
Submission deadline: Jan. 12th, 2024 (extended from Dec. 22nd, 2023)
Notifications to authors: Jan. 26th, 2024 (extended from Jan. 5th, 2024)
Camera-ready due: Feb. 2nd, 2024 (extended from Jan. 19th, 2024)
Mar. 13th, 2024: The program is online!
Dec. 21st, 2023: Submission deadlines extended.
Nov. 17th, 2023: Website is online!