Authors

We welcome submissions either as a research paper (up to 8 pages) or an extended abstract (1-2 pages).

We invite submissions for two tracks: 

Human-Centered Computing in the Age of AI

Computational intelligence has become more relevant to society than ever. From big data to LLMs and generative AI, it has been the driving force behind technological development and application. Alongside the excitement over this broad societal impact, computer scientists are increasingly challenged to answer socio-ethical questions, e.g., how to safeguard generative AI. Such questions typically concern the complex interplay between social desirability, design methods, and technical feasibility. They are difficult for computer scientists to answer, partly because of a lack of training and the different languages and mindsets involved, and partly because the relevant concepts are abstract, broad, intricate, and open to interpretation.

Adjacent fields have made relevant attempts: design-for-values highlights ethical principles and their translation into concrete system dispositions and organizational or social processes, while human-centered design stresses the central role of stakeholders, situated in social contexts, in the design, development, and use of technology. Despite their strong positioning, these approaches encompass many principles, concepts, and methods that are not necessarily straightforward, or even feasible, to implement in the context of computational intelligence.

The goal of this track is to bring together computer scientists, designers, philosophers, and practitioners working on AI, ethics, and human-computer interaction to share and clarify perspectives on the responsible application of computational solutions, and to discuss methodologies for conceptualizing and operationalizing societal and human values in computational contexts. The track aims to create a supportive platform for discussion and exploration, expected to ultimately contribute to the development of human-centered computing, together with a set of desiderata taxonomies representing 1) the hierarchies of societal and human values, 2) the relevant human-centered design methods, and 3) the computational requirements that can be translated into engineering practice.



Risk Perceptions and Determinations In Collaborative Human and AI-Based Systems

Building on the research on interdependent human-machine teams that we presented at the Applied Human Factors and Ergonomics (AHFE) conference in 2023, in this AAAI Symposium we focus on the perceptions and determinations of risk for human-machine teams (Lawless, 2022a). We want these wide-ranging topics to be better defined, established, and understood across the diverse field of AI (including recent advances in generative AI), along with their implications for systems engineering and for social, ethical, and legal perspectives, by recognizing the differences between low-risk and high-risk situations. For example, low-risk situations might include verbal and non-verbal communications among teammates performing intricate teamwork that depends on a shared, conscious, bidirectional recognition (e.g., Sliwa, 2021; Butlin et al., 2023); in contrast, high-risk situations might include human-machine teams performing the same tasks but with suboptimal awareness of the context, created by their inability to substantiate each other’s awareness (Lawless et al., 2019). How risky situations and risk perceptions are to be determined is crucial: by behavior, or by communication? What does it mean, in a team, to be aware of what a teammate knows (Butlin et al., 2023)? And what considerations should be given to ethical and legal issues in low-risk versus high-risk situations?

In today’s risky, rapidly evolving situations, should risk perceptions by autonomous human-machine teams be treated unidirectionally or bidirectionally? Specifically, are human and machine teammates in servile roles only, or should bidirectionality govern, too? For example, if an F-35 senses that its pilot has passed out during a high-g maneuver, the plane takes over until the pilot recovers (Lang, 2021). But can we extend and generalize an autonomous machine’s takeover to even riskier situations, such as a copilot committing suicide (viz., the Germanwings commercial airliner in 2015, killing all aboard), or a pilot who becomes dysfunctional for whatever reason (e.g., an F-35 recently abandoned by its pilot after ejection continued to fly on for 60 miles, an unguided missile; Copp & Pollard, 2023)? With lives and expensive systems at risk, how are we to address bidirectional risks in these new environments? Today, insurers are trying to estimate risks from mistakes by AI; from AI models that fail, or fail to work as predicted; from financial losses caused by generative AI; and from copyright and privacy infringement; but, overall, this new complexity makes it “hard for insurers to assess risk” in their search for reliable statistical estimators (Lin, 2023). Moreover, if risk perceptions differ from risk determinations, how are these differences to be reconciled, when and where possible? Specifically, if an “aware” commercial airliner concludes that its copilot is committing suicide (perhaps after conferring with ground control), should we humans (legally, ethically, politically) authorize the plane to safe itself by cutting off the copilot’s controls and assuming command (Sofge et al., 2019)? Or, in the case of the extraordinarily expensive F-35 that continued to fly as an unguided missile (Copp & Pollard, 2023), should we allow the plane, using technology available today, to safe itself (again, perhaps while conferring with ground control), declare an emergency, contact the nearest airport, and proceed to land once given oral permission?

These considerations, created by the interdependence of task skills, the recognition of conscious awareness, shared communications, and intricate teamwork, arise because how individuals in human teams fit into a unit is not only unknown but, according to the National Academy of Sciences, possibly unknowable: the “performance of a team is not decomposable to, or an aggregation of, individual performances” (Endsley, 2021, p. 11). This extraordinary claim by the Academy not only supports our research on the structure and performance of interdependent human-machine teams (Lawless et al., 2023b); maximum interdependence (Endsley, 2021; Cummings, 2015) also indicates a loss of Shannon information, recognized by Shannon himself (Young, 2004), which makes advancing the science of teams hinge on new models of information that account not only for interdependence and its effects, but also for the embodied (interdependent) communication entailed in teamwork (Lawless et al., 2023b; Sliwa, 2021).
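As a minimal sketch of the information loss at issue, stated only in standard information-theoretic terms and not as a model drawn from the works cited above: for two teammates whose states are random variables X and Y, the joint entropy satisfies

    H(X, Y) = H(X) + H(Y) - I(X; Y),

so the stronger the interdependence (the larger the mutual information I(X; Y)), the further the team's joint information falls below the sum of its parts. This identity offers one reading of the claim that a team's performance is not an aggregation of individual performances.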



Writing Guidelines

Authors should follow the formatting guidelines in the AAAI-24 Author Kit.

Submission Guidelines

All submissions must be submitted through EasyChair.

Reviewing Process

The invitation of contributors and presenters will be based on a rigorous single-blind review of submitted papers.

After the acceptance notification, at least one author of each accepted submission must register for the symposium and present the paper there.

Reviewers' Comments

We encourage authors to address the reviewers’ comments as well as possible. In doing so, authors should not add new content that would entail further review.

Camera-ready Version

Upon acceptance, authors should expect an invitation to submit the "camera-ready" version of their submission. Detailed instructions will be included in that email.

Proceedings

Accepted papers shall be published as part of the “Proceedings of the AAAI Symposium Series” by the AAAI Library. The proceedings will be available after the conference.

Authors can choose not to have their paper included in the proceedings by contacting the organisers of the symposium.

Important Dates

All times are in the Anywhere on Earth (AoE) time zone.