Human Alignment in AI Decision-Making Systems:
An Inter-disciplinary Approach towards Trustworthy AI
IEEE CAI 2025 Workshop
Santa Clara, California, USA
May 5-7, 2025
About the workshop
Scope and Aims
Over the last twenty years, the use of AI in low- and high-stakes decision-making systems has increased dramatically. This widespread adoption has led to present-day concerns around AI misalignment. Many turn to human alignment of AI systems to address these concerns, aiming to reduce undesirable or harmful behaviors and make AI systems more aligned with human values and characteristics. Although there is no universally accepted definition of, or approach to, alignment, it is broadly understood as the goal and process of making AI systems behave in line with human intentions and values.
The goal of this workshop is to present engaging and high-impact challenges to the community by formulating and discussing key fundamental questions from the perspectives of computer science, artificial intelligence, psychology, and the broader social sciences. The workshop will address several important questions raised in the research community, for example: Is it possible to align AI to human values? If so, which values and/or attributes should we be aligning to? How does alignment work across and between individuals and levels of an organization? How can we align AI to humans in ways that are more succinct and more reliable than current approaches in the AI literature? Might human alignment increase humans' likelihood to trust and/or delegate responsibility to AI? Why or why not, and if so, how do we get there? What are the ethical, legal, and societal implications of human alignment?
Content and Objectives
The workshop will solicit contributions in the form of paper submissions and invited talks covering whether, and if so how, human-aligned AI will lead to more trustworthy and better-performing AI systems, including, but not limited to:
Methods and Theory
Theory of alignment
Arguments/positions for and against human aligned AI
Models that characterize and quantify human decision-making and values for decision-making performance and trust in AI
Models designed from trusted decision-makers
Analysis of human decision-making processes and frameworks and their relation to trust in other people/AI and to performance
Metrics for associating alignment with trust
Metrics or quantification of AI alignment
Application
Applied research use cases
Ethics, risk, and fairness of human-aligned AI decision-makers
Laws, regulation, and policy for human-aligned AI decision-makers
The call for papers can be found here.
Reach out to davidchan@berkeley.edu or ewartdevisser@gmail.com if you have any questions about the workshop.
2025 IEEE Conference on Artificial Intelligence (IEEE CAI) Website: https://cai.ieee.org/2025/
Organizing Committee
Matt Molineaux
Parallax Advanced Research
Matthew.Molineaux@parallaxresearch.org
David Chan, Ph.D.
Berkeley Artificial Intelligence Research
UC Berkeley
davidchan@berkeley.edu
Ewart de Visser, Ph.D.
Warfighter Effectiveness Research Center
U.S. Air Force Academy
ewartdevisser@gmail.com
Program Committee
Neil Shortland, Ph.D. - UMass Lowell
Amy Summerville - Kairos Research
Jennifer McVay - CACI, Inc.
Ritwik Gupta - UC Berkeley
Brian Hu - Kitware
Christopher Funk - Kitware