Human-AI Alignment
Developing a research agenda by bridging interdisciplinary approaches
With the increasing capability and proliferation of AI systems across domains, ensuring that these systems align with human intentions and values is crucial. Research in disciplines such as computer science, human-computer interaction, philosophy, and policy typically targets a single aspect of AI alignment, leading to a siloed understanding of its challenges. This workshop aims to foster a comprehensive understanding of human-AI alignment by integrating diverse disciplinary perspectives.
By creating a mutual understanding of human-AI alignment and its barriers, the workshop seeks to build an interdisciplinary community for ongoing collaboration. The half-day workshop will proceed through three phases: idea generation, small group discussions to define and address critical challenges, and a final group validation and refinement of findings. This structured approach aims to ensure a thorough and inclusive examination of human-AI alignment.
The workshop is open to researchers, practitioners, designers, and developers in the fields of artificial intelligence and autonomous systems. It will take place in person in Austin, TX.
Interested participants must submit a statement of at most one page detailing their interests and prior work relevant to AI alignment, along with their resume or CV, by September 6th. Selection will prioritize the disciplinary diversity of participants, the relevance of their backgrounds to AI alignment, and their motivation for attending the workshop.
Email submissions to whitney.nelson@utexas.edu.
Dr. Min Kyung Lee - University of Texas at Austin
Whitney Nelson - University of Texas at Austin
Jonathan Lynn - University of Texas at Austin
Angie Zhang - University of Texas at Austin