ICLR-HAIC 2025
ICLR 2025 Workshop on Human-AI Coevolution
April 27, 2025
Singapore EXPO
HAIC 2025, the First Workshop on Human-AI Coevolution, is a one-day workshop co-located with ICLR 2025 in Singapore. It focuses on the emerging field of Human-AI Coevolution (HAIC) and on understanding the feedback loops that emerge through continuous human-AI coadaptation.
The workshop looks beyond AI performance benchmarks, exploring multiple levels of analysis—from collaborative behavior between a single human and a single AI agent, to long-term interactions among many humans and AI systems, to impacts on social institutions such as healthcare and criminal justice.
Check out our accepted articles and our lineup of speakers and panelists below; see you in April!
Submission Open: 22 December, 2024
Submission Deadline: 10 February, 2025, 11:59PM, Anywhere on Earth
Notification of Acceptance: 5 March, 2025
Camera-Ready Paper Due: 27 March, 2025, 11:59PM, Anywhere on Earth
Workshop Time: 27 April, 2025 (subject to announcement from ICLR) (Agenda)
Our workshop is broadly interested in the emerging field of Human-AI Coevolution (HAIC), spanning the practice of developing and measuring coevolution, long-term interactions, and practical impacts across social institutions.
In particular, we are interested in work that delves into the following subject areas:
1. Human-AI Interaction and Alignment: human expectations and trust in AI systems, their design principles, and ethical and societal impact.
2. Algorithmic Adaptation and Robustness: techniques for improving algorithmic adaptation and robustness in a human context, including enhancements to alignment techniques, technical frameworks for improving AI adaptability to human preferences, and techniques for ensuring generation diversity.
3. Long-Term Societal Impact and Safety: implications and evaluations of HAIC on governance and socio-technological systems, including novel treatments of AI safety in light of dynamic human-AI interactions.
4. Bidirectional Learning Beyond Performance Metrics: rethinking evaluation through the lens of HAIC, and effects on human decision-making after long-term interaction with AI.
5. Shaping Collective Behavior and Learning: influences of AI on consensus building, and the role and biases created by AI in AI-mediated environments.
6. Dynamic Feedback Loops in Socially Impactful Domains: real-time influence of AI on specific domains (e.g., healthcare, education, criminal justice), and addressing domain-specific demands of AI interactions (especially in safety-critical or high-stakes environments).
7. Socio-Technological Bias, Norms, and Ethics: analysis of how AI systems perpetuate or mitigate societal biases and shape social norms, while considering the implications for decision making.