As AI systems are increasingly adopted in high-stakes domains such as healthcare, autonomous driving, and criminal justice, their failures may threaten human safety and rights.
Human oversight of AI systems is therefore a key safeguard against harmful consequences in such high-risk applications.
Although regulations like the European AI Act mandate human oversight for high-risk AI, we lack methodologies and conceptual clarity to implement it effectively. Independent of policy and regulation, poorly designed oversight can create dangerous illusions of safety while obscuring accountability.
This interdisciplinary workshop brings together researchers from AI, HCI, psychology, law, and policy to address this critical gap. We will explore questions such as:
How can we design AI systems that enable meaningful human oversight?
What methods effectively communicate system states and risks to human overseers?
How can human interventions in AI systems be made both scalable and effective?
Through papers, talks, and interactive group discussions, participants will identify oversight challenges, examine stakeholder roles, discuss supporting tools, methods, and regulatory frameworks, and establish a collaborative research agenda.
Our central goal is to advance a roadmap toward effective human oversight and the responsible deployment of AI in society.