Topics
The workshop invites full papers, short papers, and position papers addressing topics including, but not limited to, the following themes:
Conceptual models of overreliance and misuse in AI-enabled systems.
Mechanisms of supervisory degradation and reduced human monitoring.
Inability to disengage from AI recommendations in high-risk environments.
Erosion of human expertise and intervention capacity in AI-mediated operations.
Design approaches for calibrated reliance and meaningful human oversight.
Human-in-the-loop and supervisory control architectures for safety-critical AI.
Methods for identifying and analyzing reliance-related risks in safety assessments.
Interface and interaction strategies to mitigate overdependence.
Integration of overreliance aspects into safety cases and assurance arguments.
The workshop explicitly encourages interdisciplinary contributions bridging safety engineering, human factors, AI governance, and regulatory perspectives.
Intended Audience
The workshop targets:
Researchers in safety, reliability, and AI assurance.
Human factors and cognitive engineering specialists.
AI governance and regulatory scholars.
Industrial practitioners from safety-critical domains.
Certification and standardization experts.
PhD students and early-career researchers working on AI safety.