Objectives and Scope
Artificial Intelligence (AI) is increasingly embedded in safety-critical domains such as transportation, healthcare, and industrial automation. These systems promise enhanced performance, improved prediction, and advanced decision support, yet they also shift human roles from direct control to supervisory oversight. In this supervisory configuration, overreliance may emerge as a systemic vulnerability. Overreliance arises when operators uncritically accept AI outputs, reduce monitoring, or defer excessively to automation. Such dynamics can erode expertise and weaken intervention capacity, creating serious risks even when the AI operates within its limits. Although the EU Artificial Intelligence Act mandates meaningful human oversight, implementing it remains challenging due to limited guidance on reliance risks. The goals of ORCAS 2026 are to:
Explore technological, human, organizational, legal, ethical, and commercial dimensions of AI overreliance.
Investigate assurance strategies and mitigation mechanisms for preventing harmful reliance patterns.
Facilitate interdisciplinary dialogue within the SafeComp community on sustainable human–AI collaboration.
The important dates are:
Paper submission: May 4, 2026
Notification of acceptance: May 18, 2026
Camera-ready submission: June 8, 2026
Workshop: September 22, 2026