STIMULATING COGNITIVE ENGAGEMENT IN HYBRID DECISION-MAKING

Friction, Reliance and Biases

(Second Edition)

Co-located with HHAI 2025 - Fourth International Conference
on Hybrid Human-Artificial Intelligence 

June 10, 2025, Pisa, Italy
9AM - 6PM | Piazza dei Cavalieri, 7, 56126 Pisa PI, Italy


Topics & Issues

In its second edition, this workshop builds on and expands its exploration of friction-in-design in AI systems, hosting a full-day event that challenges the pursuit of seamless, rapid interactions. In contrast to the conventional narrative that human over-reliance on AI stems solely from cognitive biases, we emphasize the critical role of designers and developers in fostering user empowerment, skill retention, and ethical responsibility. 

This approach advocates for a balanced perspective on Human-AI interaction: one that harmonizes operational efficiency with the demands of meaningful, mindful, and effective human knowledge work. 

Central to our discussion is the concept of ‘friction-in-design’ or ‘frictional protocols’ in AI systems: deliberate design choices that introduce moments of reflection and cognitive engagement, even at the expense of speed. The main aim of these protocols is to reduce the risk of over-reliance on AI. 

The workshop will feature keynote presentations by leading experts alongside participant contributions, fostering interdisciplinary dialogue on design principles, methodologies for addressing reliance issues, and strategies for ensuring thoughtful and responsible AI interactions. 

Participants will explore the potential of ‘Frictional AI’ to collaboratively shape future research and practices that ensure AI use remains meaningful and responsible.


Keynote Speakers

Federico Cabitza (BSc, MEng, PhD) is an Associate Professor at the University of Milano-Bicocca, where he leads the Modeling Uncertainty, Decisions, and Interactions Laboratory (MUDILab) and teaches courses in human-computer interaction and decision support. He has extensively collaborated with hospitals in Milan, including the IRCCS Hospital Galeazzi and Sant’Ambrogio, where he co-founded the Medical AI Laboratory. His research focuses on the design and evaluation of AI systems for decision-making, particularly in healthcare, and their impact on organizations and user workflows. Author of over 150 publications in international conference proceedings, Prof. Cabitza has co-chaired international workshops and conference tracks and is listed among Stanford’s Top 2% Scientists. He is also co-author, with Luciano Floridi, of the book Artificial Intelligence: The Use of the New Machines (Bompiani).

Bart Van Leeuwen (MEng) is an expert on human factors, situational awareness, and high-stakes decision-making in high-stress, high-risk work environments. With over 30 years of experience as a firefighter and captain in Dutch fire departments, he leverages his frontline expertise to train organizations through SAMatters! (Situational Awareness Matters!), strengthening proactive decision-making across leadership and field operations. As founder of Netage B.V., he brings decades of IT expertise, developing global cloud-based solutions and contributing to W3C standards for data interoperability. He holds a guest position at the VU Human-Centric Data Science group (Vrije Universiteit Amsterdam), where he is a regular lecturer on the subject of data in the fire service. His innovative work on situational awareness earned the Best Short Paper award at the first edition of the Hybrid Human Artificial Intelligence Conference (HHAI 2022).


Call for Extended Abstracts (500-1,000 words)

Submission Deadline: April 4, 2025 AoE

Author Notification: May 2, 2025

Workshop (Pisa, Italy): June 10, 2025


Contributions are invited on topics including, but not limited to:

• Novel Design Principles for Cognitive Engagement: Proposing a balance between efficiency and cognitive engagement in AI design.

• Measurement and Mitigation Strategies for Over-reliance and Under-reliance: Introducing frameworks to evaluate AI’s impact on human judgment through novel metrics and/or proposals to mitigate risks like automation bias and deskilling.

• Calibration of Appropriate Trust in AI: Investigating how to promote appropriate levels of user trust in AI models, for example by scrutinizing how transparency can both aid and hinder trust calibration.

• Governance solutions for Cognitively Engaging AI Design: Discussing policy and governance approaches to promote cognitive engagement and frictional principles in AI, fostering responsible and ethical AI development.

• Applications and Case Studies: Works documenting and demonstrating practical applications of seamful/frictional principles in various settings, supported by user studies and/or open-source tools.

A collected volume of HHAI 2025 Workshop and Tutorial proceedings will be compiled under the CEUR-WS umbrella after the conference.


Author Guide

Abstract Length: 500–1,000 words (this limit does not apply to the References section)

Review Process: Single-blind; authors should include their name(s) and affiliation(s).

Format: Submissions should preferably follow the CEUR-WS single-column format; this format will be required for the camera-ready version after the workshop.
Ready-to-use templates are available.

Submission Portal: https://cmt3.research.microsoft.com/FrictionalAIWorkshop2025

Workshop Organizers

Programme Committee

...And more TBA!