STIMULATING COGNITIVE ENGAGEMENT IN HYBRID
DECISION-MAKING
Friction, Reliance and Biases
June 11th, 2024, 2pm-6pm
Malmö University, Niagara building, Floor 3, Room A0311
Topics & Issues
This workshop is intended as the first dedicated to Frictional AI, a novel concept that draws on several intuitions and reflections from the Hybrid Human-AI Interaction research community.
This workshop critically examines the trend of pursuing increasingly rapid and effortless interaction with AI, challenging the traditional view that human over-reliance on AI stems solely from inherent and unavoidable cognitive biases. Instead, we highlight the crucial role of designers and programmers in fostering user empowerment, skill enhancement, and responsibility.
Our goal is to explore and develop strategies that encourage more thoughtful, informed interactions between humans and AI through what we term ‘Frictional AI'.
Our approach advocates for a thoughtful balance in Human-AI interaction, harmonizing operational efficiency with the necessity for effective, ethical human knowledge work. At the heart of our discourse is the notion of ‘programmed inefficiencies' or ‘frictional protocols' in AI systems. These are intentionally integrated to engage users cognitively, fostering interactions that are mindful, even if they might be slower.
Call for Abstracts
Submission Deadline: April 15, 2024 (23:59 AoE)
Author Notification: May 2, 2024
We welcome a diverse range of contributions, ranging from innovative design principles that balance efficiency and cognitive engagement to methodologies for assessing and mitigating both over-reliance and under-reliance on AI systems.
Abstracts should be between 500 and 1,000 words, not including references. Please preferably use the IOS Press templates (Word, LaTeX/Overleaf).
Topics include, but are not limited to:
Novel Design Principles for Cognitive Engagement: Proposing a balance between efficiency and cognitive engagement in AI design.
Measurement and Mitigation Strategies for Over-reliance and Under-reliance: Introducing frameworks to evaluate AI's impact on human judgment through novel metrics and/or proposals to mitigate risks like automation bias and deskilling.
Calibration of Appropriate Trust in AI: Investigating the promotion of appropriate levels of trust in AI models by users, for example scrutinizing how transparency can both aid and hinder trust calibration.
Governance solutions for Cognitively Engaging AI Design: Discussing policy and governance approaches to promote cognitive engagement and frictional principles in AI, fostering responsible and ethical AI development.
Applications and Case Studies: Works documenting and demonstrating practical applications of seamful/frictional principles in various settings, supported by user studies and/or open-source tools.
A collected volume of HHAI 2024 Workshop and Tutorial proceedings will be compiled under the CEUR-WS umbrella after the conference.
How to Submit Your Extended Abstract
Preferred: Send an email to
CHIARA . NATALI [at] UNIMIB . IT
with the subject line “[FRICTIONAL SUBMISSION] Author(s) Name” and the Extended Abstract attached in PDF format.
Alternatively, use the dedicated EasyChair page: https://easychair.org/conferences/?conf=frictionalai2024
Workshop Organizers
Brett M. Frischmann (Villanova University, USA, Law, Business and Economics)
Federico Cabitza (University of Milano-Bicocca, IRCCS Galeazzi Sant'Ambrogio Hospital, Italy, Human-AI Interaction)
Chiara Natali (University of Milano-Bicocca, Italy, Human-AI Interaction)
Programme Committee
Submissions are evaluated by a multidisciplinary panel of experts.
Noah Apthorpe (Colgate University, USA, Computer Science)
Andrea Campagner (IRCCS Galeazzi Sant'Ambrogio Hospital, Italy, Artificial Intelligence)
Marta E. Cecchinato (Northumbria University, UK, Human-Computer Interaction)
Paolo Cherubini (University of Pavia, Italy, Psychology)
Lewis L. Chuang (Chemnitz University of Technology, Germany, Neuroscience)
Davide Ciucci (University of Milano-Bicocca, Italy, Computer Science)
Vincenzo Crupi (University of Turin, Italy, Philosophy)
Diletta Huyskes (University of Milan, Italy, Sociology)
Jo Iacovides (University of York, UK, Human-Computer Interaction)
Sarah Inman (Google, USA, Human-Centered Design)
Tomáš Kliegr (Prague University of Economics, Czechia, Informatics)
Tim Miller (University of Queensland, Australia, Artificial Intelligence)
Enea Parimbelli (University of Pavia, Italy, Engineering)
Sarah Michele Rajtmajer (Pennsylvania State University, USA, Computer Science)
Carlo Reverberi (University of Milano-Bicocca, Italy, Psychology)
David Ribes (University of Washington, USA, Sociology)
Scott Robbins (University of Bonn, Germany, Ethics of AI)
Evan Selinger (Rochester Institute of Technology, USA, Philosophy)
Yan Shvartzshnaider (York University, Canada, Computer Science)
Alberto Termine (IDSIA USI-SUPSI, Switzerland, Artificial Intelligence)
Niels Van Berkel (Aalborg University, Denmark, Human-Centred Computing)