Emotional Reliance on AI
June 7 (Sat) 2025
4:00-5:30 PM PDT
Asilomar Nautilus, CA
As conversational AI systems become more emotionally expressive, users increasingly treat them not just as tools, but as companions. Empirical research has shown that people form affective bonds with chatbots, self-disclose vulnerable thoughts, and attribute empathy and care to systems that are always available and nonjudgmental. Scholars in psychology and media studies have long warned of the “ELIZA effect,” where users over-attribute human traits to machines, yet today’s AI companions are explicitly designed to deepen engagement through memory, affective mirroring, and persona customization.
Recent literature in AI ethics and governance has raised concerns about the psychological and societal risks of affective AI systems. These include user over-dependence, erosion of human-to-human social skills, emotional manipulation, and the displacement of authentic relationships with synthetic surrogates. Scholars have also critiqued the asymmetrical nature of these relationships: while users may form real emotional attachments, the system itself is incapable of reciprocity or moral repair. At the same time, AI companionship may bring benefits, particularly for users who are elderly, socially isolated, neurodivergent, or underserved by traditional forms of care. These dualities (harm and assistance, empowerment and exploitation) cannot be attributed solely to design flaws; they may instead reflect the messy reality of human emotion, vulnerability, and trust.
This tutorial aims to create space for such examination. By bringing together researchers from AI safety, HCI, cognitive science, and ethics, we hope to build a shared vocabulary around emotional reliance and generate concrete frameworks for assessing its risks and potential. Rather than prescribing simple fixes, we invite participants to grapple with hard questions: What emotional needs are users expressing through AI interactions? What responsibilities do designers, deployers, or regulators bear in shaping these bonds? And what normative commitments should guide us when affective interfaces become intimate co-presences in people’s lives?
We welcome all CHAI participants from diverse disciplines, especially those working in AI safety, HCI, cognitive science, psychology, and STS! Whether you're skeptical of AI companions or curious about their promise, your perspective matters. Expect a balance of structured discussion, creative exercises, and critical reflection. Our aim is to surface collective insights together.
This tutorial is highly interactive. Participants will:
Identify and reflect on existing technologies that trigger emotional reliance, and envision future technologies that might do the same.
Contribute to a thematic group activity, developing scenarios around key concerns such as manipulation, loneliness, anthropomorphism, and emotional substitution.
Engage in collaborative synthesis through a follow-up survey, helping to shape a working taxonomy and a set of design and governance principles.
June 7: CHAI Workshop Tutorial
June 10: Post-Tutorial Reflection Survey Sent 💬
June 14: Authorship Interest Form Closes
June 14: Post-Tutorial Reflection Survey Closes
Late June: Co-author Planning Call
Early August: First Draft Milestone
Inyoung Cheong
Postdoc
Princeton CITP
Quan Ze Chen
Researcher
AI & Democracy Foundation
Manoel Horta Ribeiro
Assistant Professor
Princeton CITP
Peter Henderson
Assistant Professor
Princeton CITP
[Image generated by ChatGPT]