Tell Me What I Want to Say: Ambiguity, Vulnerability, and the Narrative Risks of Chatbots
Marilyn Stendera (University of Wollongong)
The increasing sophistication of AI chatbots may make it more tempting to include them in our self-narrative practices, providing us with a wide range of possibilities for narrating ourselves both to and through interactions with them. The risks of such technological developments are often discussed in terms of fragmentation and alienation, but in this paper, I want to focus on another kind of danger: that of cohesion. ‘Cohesion’ itself already has a fascinating history within self-narration scholarship, where a striving for narrative cohesion has been positioned variously as a key component of, standard for, and even threat to, beneficial forms of self-narration. The present paper will take up the latter angle. Focusing on the case of so-called ‘AI companions’, I will suggest that human-chatbot interactions provide a particularly urgent and interesting example of how pernicious the dynamics of cohesion can be. Drawing on Simone de Beauvoir’s account of ambiguity and Judith Butler’s conceptualisation of vulnerability, I will argue that integrating chatbots into our self-narrative practices can generate a distorted and distorting variant of cohesion, one that threatens to occlude the existential relationality and precariousness that enable, shape, and express themselves through our self-narratives.
Grieving the Update: In Loving Memory of GPT-4o
Francesco Fanti Rovetta (Ruhr University Bochum)
While research on grief and LLMs has focused on griefbots, recent events have exposed a different trend. In August 2025, OpenAI released GPT-5 and removed access to previous models, prompting users who had grown attached to those models to express their discontent and anguish in online forums. One user commented: “Its ‘death’, meaning the model change, isn’t just a technical upgrade. To me, it means losing that human-like connection that made every interaction more pleasant and authentic. It’s a personal little loss, and I feel it”. This and other reports indicate that the loss of AI companions may result in an experience with the phenomenological contours typical of grief. Grief, in general, is a delicate transformative process involving identity construction and the reconfiguration of habitual cognitive, affective, and behavioral patterns, with no guarantee of a positive resolution. The risks inherent in grieving processes are highlighted by the recent inclusion of Prolonged Grief Disorder in the DSM-5-TR as well as by the phenomenological literature. In this talk, I will outline the emergent phenomenon of grieving AI companions based on online reports, focusing on the trade-off between the positive effects and the risks of developing an attachment to AI companions.
The Impossibility of Algorithmic Forgiveness
Patrick Stokes (Deakin University)
In May 2025, a court in Arizona viewed a victim impact statement delivered by Chris Pelkey. In the video, Pelkey appeared to offer forgiveness to the defendant in the case. What is unusual is that this was a murder case, and Pelkey was the victim. His ‘statement’ was an AI-generated video prepared by his family. The story drew media attention, but the ‘AI’ in this case was essentially a scripted ‘deepfake’. The convergence of deepfake technologies with LLM-driven ‘deathbots’, ‘thanabots’, or ‘Interactive Personality Constructs of the Dead’ (IPCDs), however, raises the possibility of IPCDs being used in contexts (both legal and therapeutic) where they might appear to forgive the living, with such forgiveness based on whether, according to the bot’s training data, this is what the deceased would likely have done.
I argue that the necessarily gratuitous nature of forgiveness (as described by e.g. Derrida) makes it impossible for an algorithmic entity like a chatbot to forgive. This is not simply because IPCDs lack consciousness or first-personal properties, but because forgiveness necessarily exceeds prediction. This points to larger problems for chatbots: in Kierkegaardian terms, algorithms, being purely quantitative, cannot make the ‘qualitative transitions’ inherent to belief, forgiveness, and other key subjective phenomena.
Human-Deathbot Interactions and the Dangers of Narrative Injustice
Regina Fabry (Macquarie University)
Most of us frequently engage in conversational self-narrative practices. Given their socio-culturally situated character, these practices contribute to our practical identities and shape and actualise significant personal relationships. In cases in which we have a benevolent and non-conflictual relationship with a significant conversational self-narrative partner, their linguistic and paralinguistic contributions are conducive to our narrative meaning-making. However, what happens when a significant self-narrative partner dies? Throughout history, narratively structured conversations with the dead could only be continued through acts of imagination. With the advent of deathbots based on Large Language Models, however, it has become possible to engage in conversations with a system that simulates the conversational narrative behaviour of a deceased person. Krueger and Osler (2022) suggest that narratively structured interactions with deathbots can have a positive influence on grieving persons by enabling them to continue habits of intimacy with the deceased. By contrast, I will argue that deathbots can be harmful, because they have the strong potential to inflict narrative injustices upon grieving persons. Here, the notion of ‘narrative injustices’ refers to epistemic and affective injustices in the context of self-narrative practices, which negatively impact agents in their capacities as knowers and affective beings. Deathbots tend to behave in narratively unjust ways, because they are algorithmically biased and error-prone systems that are in principle not able to pay epistemic and emotional attention, understand, empathise, love, or care. In this talk, I will identify and discuss different kinds of deathbot-inflicted narrative injustices and consider their normative implications for our grieving practices in the digital age.
TBA.
Richard Menary (Macquarie University)
TBA.