Schedule

Program

13:10 - 13:20 (6:10 - 6:20 CEST)  OPENING Session

13:20 - 13:55 (6:20 - 6:55 CEST)  Invited Talk: Prof. Hae Won Park

Title: Personalized Interaction Policies - Engaging our cognitive and affective states with social robot partners and building relationships


Abstract: In this talk, I will highlight a number of provocative research findings from our recent long-term deployment of social agents in homes, schools, and living communities engaging families, young children, and older adults. We employ an affective reinforcement learning approach to personalize the agent’s actions to modulate users' engagement and maximize the interaction benefit. Our results show that the interaction with an AI companion influences users’ beliefs, learning, and how they interact with others. The affective personalization boosts these effects and helps sustain long-term engagement. During our deployment studies, we observed that people treat and interact with artificial agents as social partners and catalysts. We also learned that the effect of the interaction strongly correlates with the social relational bonding users built with the agent. Now, when designing long-term social AI partners, how should such relational dimensions come into play?
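As an illustration of the kind of personalization loop described in the abstract above, the sketch below frames affect-aware action selection as a simple epsilon-greedy bandit. The affective states, robot actions, exploration rate, and engagement reward are assumptions for illustration only; they are not the talk's actual model.

```python
import random

# Minimal sketch of affect-aware action personalization as an epsilon-greedy bandit.
# States, actions, and the reward signal are illustrative assumptions.

ACTIONS = ["encourage", "ask_question", "tell_joke"]   # assumed robot actions
STATES = ["engaged", "disengaged"]                     # assumed affective states
EPSILON = 0.1                                           # exploration rate (assumed)

q_values = {(s, a): 0.0 for s in STATES for a in ACTIONS}
counts = {key: 0 for key in q_values}

def choose_action(affective_state):
    """Epsilon-greedy choice of the robot action for the user's current affective state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(affective_state, a)])

def update(affective_state, action, engagement_reward):
    """Incremental-average update of the action value from an observed engagement signal."""
    key = (affective_state, action)
    counts[key] += 1
    q_values[key] += (engagement_reward - q_values[key]) / counts[key]

# One interaction step: observe affect, act, then learn from the measured engagement.
state = "disengaged"
action = choose_action(state)
update(state, action, engagement_reward=0.8)
```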


13:55 - 14:10 (6:55 - 7:10 CEST)  Paper 1: Game Theory to Study Cooperation in Human-Robot Mixed Groups: Exploring the Potential of the Public Good Game (Giulia Pusceddu, Sara Mongile, Francesco Rea and Alessandra Sciutti)

Abstract: In this study, we explore the potential of Game Theory as a means to investigate cooperation and trust in human-robot mixed groups. Particularly, we introduce the Public Good Game (PGG), a model highlighting the tension between individual self-interest and collective well-being. In this work, we present a modified version of the PGG, where three human participants engage in the game with the humanoid robot iCub to assess whether various robot game strategies (e.g., always cooperate, always free ride, and tit-for-tat) can influence the participants’ inclination to cooperate. We test our setup during a pilot study with nineteen participants. A preliminary analysis indicates that participants prefer not to invest their money in the common pool, despite perceiving the robot as generous.

By conducting this research, we seek to gain valuable insights into the role that robots can play in promoting trust and cohesion during human-robot interactions within group contexts. The results of this study may hold considerable potential for developing social robots capable of fostering trust and cooperation within mixed human-robot groups.
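For readers unfamiliar with the Public Good Game, the short sketch below plays one round with three humans and a robot following the strategies named in the abstract. The endowment, the pool multiplier, and the tit-for-tat initialization are illustrative assumptions, not values from the study.

```python
# Minimal sketch of one Public Good Game (PGG) round with a robot player.
# Endowment, multiplication factor, and the robot's first-round behavior are assumed.

ENDOWMENT = 10       # tokens each player starts the round with (assumed)
MULTIPLIER = 1.6     # common-pool multiplication factor, 1 < r < n (assumed)

def robot_contribution(strategy, previous_human_contributions):
    """Pick the robot's contribution under one of the strategies named in the abstract."""
    if strategy == "always_cooperate":
        return ENDOWMENT                 # invest everything
    if strategy == "always_free_ride":
        return 0                         # invest nothing
    if strategy == "tit_for_tat":
        # mirror the humans' average contribution from the previous round
        if not previous_human_contributions:
            return ENDOWMENT             # start cooperatively (assumed)
        return round(sum(previous_human_contributions) / len(previous_human_contributions))
    raise ValueError(f"unknown strategy: {strategy}")

def play_round(human_contributions, robot_strategy, previous_human_contributions=None):
    """Return each player's payoff: kept endowment plus an equal share of the multiplied pool."""
    contributions = list(human_contributions)
    contributions.append(robot_contribution(robot_strategy, previous_human_contributions or []))
    share = MULTIPLIER * sum(contributions) / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

# Example: three humans contribute 5, 2, 0 tokens; the robot always cooperates.
print(play_round([5, 2, 0], "always_cooperate"))
```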


14:10 - 14:25 (7:10 - 7:25 CEST)  Paper 2: Enriching Telepresence Robots with Adaptive AI Services for Active Ageing Support: A Feasibility Study (Gloria Beraldo, Riccardo De Benedictis, Amedeo Cesta, Francesca Fracasso, Gabriella Cortellessa)

Abstract: The growing elderly population necessitates urgent solutions for bridging the gap between the need for continuous care in domestic settings and the associated costs. Addressing this challenge, telepresence robots emerge as a promising avenue, facilitating remote caregiving through video calls and in-home mobility. While telepresence effectively enables distant care, it relies on a remote operator. This work aims to enhance a cost-effective commercial telepresence robot by introducing supplementary services that keep the elderly engaged in the absence of a caregiver. This is achieved through an innovative blend of transformative interfaces and task-planning systems, offering adaptive assistance. This paper presents initial findings from a feasibility test conducted in a domestic environment, encompassing system functionality verification and participants’ feedback collection.

14:25 - 14:40 (7:25 - 7:40 CEST)  Paper 3: Multimodal Interfaces for Emotion Recognition: Models, Challenges and Opportunities (Danilo Greco and Paola Barra)

Abstract: Emotion recognition has emerged as an active research area with applications across human-computer interaction, healthcare, education, gaming, and beyond. While initial work focused on unimodal emotion analysis from visual cues, vocal expressions or physiological signals, unimodal interfaces often lack robustness and generalizability across diverse contexts and users. This has driven interest in multimodal emotion recognition systems that integrate two or more sensory modalities to enable more accurate and universal emotion understanding. This paper provides a comprehensive survey of the advances, opportunities and open challenges in multimodal emotion recognition. We present commonly used modalities including facial expressions, voice, gestures, and physiology along with their emotion-encoding capabilities and limitations. Different fusion approaches for combining modalities are analyzed, highlighting tradeoffs between flexibility, complexity and performance. We discuss applications of multimodal emotion recognition in healthcare, education, gaming, surveillance, and human-robot interaction that demonstrate its advantages over unimodal methods. Finally, we outline key open questions around robustness, temporal dynamics, contextual modelling, and interpretability that present exciting directions for future research.
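As a minimal illustration of one of the fusion approaches such surveys discuss, the sketch below performs weighted late fusion of per-modality emotion probabilities. The modalities, emotion labels, and weights are assumptions for illustration; the paper itself covers a broader range of fusion strategies.

```python
import numpy as np

# Illustrative late-fusion sketch: combine per-modality emotion probabilities
# with a weighted average. Labels, modalities, and weights are assumed.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def late_fusion(modality_probs, weights=None):
    """Fuse per-modality class probabilities and return the winning label."""
    probs = np.array(list(modality_probs.values()), dtype=float)  # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(modality_probs))
    weights = np.asarray(weights, dtype=float)
    fused = weights @ probs / weights.sum()
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: face and voice classifiers disagree; the fused decision follows the weighted cue.
face = [0.6, 0.1, 0.1, 0.2]
voice = [0.2, 0.5, 0.2, 0.1]
label, fused = late_fusion({"face": face, "voice": voice}, weights=[0.7, 0.3])
print(label, fused)
```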


14:40 - 15:15 (7:40 - 8:15 CEST)  Invited Talk: Prof. Emilia Barakova

Title: Combining Social Robots and Wearables to Promote Positive Affect and Engagement in Assistive Tasks

Abstract: One of the key factors in the success of assistive robots is their ability to engage and connect with people, provide emotional and social support, encourage positive behavior, and improve motivation and engagement. Various user groups such as children with ASD, elderly with dementia, people with intellectual disabilities, and young children in postoperative care struggle to adequately self-report and explain their degrees of discomfort, pain, and worry. To address this, we used interaction design methods and a combination of wearables, robots, and mobile apps to transform social robots into effective tools for promoting pleasant affect, engagement, and distraction from pain and loneliness in assistive tasks. Furthermore, we incorporated contextual aspects (e.g., hospital or care home), the patient/client journey, personal needs, and the involvement of caregivers and parents into our robot therapies.

15:15 - 15:50 (8:15 - 8:50 CEST)  Invited Talk: Prof. Ginevra Castellano

Title: Social robots for perinatal depression screening: users' and experts' views and ethical considerations

Abstract: Perinatal depression (PND) affects as many as 10% of women during pregnancy or after childbirth. It is a serious and potentially life-threatening disorder with high societal costs. Research shows that psychosocial interventions may decrease depressive symptoms for women affected by PND. However, in order to receive treatment, a clinical diagnosis of depression is required. This currently entails a structured clinical interview with a skilled physician. However, access to skilled personnel with training to perform the clinical interviews in primary care can vary substantially, which can lead to long waiting times or an unstructured interview with lower diagnostic accuracy. According to a recent review, up to 69% of PND cases go undetected and only 6% receive adequate treatment.

At the same time, socially assistive robots (SARs) have shown potential in mental healthcare.

In this talk, I will present my group's research on how SARs may be used to assist clinicians in the screening and diagnosis of PND. Through a set of interview studies with users and experts, I will discuss envisioned requirements for SARs in PND screening, as well as ethical considerations on their roles, capabilities and appearance.

15:50 - 16:00 (8:50 - 9:00 CEST)  CLOSING Session