Human Aspects in Adaptive and Personalized Interactive Environments

1-4 July 2024

Cagliari, Sardinia, Italy

Accepted Papers

Full Papers


Characterising Gaps in Personalised Planning Support for Students with Autism

by Robin Cromjongh (Utrecht University); Maria Młocka (Utrecht University); Almila Akdag (Utrecht University); Judith Masthoff (Utrecht University); Hanna Hauptmann (Utrecht University)


Abstract: Students with Autism Spectrum Disorder (ASD) face many challenges that may differ from those of their neurotypical peers. One area where students with autism face deficits is planning. This paper examines the planning challenges faced and the strategies applied by students with autism compared to neurotypical students. We aim to identify where personalisation and adaptivity may help them become independent and effective in their planning behaviour. This research indicates which personalisation needs designers of assistive technologies should consider for planning and task management. We present an online survey with 30 neurotypical students (NTS) and 34 students with (self-)diagnosed autism (ASDS), and interviews with six students with autism for more in-depth insights. Results indicate that ASDS experience problems with gaining a clear overview of what they need to do, knowing when they might do it, and following that plan through to execution. In contrast to NTS, they also struggle with fitting routine household and self-care tasks into their schedule. We identified time-independent planning, identifying external pressure, and defining sub-tasks as promising planning strategies for students with autism when combined with adaptation to their personal needs.

Exploring Personalized Social Comparison in Online Education

by Kamil Akhuseyinoglu (University of Pittsburgh); Emma McDonald (University of Alberta); Aleksandra Klasnja Milicevic (University of Novi Sad); Carrie Demmans Epp (University of Alberta); Peter Brusilovsky (University of Pittsburgh)


Abstract: Students experience motivational issues during online learning, which has led to explorations of how to better support their self-regulated learning. One way to support students is to use social reference frames or social comparison in student-facing learning analytics dashboards (LADs) and open learner models (OLMs). Usually, the social reference frame communicates class averages. Despite the positive effects of class-average-based social comparison on students' activity levels and learning behaviors, comparison to the class average can be misleading for some students and offer an irrelevant reference frame, motivating only low or high performers. Such conflicting findings highlight a need for an investigation of social reference frames that are not based on the "average" student. We extend the research on social comparison in education by conducting two complementary classroom studies. The first explores the effects of different fixed social reference frames in a non-mandatory practice system, while the second introduces an adaptive social reference frame that dynamically selects the peers who serve as a comparison group when students are engaged in online programming practice. We report our analyses from both studies and share students' subjective evaluations of the system and its adaptive comparison functionality.

Navigating Serendipity – An Experimental User Study on the Interplay of Trust and Serendipity in Recommender Systems

by Irina Nalis (TU Wien); Tobias Sippl (TU Wien); Thomas Elmar Kolb (TU Wien); Julia Neidhardt (TU Wien)


Abstract: Recommender systems are integral to modern daily life, continually evolving to meet user needs. In the pursuit of enhanced user experiences, metrics like serendipity have emerged within the beyond-accuracy paradigm. However, integrating serendipitous recommendations poses multifaceted challenges, requiring a delicate balance between novelty, relevance, and user engagement. This interdisciplinary experimental study addresses these challenges within a book recommender system. Investigating the impact of interface design changes on user trust as an antecedent of satisfaction with serendipitous recommendations, we measured trust levels for each recommended item and for the recommender system itself. Our findings reveal that while interface enhancements did not significantly increase trust, they notably elevated serendipity ratings for previously unknown books. These results underscore the intricate interplay between technical and psychological factors in the design of recommender systems, emphasizing the importance of human-centered approaches for creating more responsible AI applications. This research contributes to ongoing discussions on user-centric recommendation systems and aligns with broader themes of digital humanism and responsible AI.

Using Large Language Models for Adaptive Dialogue Management in Digital Telephone Assistants

by Hassan Soliman (DFKI GmbH); Milos Kravcik (DFKI GmbH); Nagasandeepa Basvoju (DFKI GmbH); Patrick Jaehnichen (Aaron GmbH)


Abstract: The advent of modern information technology such as Large Language Models (LLMs) allows for massively simplifying and streamlining the communication processes in human-machine interfaces. In the specific domain of healthcare, and for patient-practice interaction in particular, user acceptance of automated voice assistants remains a challenge to be solved. We explore approaches to increase user satisfaction through language-model-based adaptation of user-directed utterances. The presented study considers parameters such as gender, age group, and sentiment for adaptation purposes. Different LLMs and open-source tools are evaluated for their effectiveness in this task. The models are compared, and their performance is assessed based on speed, cost, and quality of the generated text, with the aim of selecting an ideal model for utterance adaptation. We find that careful selection and task-specific parameter adjustment of language models, together with the collection of human feedback, are paramount to successfully optimizing user satisfaction in conversational artificial intelligence systems.

Short Papers


Harmonizing Ethical Principles: Feedback Generation Approaches in Modeling Human Factors for Assisted Psychomotor Systems

by Miguel Portaz (UNED); Angeles Manjarrés (UNED); Olga C. Santos (UNED)


Abstract: As the demand for personalized and adaptive learning experiences increases, there is an urgent need to provide effective feedback mechanisms within critical systems, such as psychomotor learning systems. This proposal introduces an approach for the integration of retrieval-augmented generation tools to provide comprehensive and insightful feedback to users. By combining the strengths of retrieval-based techniques and generative models, these tools offer the potential to enhance learning outcomes by delivering tailored feedback that is both informative and engaging. The proposal also emphasises the importance of incorporating explainability and transparency concepts. Following the hybrid intelligence paradigm, it is possible to ensure that the feedback provided by these tools is not only accurate but also understandable to humans. This approach fosters trust and promotes a deeper understanding of the psychomotor learning process, empowering users and facilitators to make informed decisions about the psychomotor learning path. The hybrid intelligence paradigm, which combines the strengths of both human and artificial intelligence, plays a crucial role in the deployment of these solutions. By taking advantage of the cognitive capabilities of human experts alongside the computational power of artificial intelligence algorithms, it is possible to offer personalised feedback that takes into account both technical accuracy and pedagogical effectiveness. Through these collaborative efforts, it is also possible to create learning environments that are inclusive, adaptable, and beneficial to lifelong learning.

Towards Integrating Human-in-the-loop Control in Proactive Intelligent Personalised Agents

by Awais Akbar (Trinity College Dublin); Owen Conlan (Trinity College Dublin) 


Abstract: This research explores the integration of Human-in-the-Loop (HITL) control within Proactive Intelligent Personalised Agents (PIPAs), which possess the capability to proactively anticipate users' needs and perform tasks on their behalf. The proactive assistance offered by PIPAs is tailored to individual users' preferences and behaviours. However, it is crucial to personalise the level of proactivity exhibited by PIPAs to align with users' preferences regarding the potential delegation of autonomy. This necessitates HITL control to regulate PIPAs' autonomy levels, ensuring appropriate user involvement in decision-making. Using a simulation-based approach, this research investigates the conditions that trigger HITL control, its mechanisms, and the challenges associated with these triggers.