ASIMOV 2024
Workshop on Adaptive Social Interaction based on
user’s Mental mOdels and behaVior in HRI
The 16th International Conference on Social Robotics, 23-26 October 2024, Odense (DK).
Link to our Virtual Room
Meeting link:
https://vu-live.zoom.us/j/98799140191?pwd=yF1gOFKGGfEhP6qeZL9mKRj0MerBsq.1
Meeting number: 987 9914 0191
Meeting password: 732732
ASIMOV - Adaptive Social Interaction based on user’s Mental mOdels and behaVior in HRI -
will be held on the 23rd of October in conjunction with the 16th International Conference on Social Robotics,
23-26 October 2024, Odense (DK).
The ability to understand and adapt to people’s mental models is a key objective for enabling natural, efficient, and successful human-robot interaction (HRI), particularly in human-centered scenarios where robots are expected to meet people’s social conventions. Theory of mind and mental models have been widely investigated in human-computer interaction; however, it remains unclear what level of awareness of others’ mental states a robot needs in order to communicate with people in a transparent and socially acceptable way. The ASIMOV workshop will constitute a unique opportunity to gather roboticists and computer scientists to discuss a variety of current and new approaches to endowing social robots with learning abilities and with enhanced cognitive and social skills based on mutual understanding (i.e., a shared Theory of Mind).
Despite the promise of social robots in various domains, users often remain cautious about employing them due to ethical, psychological, and safety concerns. Addressing these acceptability challenges requires considering users’ psychological and behavioral characteristics in the design of social interaction. Therefore, endowing robots with learning and online adaptation abilities is essential for improving human-robot interaction (HRI), especially in assistive, rehabilitation, and educational contexts. Understanding and adapting to users’ mental states can help address mismatches between people’s expectations of robots and the robots’ actual capabilities, thereby enhancing interaction efficiency. Such mismatches can lead to ambiguous perceptions and misinterpretations of robot actions, negatively impacting interactions. Recent research shows that robots’ acceptability increases when they can understand and meet people’s expectations during HRI. Equipping robots with basic socio-cognitive skills enables them to convey contextually appropriate affective and social signals in an intelligent and readable way. From this mutual comprehension of mental states, an effective HRI can emerge, suspending the disbelief of human partners and allowing trust, partnership, and acceptability.
This workshop aims to unite theories and practices that enhance social cognition and user awareness in HRI, particularly for socially assistive robots in education, entertainment, and healthcare, where user acceptability is critical. Topics will be explored from a multidisciplinary perspective, inviting experts in human-agent interaction, social robotics, cognitive sciences, artificial intelligence, psychology, neuroscience, and philosophy. Special focus will be given to state-of-the-art user modeling methods that assess overt information (e.g., behavior and speech) and covert information (e.g., cognitive states and emotional reactions) using tools such as motion capture, eye tracking, and biosignals.
We aim to bring together a diverse audience from the fields of social and assistive robotics, cognitive and behavioral robotics, and HRI. The workshop will serve as a platform for exchanging ideas, discussing innovative concepts, and addressing unresolved issues in ongoing research. We encourage the participation of PhD students and young researchers working on user modeling, HRI and control interfaces, machine learning, and ethical aspects of human-machine interaction, among others.
Topics of interest include, but are not limited to, the following:
Mental models in HRI
Human-aware perception-action loop
Emotion and intention recognition
Affective and cognitive sciences for socially interactive robots
Empathy and Theory of Mind in Robotics
Mutual affective understanding
Real-time monitoring of behavior and mental states
Detection of non-verbal behavioral cues
Online adaptive behavior
Multimodality in human-robot interaction
Acceptability and personalization
Physiological monitoring and biofeedback systems
BCI (brain-computer interfaces)-enabled adaptive interaction
Short- and Long-term personalization
Human partnership and trust in HRI
Explainable AI in HRI
Security and safety in HRI
Confirmed invited speakers:
Prof. Raphaëlle Roy, Université de Toulouse, France
Prof. Brian Scassellati, Yale University, USA