ASIMOV 2023
Workshop on Adaptive Social Interaction based on
user’s Mental mOdels and behaVior in HRI
The 15th International Conference on Social Robotics, 3-7 December 2023, Doha (Qatar).
Link to our Virtual Room
Meeting link:
https://qu.webex.com/qu/j.php?MTID=md234a32c4da9927a5335d210340fdd0c
Meeting number:
2373 425 6270
Meeting password:
xsJkPiJt537
Join from a video system or application
Dial 23734256270@qu.webex.com
You can also dial 62.109.219.4 and enter your meeting number.
ASIMOV - Adaptive Social Interaction based on user’s Mental mOdels and behaVior in HRI -
will be held on the 4th of December in conjunction with the 15th International Conference on Social Robotics
03-07 December 2023, Doha (Qatar).
ABSTRACT
The ability to understand and adapt to people’s mental models is a key objective for enabling natural, efficient, and successful human-robot interaction (HRI), particularly in human-centered scenarios where robots are expected to meet people’s social conventions. Theory of mind and mental models have been extensively investigated in human-computer interaction; however, it is still unclear what level of awareness of others’ mental states a robot needs in order to communicate with people in a transparent and socially acceptable way. The ASIMOV workshop will constitute a unique opportunity to gather roboticists and computer scientists to discuss a variety of current and new approaches aimed at endowing social robots with learning abilities and at enhancing their cognitive and social abilities based on mutual understanding (i.e., a shared Theory of Mind).
SCIENTIFIC CONTEXT
Endowing robots with learning and online adaptation abilities is a key objective for enabling natural and efficient human-robot interaction (HRI), especially in the areas of assistive, rehabilitation, and educational robotics. As Isaac Asimov, the famous Russian-American science-fiction writer, pointed out in his short-story collection The Complete Robot (1982), it is fundamental for humans and robots to adapt to each other in order to have a successful and efficient interaction. To this end, one of the critical challenges in human-robot interaction is to design robots with learning abilities that enable them to behave according to three criteria: efficiency, acceptability, and safety. For robots to fulfill these criteria, the robot and the human must understand each other’s intentions, beliefs, and desires.
The ability to interpret and adapt to users’ behavior and mental states could help resolve the mismatch between people’s expectations of robots (often elicited by a robot’s appearance) and robots’ actual capabilities and, therefore, enhance efficiency. Such a mismatch can lead to ambiguous perceptions and improper interpretation of the robot’s actions and intentions, negatively affecting the interaction. Designing human-aware social interaction paradigms therefore allows robots to automatically detect and correct inaccurate mental models held by users through adaptive behavior. Additionally, recent research shows that a robot’s acceptability increases when it is able to understand and meet people’s expectations (i.e., their mental models) during HRI. Equipped with basic socio-cognitive skills, robots can display contextually appropriate affective and social signals in an intelligent and readable way.
From the mutual comprehension of mental states, an effective human-robot interaction can emerge, suspending the disbelief of human partners and allowing trust, partnership, and acceptability. Despite the recognized potential and usefulness of social robots for assistive, educational, and entertainment purposes, people are still wary of interacting with them. Their hesitations are connected to different aspects of HRI, including ethical and psychological concerns as well as physical safety and security, such as the risk of privacy violations or physical harm. It is therefore essential to take into consideration the psychological and behavioral responses of the people who share their environments with robots, and to design social robot behaviors that proactively plan, manage, and execute the robot’s goals while easing the interaction. In tackling the challenges mentioned above, this workshop aims to bring together theories and practices that advance social cognition and user awareness in HRI to enrich the mutual understanding between humans and robots. This is especially desirable for socially assistive robots in education, entertainment, and above all healthcare, where the target user groups often include vulnerable people (e.g., elderly people or children with conditions compromising attentional or emotional responses) and the acceptability of the robots is of paramount importance.
OBJECTIVES
The workshop’s topics will be approached from a multidisciplinary perspective by inviting speakers with varied expertise, including human-agent interaction, social and assistive robotics, cognitive and behavioral sciences, artificial intelligence, psychology, neuroscience, and philosophy of mind. Specific attention will be given to state-of-the-art methods in user modeling through the evaluation of overt information (e.g., behavior and speech) and covert information (e.g., cognitive states and emotional reactions) using tools such as motion capture, eye tracking, and biosignals.
Target Audience
This workshop is intended as a forum for a broad audience spanning social and assistive robotics, cognitive and behavioral robotics, and social awareness and explainability in HRI. The workshop should be a place to exchange opinions, discuss innovative ideas, and get hints and suggestions on ongoing research, thereby contributing to tackling unresolved issues. The proposed topic brings together a large scientific community of researchers working on user behavior and intention detection, human-robot interaction, social and assistive robotics, control interfaces, learning, and ethical and safety issues in human-machine interaction, among others.
LIST OF TOPICS
Topics of interest include, but are not limited to, the following:
Mental models in HRI
Human-aware perception-action loop
Emotion and intention recognition
Empathy and Theory of Mind in Robotics
Mutual affective understanding
Real-time monitoring of behavior and mental states
Detection of non-verbal behavioral cues
Online adaptive behavior
Acceptability and personalization
Physiological monitoring and biofeedback systems
BCI (brain-computer interfaces)-enabled adaptive interaction
Short- and Long-term personalization
Human partnership and trust in HRI
Explainable AI in HRI
Security and safety in HRI
NEWS
Prof. Agnieszka Wykowska, Italian Institute of Technology (Genoa, Italy), has been confirmed as an invited speaker with the talk: Humans and robots in the context of sharing a task – joint action and sense of joint agency.
Prof. Alessandro Di Nuovo, Sheffield Hallam University (UK), has been confirmed as an invited speaker with the talk: How can adaptive social robots help develop human-centred services to benefit society?
Dr. Salvatore Anzalone, University of Paris 8 (Paris, France), has been confirmed as an invited speaker with the talk: Social Artificial Agents for Children with Neurodevelopmental Deficits.
The workshop will be held on 4 December 2023 from 14:00 to 17:00 UTC+3.
Visit the main conference Website for additional details on the venue (https://icsr23.qa/venue/) and registration (https://icsr23.qa/registration/).