This project deploys exploratory research to examine the integration of social robots in university receptions, whether supporting existing reception desks or functioning independently in locations without one. Using the GreetingBot Mini developed by OrionStar Robotics, the research adopts a holistic, user-centered approach, from initial requirements gathering to final user testing, ensuring the development of a reception robot that is both properly functional and well accepted by the university community. The evaluation will result in design recommendations for robots in hospitality contexts.
The increasingly autonomous and independent role of robot partners in human-robot collaborations gradually blurs who is responsible for success or failure. Understanding how robots can repair trust is crucial given that collaborative failures and social mishaps are inevitable in HRI. The way a robot responds in such situations significantly impacts how people evaluate their robot partner and their trust relationship. Yet, little research has been conducted on how robots should communicate information that may harm or repair the trust relationship in human-robot collaborations. This research project is a collaboration with the Dept. of Psychology and aims to design appropriate communication strategies for robots that negotiate potentially negative collaborative outcomes without compromising the trust relationship with their human partners. This research is funded by the Human-Centered AI Focus Area of Utrecht University.
As the integration of robots into society gains prominence, ethical questions such as how humans should design, deploy, and treat robots deserve closer examination. Specifically, as robots enter human communities, they become part of social structures known as norms: the social, moral, and legal rules of how to (not) behave in specific contexts. Initial findings show that people, under some circumstances, blame or punish robots for their (in)actions in normative situations. However, which robot behaviors are perceived as morally sensitive, and how people morally judge robots for such norm-violating behavior, remain unknown. This research has been funded by the HUMAN-AI Alliance, EWUU program between TU/e, UU, WUR, and UMCU, and won the KROS Interdisciplinary Research Award in Social Human-Robot Interaction at the IEEE RO-MAN conference 2024.
This research project concerns the application of a social robot in child healthcare settings, in collaboration with The Center for Youth and Family (CJG Capelle a/d IJssel). Supporting children with communication and social-emotional difficulties in their psychosocial development places a relatively high burden on parents, teachers, and caregivers. In co-creation with the children and those involved, this project aims to develop a personal buddy robot with whom the children can carry out social-educational activities in an innovative, accessible, and motivating way. By embedding the robot buddy in a blended care solution, we offer an extra dimension to existing programs to meet current needs (e.g., supporting children to better connect with their social environment, offering professionals better access to children, and involving parents in the care process). The goal is to bridge the current gap in ecological validity and the lack of knowledge about long-term effects by co-designing robot application scenarios with children, parents, and healthcare professionals, and systematically testing these scenarios in longitudinal studies in real-world contexts.
While interacting with robots, people inevitably construct mental models to understand a robot's actions. However, people build these mental models on their experience of interacting with other living beings. This leads to ambiguous perceptions of robot actions, wrongful accusations of errors, miscalibrated trust in robots, and ineffective human-robot collaborations. This research investigates how people's understanding of robot capacities and intentions increases when robots explain their actions. Such understanding enables accurate error correction, supports blame verification, fosters appropriate trust assessment, and sustains effective human-robot collaborations.
People may mistreat and bully robots, especially when robots are left relatively unattended. Certain behavioral cues executed by the robot seem to encourage bystanders to intervene in cases of mistreatment. Additionally, people attribute gender to humanlike robots and ascribe to them qualities associated with the perceived gender. Psychology research reveals that women (compared to men) are more likely to be mistreated. This interaction effect between gender and mistreatment needs further research in the context of human-robot interaction (HRI). This collaborative project with TU Eindhoven investigates the effects of perceived robot gender on the robot's likelihood of being mistreated by a layperson, and assesses people's experiences of and reactions to the mistreatment of humanoid robots.
This research uses the study of behavior explanations, a core component of human social cognition, as a novel technique for examining people's readiness to infer mental states from robot behavior and to manage a robot's social standing. The proposed studies build on the folk theory of behavior explanations, which subtly reveals people's inferences of mental states and examines the implicit attitudes people hold toward robots. Drawing on my expertise in HRI, including survey, laboratory, and rarely used longitudinal methods, I will examine how people explain robots' behaviors and what such explanations reveal about the cognitive and social underpinnings of human-robot relations. The results will contribute to technology design and policy directions for explainable and transparent robot behavior. This project was funded by the Netherlands Organization for Scientific Research (NWO) Rubicon grant 446-16-007.
After the merger of the Faculty of Behavioral Sciences with the Faculty of Management and Public Administration, the University of Twente launched the Tech4People initiative to stimulate collaboration between the existing departments. My postdoc proposal on the ethics of human-robot relationships was among those granted a one-year postdoc position. The project, 'Human-Robot Relationships and the Good Life', aimed to investigate whether and how the relationships some users are willing to establish with social robots may contribute to those users' psychological well-being.