Second International Workshop on Evaluation Methods Standardization in Human-Robot Interaction

Saturday, August 27th

To be held in conjunction with the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)

August 26-31, New York City, USA

Objective: to understand how to design evaluations that avoid biases and ensure valid results. We need specialists in designing evaluations.

The use of robots is becoming increasingly prevalent in our society, especially for assisting people in their daily tasks. Robots can take on several roles, such as home care robots (e.g. for seniors), mediators (e.g. for people with autism), and companions (e.g. for children alone at home). When a new application or behavior is created on a robot, researchers need to validate it. Obviously, they need to validate technical aspects: has the robot correctly executed its tasks, has it correctly moved its actuators… But they also need to validate psychological aspects, because humans have a tendency to anthropomorphize robots and can reject a robot that does not respect particular social norms, for example. Evaluating an application on a robot is becoming complex, because the need to understand how humans experience the interaction is not easily met with our current methodologies. One common objective in HRI is to "maximize" well-being, so we need to understand which social skills are important, what the impact of HRI is, which roles a robot can and cannot fulfil, and so on. Our interest lies in this psychological part.

People who create robot applications are typically computer scientists or roboticists. They are often not experts in evaluating human-robot interactions and their effects. As such, input from psychologists, ethologists, sociologists, philosophers, anthropologists, and others who specialize in analyzing humans (behaviors, attitudes, communication, ...) is invaluable. These disciplines use different techniques, but all are to a large extent readily applicable to Human-Robot Interaction studies. For example, psychologists evaluate in controlled environments, which requires studying Human-Robot Interaction in a laboratory setting. Even if these kinds of evaluation bring knowledge, they do not help in evaluating Human-Robot Interaction in the wild. Worse, the existing literature is full of articles presenting studies performed without such specialists, which often contain methodological errors or biases. Therefore, we believe it is necessary to standardize Human-Robot Interaction evaluation methods.

This workshop is a follow-up to the first workshop, which was held at the International Conference on Social Robotics in 2015 (EMSHRI 2015). During this first event, several evaluation methods were presented from a theoretical standpoint, along with good and bad practices in designing an evaluation. In this second workshop, we would like to build on this knowledge, and we invite people from different research fields to share their experiences in evaluating the relation between humans and robots. The workshop aims to explore the methods used in existing studies in order to determine which methods fit which scientific questions. It also aims to extend the body of knowledge on good and bad practices, and to elaborate recommendations and guidelines in collaboration with participants of the first workshop, through the publication of an international book.

Past event: EMSHRI 2015