The ICSR 2015 Workshop "Evaluation Methods Standardization in Human-Robot Interaction" is a workshop held in conjunction with the Seventh International Conference on Social Robotics, in Paris (France) on October 26th, 2015.
This workshop aims to bring together people from different research fields in order to share their experiences and methodologies concerning Human-Robot Interaction evaluation.
When building a social robot, it is very important to take humans into account both before designing the robot and afterwards, in order to evaluate its impact on people. It is now well known that "the human is in the loop!" But computer scientists and roboticists are not experts in evaluating such interactions and effects. That is why they need to collaborate with psychologists, ethologists, sociologists, philosophers, anthropologists, and other specialists in analyzing human behaviors and attitudes. These disciplines use different techniques, more or less suited to Human-Robot Interaction studies. For example, psychologists conduct evaluations in controlled environments, which requires evaluating Human-Robot Interaction in the laboratory, in dedicated rooms. Even if these kinds of evaluations bring knowledge, they do not help evaluate Human-Robot Interaction in real life (in real contexts).
This workshop aims at exploring existing evaluation methods, sharing knowledge about good and bad practices that could be applied to HRI, elaborating guidelines for HRI evaluations, and discussing common standards.
Thus, the objective of this first workshop is to answer the following questions:
Which methodologies from Human-Human Interaction and Human-Animal Interaction are applicable to Human-Robot Interaction?
What are good or bad practices? Which common mistakes or biases should be avoided when designing an evaluation, whatever the partners studied?