Would people follow the gaze of the robot during a handover task?
Would gaze direction affect task performance?
Do eye movement patterns differ depending on the congruency between the robot's action and the goal?
We adapted the task from Perez-Osorio et al. (2017) into an interaction scenario with the robot. In the paradigm, an object was requested auditorily (e.g., "something to drink" or "something to do the laundry"). Participants were then asked to report a target letter (V or T) presented after the robot's gaze shift. The robot looked either at the congruent object (e.g., the orange juice when something to drink was requested) or at the incongruent object (e.g., the softener when something to drink was requested).
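To make the design concrete, here is a minimal sketch of how such a trial list could be built; the factor names, levels, and repetition count are illustrative assumptions, not the original design:

    import itertools
    import random

    # Hypothetical factor levels (assumptions for illustration).
    REQUESTS = ["drink", "laundry"]            # auditory request category
    CONGRUENCY = ["congruent", "incongruent"]  # gazed object vs. request
    VALIDITY = ["valid", "invalid"]            # gaze location vs. target letter
    TARGETS = ["V", "T"]                       # letter to report

    def build_trials(reps=4, seed=1):
        """Fully cross the factors and shuffle into a randomized trial list."""
        cells = itertools.product(REQUESTS, CONGRUENCY, VALIDITY, TARGETS)
        trials = [{"request": r, "congruency": c, "validity": v, "target": t}
                  for r, c, v, t in cells] * reps
        random.Random(seed).shuffle(trials)
        return trials

    print(build_trials()[0])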
Behavioral responses were collected using OpenSesame (experiment software) and eye movements using Tobii Pro Glasses. Integrating and synchronizing the experiment control, the robot's behavior, and the eye movement measures was a considerable engineering challenge.
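One way to approach this integration, sketched below, is to command the robot over YARP from the experiment script while writing timestamped event markers that can be aligned offline with the glasses recording. The port names, command format, and marker file are assumptions, not the setup actually used in the study:

    import time
    import yarp  # YARP Python bindings

    yarp.Network.init()

    # Hypothetical port names; the actual iCub gaze interface differs.
    gaze_port = yarp.Port()
    gaze_port.open("/experiment/gaze:o")
    yarp.Network.connect("/experiment/gaze:o", "/icub/gaze/cmd:i")

    def look_at(object_label, logfile):
        """Send a gaze command and log a timestamped marker so robot
        events can be aligned with the eye-tracking stream offline."""
        cmd = yarp.Bottle()
        cmd.addString("look_at")
        cmd.addString(object_label)
        gaze_port.write(cmd)
        logfile.write(f"{time.time():.6f}\tgaze_onset\t{object_label}\n")

    with open("markers.tsv", "a") as log:
        look_at("orange_juice", log)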
Human performance was assessed through reaction times and accuracy, together with eye movement metrics (proportion and duration of fixations), to compare gaze following as a function of object congruency.
Additionally, we explored eye movements during the handover, after the response.
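As an illustration of how these metrics can be derived, a minimal sketch assuming a tidy fixation table with hypothetical column names (participant, aoi, duration_ms):

    import pandas as pd

    # Hypothetical fixation table; column names are assumptions.
    fixations = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2],
        "aoi": ["face", "object", "face", "face", "object"],
        "duration_ms": [320, 180, 450, 510, 220],
    })

    # Proportion of fixations per AOI within each participant.
    counts = fixations.groupby(["participant", "aoi"]).size()
    proportions = counts / counts.groupby("participant").transform("sum")

    # Mean fixation duration per AOI within each participant.
    durations = fixations.groupby(["participant", "aoi"])["duration_ms"].mean()

    print(proportions)
    print(durations)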
We observed the same pattern of results reported previously: participants followed iCub's gaze and responded faster to targets appearing at the gazed-at location, but, importantly, only when the robot looked at the expected object.
Reaction times (in milliseconds) showed that participants followed the robot's gaze (faster responses to validly cued targets) in the congruent and neutral conditions.
Differences between valid and invalid gaze cues revealed a strong effect across conditions.
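For readers interested in how this effect is typically quantified, a minimal sketch assuming trial-level data with hypothetical file and column names; the gaze-cueing effect is the invalid-minus-valid reaction time difference per participant:

    import pandas as pd
    from scipy.stats import ttest_rel

    # Hypothetical trial-level data; file and column names are assumptions.
    df = pd.read_csv("trials.csv")  # participant, condition, validity, rt_ms

    # Mean RT per participant x congruency condition x cue validity.
    cell_means = (df.groupby(["participant", "condition", "validity"])["rt_ms"]
                    .mean()
                    .unstack("validity"))

    # Gaze-cueing effect: invalid minus valid RTs.
    cell_means["cueing_effect"] = cell_means["invalid"] - cell_means["valid"]

    # Paired test of valid vs. invalid RTs within the congruent condition.
    congruent = cell_means.xs("congruent", level="condition")
    t, p = ttest_rel(congruent["invalid"], congruent["valid"])
    print(f"t = {t:.2f}, p = {p:.4f}")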
Participants mostly fixated on the face of the robot, as expected. They also switched between the robot's face and the object during the handover.
Participants mostly looked at iCub's face in all conditions.
No differences in fixations on the objects were observed across scenarios.
Participants' fixations during the handover of the congruent object (heatmaps and fixation sequence).
Reference: J. Perez-Osorio, D. De Tommaso, E. Baykara, and A. Wykowska (2018). Joint Action with iCub: A Successful Adaptation of a Paradigm of Cognitive Neuroscience in HRI. In Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2018), pp. 152-157. doi: 10.1109/ROMAN.2018.8525536.