Introduction: We are interested in examining human emotional responses to our Funnel Bot and its lighting system before, during, and after engaging with it, and in comparing those responses to how opinions change after playing catch with a person in the same manner, both with and without signaling. Since we make no attempt to give our robot a human appearance or any anthropomorphic features or functionality, we want to observe how playing catch affects a human’s perception of the robot, and how the presence or absence of signaling influences that perception. We are curious how perception changes as a result of engaging with the robot and experiencing it firsthand, as opposed to merely knowing what its function is. Prior anticipation can play a significant role in an interaction, and we want to know how signaling (both human and robot) can improve or degrade perception when combined with these prior expectations. We would like to gauge the trust and confidence in collaboration that participants ascribe to a non-anthropomorphic robot. We will measure this change in trust before and after playing catch, both with and without LED lights to signal intent, and with both a non-responsive and a normally responsive human partner.
Related Work:
1) Playing Games with Robots – A Method for Evaluating Human-Robot Interaction: This paper discusses how robots that comply with human behavioral patterns are easier for humans to relate to, and ultimately collaborate with. It also discusses the usefulness of games, and the way they are played, as testbeds for HRI experiments [1].
2) Evaluating Fluency in Human-Robot Collaboration: This paper is primarily concerned with fluency, elaborating on related terms to provide better metrics for evaluating cooperative HRI. It suggests several key points for evaluating the fluency of these interactions, and how to analyze them objectively [2].
3) Robot Catching: Towards Engaging Human-Humanoid Interaction: This third paper discusses using a high degree-of-freedom humanoid robot to play a game of catch, and the steps taken to enable such behavior. The authors cite prior research indicating that humanlike behavior encourages cooperation, suggesting that a humanlike robot could play catch effectively [3].
From our initial review of these and other previous works, there is a clear interest in creating more human-like or relatable robots and then evaluating how these physical traits, coupled with behavior, encourage cooperation. Paper 1 focuses on how human-like robots are easier to relate to, but not on how a decidedly inhuman robot that can still accomplish a task might change perceptions. Paper 3 takes this a step further, suggesting a human-like robot may be best suited to play catch, since it appeals to human interests. This ignores the fluency of the interaction, as detailed in paper 2: our catching robot may be simpler to play with than a high degree-of-freedom humanoid, and with a less relatable appearance, it may build trust and collaborative ability through ease of play rather than anthropomorphism. We believe that interaction fluency with an inhuman robot may be as important as evoking human attributes in building trust and preference. To examine this, we have participants play a game of catch with a human actor who tosses the ball back and forth without any additional engagement (no facial expression, no cues, waiting to throw without signaling). We expect this interaction to be evaluated somewhat negatively, since a human is expected to engage more. With a basic, non-anthropomorphized robot, playing catch should focus on the experience of catch itself and may yield a different outcome, since expectations are vastly different than with a human partner. In both cases, the change in trust may hinge on expectations, with an inhuman robot exceeding them and a human falling short of anticipated responses.
Methods, Results and Discussion: Nick ran the experiments as follows: he administered five surveys, one before the experiment and one after each stage, to track shifting perspectives and feelings. There were four stages: catch with an unresponsive funnel, catch with a funnel that signals intent via LED colors, catch with an unresponsive person, and catch with a normally responsive person. Because our throwing mechanism is still unreliable, in the robot stages the ball was routed behind a screen and thrown back to the participant after a random delay. In the second stage, the LEDs cycled through rainbow colors while waiting to catch a ball, glowed bright red after catching it, and flashed green immediately before the ball was thrown back; a sketch of this signaling logic appears below. The unresponsive-human stage consisted of normal catch, except the actor would wait to throw the ball, arm poised, withholding any signal of intent or timing. The last stage was a normal game of catch. Every stage consisted of six back-and-forth tosses, and the stages occurred in the same order for every participant. Ten participants, all robotics graduate students, were assessed.
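For clarity, the following is a minimal sketch of the LED signaling described above, written as a three-state controller. The set_color driver, specific RGB values, and timings are our own illustrative assumptions, not the exact firmware running on the Funnel Bot.

```python
import time
from enum import Enum, auto

class CatchState(Enum):
    WAITING = auto()    # ready to receive a throw
    CAUGHT = auto()     # ball is held
    THROWING = auto()   # throw is imminent

def set_color(r, g, b):
    """Placeholder for a real LED driver call (e.g., writing to a NeoPixel strip)."""
    print(f"LED -> ({r}, {g}, {b})")

# Six-step rainbow cycle shown while waiting for a throw.
RAINBOW = [(255, 0, 0), (255, 127, 0), (255, 255, 0),
           (0, 255, 0), (0, 0, 255), (139, 0, 255)]

def signal(state, hold=0.6):
    """Drive the LEDs for one signaling state of the catch cycle."""
    if state is CatchState.WAITING:
        for color in RAINBOW:                 # rainbow glow while waiting
            set_color(*color)
            time.sleep(hold / len(RAINBOW))
    elif state is CatchState.CAUGHT:
        set_color(255, 0, 0)                  # bright red after catching
        time.sleep(hold)
    elif state is CatchState.THROWING:
        set_color(0, 255, 0)                  # green flash just before the throw
        time.sleep(0.2)

# One full catch-and-return cycle:
for s in (CatchState.WAITING, CatchState.CAUGHT, CatchState.THROWING):
    signal(s)
```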
The following charts contain the results of the survey questions on several subjects and track the average responses across the experiment. All questions requested feedback on a five-point scale, with one being low or negative and five being high or positive.
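The averages and uncertainties reported below can be reproduced from raw responses as a mean and standard deviation. A minimal sketch follows, using hypothetical ratings rather than the actual study data, and assuming the reported spread is a population standard deviation.

```python
from statistics import mean, pstdev

# Hypothetical five-point responses from ten participants for one question;
# illustrative only, not the recorded study data.
ratings = [4, 4, 3, 5, 4, 4, 5, 4, 4, 5]

avg = mean(ratings)
spread = pstdev(ratings)   # population standard deviation
print(f"{avg:.1f} ± {spread:.2f}")
```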
These results indicate a few interesting points. First, average trust in robots was essentially unchanged, although the uncertainty around that average narrowed slightly over the course of the experiment. This suggests that adding signaling to the Funnel Bot had little to no effect on overall trust in robots. Confidence showed more interesting trends: average confidence dipped after the first round, then returned to a slightly improved and rising level thereafter. This suggests that after the first interaction without signaling, both certainty and confidence in ability dropped on average, but were restored and then enhanced after the introduction of signaling and then catch with a human partner. Practice may also have contributed substantially to confidence: as participants continued to throw the ball, they became more familiar with how to toss it. Based on this data, however, the greatest positive increase occurred after the addition of LED signaling, indicating a possible connection between higher confidence and the inclusion of signaling (true for both rounds 2 and 4, robot and human).
Average trust of strangers was largely unchanged, with a slight positive trend. This may also be correlated with the addition of signaling, which eased some of the discomfort felt initially when a person withheld signaling. The rated importance of communication showed a clear upward trend throughout the survey, indicating the participants recognized that the addition of signaling enhanced both kinds of encounters.
The pre- and post-experiment questionnaires asked about overall feelings towards robots, strangers, and playing catch. For robots, participants shifted from a rating of 3.8 ± 0.98 to 4.2 ± 0.87. The higher score and smaller spread indicate that this experiment, at the very least, appears to have improved perceptions of robots. The improvement could stem from general enjoyment of the session, or from an appreciation for how the robot may have exceeded expectations. For strangers, participants shifted from a rating of 2.6 ± 0.8 to 2.4 ± 0.8. This very small decrease could be an artifact, or it could result from negative feelings experienced when expected signaling is withheld. When asked about playing catch, overall feelings increased from 3.4 ± 1.28 to 3.7 ± 0.9.
In the above figure, the participants’ catch and throw success rates are shown after each of the four stages. Interestingly, the worst average score and highest variability were observed during the stage with light signaling, even though that stage correlated with increased confidence. This could indicate that although the lights were appreciated, they may have served as a distraction and reduced participants’ accuracy. Between the third and fourth rounds, the addition of signaling and normal interaction in the human condition improved the success rate by roughly 15%.
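Since each stage consists of six tosses, a stage’s success rate is simply the number of clean catches (or on-target throws) divided by six. The sketch below illustrates the tally; the counts are hypothetical, not the recorded study data.

```python
# Hypothetical per-stage catch counts (out of six tosses) for one participant;
# illustrative only, not the recorded study data.
catches = {"robot, no signaling": 4,
           "robot, LED signaling": 3,
           "human, unresponsive": 5,
           "human, responsive": 6}

for stage, n in catches.items():
    print(f"{stage}: {n / 6:.0%}")
```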
Conclusion and Future Work: We previously identified trust and confidence in a foreign entity, before and after cooperating, as the key metrics to examine. These highlight how expectations play a role in HRI, and what impact a simple robot that makes no attempt to manipulate the emotional state of the interaction may have on individuals, compared to a human providing the same functionality. The results of this study seem to indicate that the addition of signaling has little to no impact on overall trust of either robots or strangers. However, confidence rose considerably, as did the perceived importance of communication. In summary, the addition or removal of signaling may not impact trust, but it can enhance confidence in the encounter.
If done again, we might consider randomizing the order of the stages across participants to better control for the effects of rising confidence; see the sketch below. This would allow feelings after each stage to be studied on their own, rather than as a product of cumulative improvement. Additionally, we would want to clarify some of the questions and their meaning, such as trust in strangers. We suspect that the evaluation questions regarding strangers, which showed minor change throughout this study, felt less related to the experience. We might choose different wording, such as “trust of someone who is unresponsive” or “someone who is not expressive,” instead of the vague and broad category of “strangers.”
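As a minimal sketch of that counterbalancing idea, each participant could receive an independently shuffled stage order; the stage names and seeding scheme below are our own assumptions, not part of the original protocol.

```python
import random

STAGES = ["robot, no signaling", "robot, LED signaling",
          "human, unresponsive", "human, responsive"]

def stage_order(participant_id, seed=42):
    """Return a reproducible, per-participant shuffled stage order."""
    rng = random.Random(seed * 100 + participant_id)  # deterministic per participant
    order = STAGES.copy()
    rng.shuffle(order)
    return order

for pid in range(1, 4):
    print(pid, stage_order(pid))
```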
References:
[1] M. Xin and E. Sharlin, “Playing Games with Robots – A Method for Evaluating Human-Robot Interaction,” 2007.
[2] G. Hoffman, “Evaluating Fluency in Human-Robot Collaboration.”
[3] M. Riley and C. G. Atkeson, “Robot Catching: Towards Engaging Human-Humanoid Interaction,” Auton. Robots, vol. 12, no. 1, pp. 119–128, 2002.