Mission
HRI can be implemented in several ways: the human and the robot can collaborate in close contact (e.g., coordinated lifting tasks); they can cooperate, working alternately on different tasks within a process without direct interaction while sharing the same objective and workspace; or they can interact remotely (e.g., teleoperation of a robotic system through computer applications such as virtual reality), where the robot assists the human in tasks deemed too dangerous for direct human involvement, or in hard-to-reach places or hostile environments. All of these modes of HRI require the human agent and the robotic system to adapt to each other and to the interaction environment.
Recent work in cognitive science and computational modeling can inform adaptive HRI for robotics. For example, in an HRI task, given observed human behaviors and accounting for the human's cognitive bounds and the task's environmental constraints, such computational models can help the robot infer the human's goals, intentions, or even subjective utility functions. These models can also help the robot predict human decisions and behaviors given the inferred goals. Accounting for the human agent in this way is more likely to reduce inconvenience, threat, annoyance, or harm to human users, and instead to provide greater accessibility, functionality, and protection.
Given this background and motivation, our workshop focuses on three main questions:
How can human-centered design improve human-robot interactions?
How can cognitive models help develop robotic strategies?
Why is adaptive human-robot learning important, and how can we model adaptive human-robot interaction?