Service robots will be co-located with human users in unstructured, human-centered environments and will benefit from understanding each user's daily activities, preferences, and needs in order to fully assist them. Objects in the user's environment provide useful context for understanding and grounding information about the user's instructions, preferences, habits, and needs.

The semantics of objects and scenes have traditionally been investigated in robotics within the perception, navigation, and manipulation domains, but recent work has shown their benefits in a human-robot interaction (HRI) context for understanding and assisting human users. This workshop aims to explore ways to learn and ground abstract semantics of the physical world towards assistive autonomy in areas such as:

  • Unobtrusive learning from observations

  • Preference learning from contextual observations

  • Predicting and reasoning over physical effects of human actions

  • Anticipating a user's intent by observing their interactions with the environment

  • Continual learning and adaptation to user needs

  • Enhancing transparency of autonomous robot behavior