This dataset was collected within the CHIST-ERA project "Interactive Grounded Language Understanding" (IGLU). The IGLU project website is here.

Figure: Baxter robot used to collect the data, and sample frames from the dataset recordings.

Multimodal Human-Robot Interaction (MHRI) dataset

Authors and collaborators:

Pablo Azagra, Ana C Murillo, Javier Civera. Universidad de Zaragoza.

Yoan Mollard, Florian Golemo, Manuel Lopes. INRIA, Bordeaux.

Brief Description of the MHRI dataset:

The dataset contains recordings of 10 different users teaching the robot common kitchen objects. Each session consists of synchronized recordings from three cameras and a microphone mounted on the robot (a sketch for aligning these streams follows the list):

  • An RGB-D camera covers the user's manipulation of objects and interaction with the robot
  • An RGB-D camera mounted on top of the robot provides a top view of the whole scene
  • An HD RGB camera points at the user's head to capture the face and facial expressions
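Because the three camera streams run at different rates, they typically need to be aligned by timestamp before use. The following is a minimal sketch of such an alignment, not the official loader: it assumes each stream has been exported as an index CSV of "timestamp,filename" rows (the file names interaction_rgbd.csv, top_rgbd.csv and face_hd.csv are illustrative placeholders; the actual MHRI layout may differ).

```python
# Sketch only: align the three camera streams by nearest timestamp.
# File names and CSV layout are assumptions, not the official MHRI format.
import bisect
import csv
from pathlib import Path


def read_index(csv_path: Path):
    """Return sorted (timestamp, filename) pairs for one camera stream."""
    rows = []
    with csv_path.open(newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                rows.append((float(row[0]), row[1]))
    rows.sort()
    return rows


def nearest(stream, t):
    """Index of the frame in `stream` whose timestamp is closest to t."""
    times = [ts for ts, _ in stream]
    i = bisect.bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(stream)]
    return min(candidates, key=lambda j: abs(times[j] - t))


def synchronized_triplets(root: Path):
    """Yield (interaction, top, face) frame filenames aligned on the
    interaction camera's timestamps."""
    interaction = read_index(root / "interaction_rgbd.csv")  # assumed name
    top = read_index(root / "top_rgbd.csv")                  # assumed name
    face = read_index(root / "face_hd.csv")                  # assumed name
    for ts, frame in interaction:
        yield frame, top[nearest(top, ts)][1], face[nearest(face, ts)][1]


if __name__ == "__main__":
    for triplet in synchronized_triplets(Path("mhri/user01/session01")):
        print(triplet)
```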

To browse, download, or get more information on the dataset, please go here.

Publications:

Pablo Azagra, Javier Civera and Ana C. Murillo. Finding Regions of Interest from Multimodal Human-Robot Interactions. In Proc. of the 2017 International Workshop on Grounding Language Understanding (GLU), held with Interspeech, Stockholm, 2017, pp. 73-77. (paper)

Pablo Azagra, Florian Golemo, Yoan Mollard, Ana C. Murillo, Javier Civera. A Multimodal Dataset for Object Learning from Natural Human-Robot Interaction. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS17), Vancouver, Canada, 2017. (paper) (video) (dataset)

Pablo Azagra, Yoan Mollard, Florian Golemo, Ana Cristina Murillo, Manuel Lopes, Javier Civera. A Multimodal Human-Robot Interaction Dataset. Workshop on the Future of Interactive Learning Machines (FILM), held with NIPS 2016, Barcelona, Spain, 2016. (pdf) (poster) (dataset) (video)