Interactive Perception-Action-Learning for Modelling Objects

The Interactive Perception-Action-Learning for Modelling Objects (IPALM) project is a collaborative research effort that develops methods for the automatic digitisation of objects and their physical properties through exploratory manipulation. These methods will be used to build a large collection of object models required for realistic grasping and manipulation experiments in robotics.

Vision and language resources will provide priors and category-level models for object recognition and manipulation in instance modelling, based on a perception-action-learning loop. New knowledge from instances will then be used to refine the category-level models. Our grasping platforms include Baxter and the Barrett WAM, which have two-fingered grippers; the iCub and Barrett have multifingered hands with tactile sensors. A minimal sketch of this loop follows.
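
The loop described above can be pictured in code. The following is a minimal sketch only, not IPALM's implementation; all names (CategoryModel, perceive, explore) and the simple running-average update are hypothetical illustrations of how instance-level measurements from exploratory manipulation might refine a category-level prior.

```python
# Minimal sketch of a perception-action-learning loop (hypothetical,
# not the IPALM codebase). Each iteration: predict properties from the
# category prior, act to measure them, then refine the prior.

from dataclasses import dataclass, field


@dataclass
class CategoryModel:
    """Category-level prior over an object's physical properties."""
    name: str
    properties: dict = field(default_factory=dict)  # e.g. {"mass_kg": 0.3}

    def refine(self, observation: dict) -> None:
        # Fold instance-level measurements back into the category prior;
        # a simple running average stands in for a proper Bayesian update.
        for key, value in observation.items():
            prior = self.properties.get(key)
            self.properties[key] = value if prior is None else 0.5 * (prior + value)


def perceive(category: CategoryModel) -> dict:
    # Placeholder for perception: recognise the object instance and
    # predict its properties from the category-level prior.
    return dict(category.properties)


def explore(estimate: dict) -> dict:
    # Placeholder for action: execute an exploratory manipulation
    # (e.g. a lift or push) and return the measured properties.
    return {"mass_kg": 0.35}


def perception_action_learning_loop(category: CategoryModel, steps: int = 3) -> None:
    for _ in range(steps):
        estimate = perceive(category)      # perception: prior-driven estimate
        measurement = explore(estimate)    # action: exploratory manipulation
        category.refine(measurement)       # learning: update the category model


mug = CategoryModel("mug", {"mass_kg": 0.30})
perception_action_learning_loop(mug)
print(mug.properties)
```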

IPALM is a collaboration between five academic institutions: Imperial College London (ICL, UK), École des Ponts ParisTech (ENPC, France), Institut de Robòtica i Informàtica Industrial (IRI, Spain), Aalto University (AAL, Finland), and the Czech Technical University (CTU, Czech Republic).