Publications



 
 Extracting Bimanual Synergies with Reinforcement Learning 
 Kevin Sebastian Luck and Heni Ben Amor 
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) Proceedings (to appear)
Motor synergies are an important concept in human motor control. Through the co-activation of multiple muscles, complex motions involving many degrees of freedom can be generated. However, leveraging this concept in robotics typically entails using human data that may be incompatible with the kinematics of the robot. In this paper, our goal is to enable a robot to identify synergies for low-dimensional control using trial-and-error only. We discuss how synergies can be learned through latent space policy search and introduce an extension of the algorithm that reuses previously learned synergies for exploration. Applying the algorithm to a bimanual manipulation task on the Baxter robot shows that performance can be increased by reusing learned synergies within a task when learning to lift objects; however, reusing synergies between two tasks with different objects did not lead to a significant improvement.
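The core idea above can be illustrated with a minimal sketch. All dimensions and names below are hypothetical (they are not taken from the paper): a synergy matrix maps a low-dimensional latent command to joint commands, so exploration perturbs a few synergy weights rather than every joint independently, and a new learning run can start from previously learned synergies.

```python
import numpy as np

rng = np.random.default_rng(0)

n_joints = 14   # e.g., two 7-DoF arms of a Baxter-like robot (assumption)
n_latent = 3    # number of synergies (assumption)

# Synergy matrix W maps a low-dimensional latent command to joint commands.
W = rng.normal(size=(n_joints, n_latent))

def synergy_action(W, z):
    """Joint command produced by co-activating synergies with weights z."""
    return W @ z

# Exploration happens in the latent space: sample a few synergy weights
# instead of perturbing each of the 14 joints independently.
z = rng.normal(size=n_latent)
action = synergy_action(W, z)

# Synergy reuse: a later learning run starts exploration from the
# previously learned synergies instead of from random directions.
W_reused = W + 0.1 * rng.normal(size=W.shape)
```

The point of the sketch is only the dimensionality argument: the policy explores in 3 dimensions while still producing coordinated 14-dimensional joint commands.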
From the Lab to the Desert: Fast Prototyping and Learning of Robot Locomotion
 K. S. Luck, J. Campbell, A. Jansen, D. M. Aukes & H. Ben Amor
Robotics: Science and Systems (RSS 2017) Proceedings
In this paper, we discuss a methodology for fast prototyping of morphologies and controllers for robot locomotion. Going beyond simulation-based approaches, we argue that the form and function of a robot, as well as their interplay with real-world environmental conditions, are critical. Hence, fast design and learning cycles are necessary to adapt robot shape and behavior to their environment. To this end, we present a combination of laminate robot manufacturing and sample-efficient reinforcement learning. We leverage this methodology to conduct an extensive robot learning experiment. Inspired by locomotion in sea turtles, we design a low-cost crawler robot with variable, interchangeable fins. Learning is performed with different bio-inspired and original fin designs in both an indoor, artificial environment and a natural environment in the Arizona desert.
 
Bio-inspired Robot Design Considering Load-bearing and Kinematic Ontogeny of Chelonioidea Sea Turtles
A. Jansen, K. S. Luck, J. Campbell, H. Ben Amor & D. M. Aukes
Living Machines 2017 Proceedings
This work explores the physical implications of variation in fin shape and orientation that correspond to ontogenetic changes observed in sea turtles. Through the development of a bio-inspired robotic platform – CTurtle – we show that 1) these ontogenetic changes apparently occupy stable extrema for either load-bearing or high-velocity movement, and 2) mimicry of these variations in a robotic system confers greater load-bearing capacity and energy efficiency, at the expense of velocity (or vice versa).
 
Sparse Latent Space Policy Search
K. S. Luck, J. Pajarinen, E. Berger, V. Kyrki & H. Ben Amor
AAAI 2016 Proceedings
Sparse Latent Space Policy Search is a novel algorithm that combines Group Factor Analysis and Reinforcement Learning in one framework. Incorporating prior structural information, such as groups of joints belonging to legs and arms, leads to specialized principal components.
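The group structure described above can be sketched in a few lines. This is a toy illustration only, not the paper's algorithm: the joint grouping, robot dimensions, and loading values are all assumptions, chosen to show how group-sparse loadings yield components that are "specialized" to one limb.

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint indices grouped by limb (hypothetical 12-DoF robot).
groups = {"left_leg": slice(0, 3), "right_leg": slice(3, 6),
          "left_arm": slice(6, 9), "right_arm": slice(9, 12)}

n_joints, n_components = 12, 4
W = np.zeros((n_joints, n_components))

# Group-sparse structure: each component loads only on the joints of
# one group, so activating it moves a single limb.
for k, sl in enumerate(groups.values()):
    W[sl, k] = rng.normal(size=sl.stop - sl.start)

# Activating the first component moves only the left-leg joints.
z = np.array([1.0, 0.0, 0.0, 0.0])
action = W @ z
```

In the actual method the group-sparse loadings are learned jointly with the policy; here they are fixed by hand purely to make the sparsity pattern visible.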
Latent Space Policy Search for Robotics
K. S. Luck, G. Neumann, E. Berger, J. Peters & H. Ben Amor
IROS 2014 Proceedings
The novel algorithm 'Policy Search with Probabilistic Principal Component Exploration' (PePPEr) combines dimensionality reduction and reinforcement learning in one unifying framework. Instead of applying dimensionality reduction before learning or as a separate step, the policy search method itself incorporates the ability to uncover a latent space. Experiments were performed both on an artificial task and on a simulated NAO robot, with the goal of learning to lift the left leg without falling.
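The exploration idea behind this family of methods can be sketched with the probabilistic PCA generative model. This is a minimal sketch under assumptions (dimensions, noise level, and function names are all hypothetical, not from the paper): actions are drawn as a = W z + mu + noise, so exploration noise is correlated across joints through a low-dimensional latent space.

```python
import numpy as np

rng = np.random.default_rng(2)
n_joints, n_latent = 10, 2   # hypothetical dimensions

# Probabilistic PCA generative model: action = W z + mu + noise.
W = rng.normal(size=(n_joints, n_latent))
mu = np.zeros(n_joints)
sigma2 = 0.01                # small isotropic noise variance

def explore(W, mu, sigma2, rng):
    """Sample an exploratory action: the randomness lives mostly in the
    low-dimensional latent space, so joints are perturbed in a
    correlated, coordinated way."""
    z = rng.normal(size=W.shape[1])
    noise = rng.normal(scale=np.sqrt(sigma2), size=W.shape[0])
    return W @ z + mu + noise

actions = np.array([explore(W, mu, sigma2, rng) for _ in range(500)])

# The sampled actions concentrate near the 2-D subspace spanned by W:
# the top two singular values of the centered action matrix dominate.
s = np.linalg.svd(actions - actions.mean(axis=0), compute_uv=False)
```

The singular-value check at the end is the "uncover a latent space" intuition in reverse: data generated this way is itself low-dimensional, which is what the policy search exploits.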