Projects

Autonomy Infused Teleoperation for BCI Applications

Robot teleoperation systems introduce a unique set of challenges, including latency, intermittency, and asymmetry in control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems: the difficulty of decoding neural activity yields especially noisy and even erratic low-dimensional motion commands. In this project, we created a shared teleoperation framework that addresses these challenges with a focus on BCI-controlled manipulation. The teleoperation architecture combines computer vision, user intent inference, and human-robot arbitration in order to produce supervised autonomous manipulation. Through user intent inference, we avoid explicit goal selection by the user and instead reason over the observed end-effector trajectory in conjunction with a model of the environment. In contrast to other work, we introduced capture envelopes, similar to gravity fields, for smooth and continuous inference of the user's intended grasp. Adjustable assistance levels, realized through human-robot arbitration, allowed us to blend task requirements with the operator's capabilities while balancing their comfort and sense of control. A clinical validation of the advanced prosthetics control software showed significant performance improvements on rehabilitation benchmark tasks when the autonomy infused teleoperation framework assisted control of a robotic arm in conjunction with an intracortical BCI. The generality and extensibility of the architecture were successfully tested on quality-of-life tasks such as opening a door.
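The arbitration step can be illustrated with a minimal sketch. The linear blending rule, the function names, and the confidence cap below are illustrative assumptions, not the exact policy from the paper:

```python
import numpy as np

def arbitrate(user_cmd, autonomy_cmd, confidence, max_assistance=0.8):
    """Blend user and autonomous end-effector velocity commands.

    The blending weight grows with the system's confidence in its
    inferred goal, capped so the user always retains some control.
    """
    alpha = min(confidence, max_assistance)  # assistance level in [0, max_assistance]
    return alpha * np.asarray(autonomy_cmd) + (1.0 - alpha) * np.asarray(user_cmd)

# Example: a noisy user command is pulled toward the autonomous grasp motion.
blended = arbitrate(user_cmd=[0.2, -0.1, 0.0],
                    autonomy_cmd=[0.0, 0.1, -0.3],
                    confidence=0.5)
```

Capping the assistance level is one simple way to preserve the operator's sense of control even when the system is highly confident in its goal inference.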

Muelling, K.; Venkatraman, A.; Valois, J-S.; Downey, J.; Weiss, J.; Javdani, S.; Hebert, M.; Schwartz, A.B.; Collinger, J.L.; Bagnell, J.A. (2015). Autonomy Infused Teleoperation with Application to BCI Manipulation, Robotics: Science and Systems (RSS). Best Systems Paper Award.

Machine Learning for Complex Motor Skills

Autonomously learning new motor tasks from physical interaction is an important goal for both robotics and machine learning. However, when moving beyond basic skills, most monolithic machine learning approaches fail to scale; more complex skills require methods tailored to the domain of skill learning. In this project, we present a new framework that enables a robot to learn basic cooperative table tennis from demonstration and from interaction with a human player. To achieve this goal, we created an initial movement library through kinesthetic teach-in and imitation learning. The movements stored in the library can be selected and generalized using the proposed mixture of motor primitives algorithm. The result is a task policy composed of several motor primitives, each weighted by its ability to generate successful movements in the given task context. These weights are computed by a gating network and can be updated autonomously.
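A minimal sketch of the gating idea, assuming Gaussian kernels over the task context and scalar success weights; both are illustrative simplifications, and the names are not from the paper:

```python
import numpy as np

def gating_weights(context, primitive_contexts, deltas):
    """Responsibilities of each primitive for a new task context.

    Each primitive is weighted by a Gaussian kernel centered on the
    context in which it was demonstrated, scaled by delta (a success
    weight that can be updated autonomously from task outcomes).
    """
    dists = np.sum((primitive_contexts - context) ** 2, axis=1)
    w = deltas * np.exp(-0.5 * dists)
    return w / np.sum(w)

def mixture_policy(context, primitive_contexts, deltas, primitive_params):
    """Generalize: blend the primitives' parameters by their weights."""
    w = gating_weights(context, primitive_contexts, deltas)
    return w @ primitive_params
```

For a context close to one demonstrated movement, that primitive dominates the mixture; between demonstrations, the policy interpolates.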


Muelling, K.; Kober, J.; Kroemer, O.; Peters, J. (2013). Learning to Select and Generalize Striking Movements in Robot Table Tennis, International Journal of Robotics Research, 32, 3, pp. 263-279.

Navigation in Human Environments

Path planning in the presence of dynamic obstacles is challenging due to the added time dimension in the search space and the social interactions that can occur. For successful navigation among humans and other agents, the system needs to be able to infer the intent and possible pathways of other agents, account for the involved uncertainty, and model the joint collision behavior between itself and the other agents. This project aims to address the challenges that arise when navigating in dynamic and sometimes even crowded environments.
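As a toy illustration of the added time dimension, the sketch below checks a candidate path against a prediction of another agent's motion. The constant-velocity model and all names are illustrative stand-ins for the richer, interaction-aware predictions this project targets:

```python
import numpy as np

def collides(path, agent_pos, agent_vel, dt=0.1, radius=0.5):
    """Check a candidate path (one 2D waypoint per time step) against
    a constant-velocity prediction of another agent.

    A collision is declared when robot and agent are within `radius`
    of each other at the same time step, not merely at the same place.
    """
    for t, p in enumerate(path):
        predicted = np.asarray(agent_pos) + t * dt * np.asarray(agent_vel)
        if np.linalg.norm(np.asarray(p) - predicted) < radius:
            return True
    return False
```

Indexing predictions by time step is what distinguishes planning among dynamic obstacles from static path planning: two trajectories may cross in space yet be collision-free in space-time.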

Vemula, A.; Muelling, K.; Oh, J. (2017). Modeling Cooperative Navigation in Dense Human Crowds. Proceedings of the International Conference on Robotics and Automation (ICRA).

Vemula, A.; Muelling, K.; Oh, J. (2016). Path Planning in Dynamic Environments with Adaptive Dimensionality, Proceedings of the Ninth International Symposium on Combinatorial Search (SoCS-2016).

Extracting Strategic Information from Motor Games

Learning a complex task such as table tennis is a challenging problem for both robots and humans. Even after acquiring the necessary motor skills, a strategy is needed to choose where and how to return the ball to the opponent’s court in order to win the game. The goal of this project was to develop a Markov Decision Process (MDP) framework for table tennis, where the reward function models the goal of the task as well as the strategic information. We showed how this reward function can be discovered from demonstrations of table tennis matches using model-free inverse reinforcement learning. The resulting framework allowed us to identify basic elements on which the selection of striking movements is based. The approach was tested on data collected from players with different playing styles and under different playing conditions. The estimated reward function was able to capture expert-specific strategic information that sufficed to distinguish the expert among players with different skill levels as well as different playing styles.
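The reward structure can be sketched as a linear combination of strategic features of the game state; the specific features and weights below are illustrative, while the weight vector stands for what inverse reinforcement learning would recover from demonstrated matches:

```python
import numpy as np

def reward(state_features, weights):
    """Reward linear in strategic features of a returned ball
    (e.g. placement relative to the opponent, ball speed). The weight
    vector is the unknown that inverse RL estimates from demonstrations."""
    return float(np.dot(weights, state_features))

# Illustrative weights favoring fast balls placed far from the opponent.
w = np.array([1.5, 0.8])  # [distance_to_opponent, ball_speed]
r_far = reward(np.array([2.0, 1.0]), w)
r_near = reward(np.array([0.5, 1.0]), w)
```

Under such a linear model, comparing the rewards of observed versus alternative returns is what lets the framework separate players by strategy rather than by raw motor skill.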

Muelling, K.; Boularias, A.; Mohler, B.; Schoelkopf, B.; Peters, J. (2014). Learning Strategies in Table Tennis using Inverse Reinforcement Learning, Biological Cybernetics, 108, 5, pp. 603-619.

Modeling Motor Behavior

Playing table tennis is a difficult motor task that requires fast movements, accurate control, and adaptation to task parameters. Although human beings see and move more slowly than most robot systems, they significantly outperform all table tennis robots. One important reason for this superior performance is the way humans generate movements. In this project, we study human movements during a table tennis match and present a robot system that mimics human striking behavior. Our focus lies on generating hitting motions capable of adapting to variations in environmental conditions, such as changes in ball speed and position.
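One common way to realize such adaptive hitting motions, shown here as an illustrative sketch rather than the paper's exact method, is a fifth-order polynomial that joins the current racket state to a virtual hitting point derived from the ball prediction:

```python
import numpy as np

def striking_trajectory(x0, v0, x_hit, v_hit, T, steps=50):
    """Fifth-order polynomial (per coordinate) from the current state
    (x0, v0) to a virtual hitting point x_hit with desired racket
    velocity v_hit at hitting time T, with zero boundary accelerations.
    When the ball estimate changes, replanning with a new
    (x_hit, v_hit, T) adapts the stroke."""
    a0, a1, a2 = x0, v0, 0.0
    # Solve the remaining coefficients from the boundary conditions at t = T.
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([x_hit - (a0 + a1 * T),  # position constraint
                  v_hit - a1,             # velocity constraint
                  0.0])                   # zero final acceleration
    a3, a4, a5 = np.linalg.solve(A, b)
    ts = np.linspace(0.0, T, steps)
    return a0 + a1*ts + a2*ts**2 + a3*ts**3 + a4*ts**4 + a5*ts**5
```

Because the boundary conditions fully determine the polynomial, the same routine smoothly re-generates the stroke whenever the predicted ball position or speed changes.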

Muelling, K.; Kober, J.; Peters, J. (2011). A Biomimetic Approach to Robot Table Tennis, Adaptive Behavior Journal, 19, 5.