Katharina Muelling

I am a System Scientist in the Robotics Institute at Carnegie Mellon University. My research revolves around (semi-)autonomous robot manipulation. I am particularly interested in algorithms that enable robots to perform complex motor tasks by interacting with and learning from humans. My research interests include manipulation, planning, robot learning architectures, machine learning, human-robot interaction, motor control, and intelligent prosthetics. I joined CMU as a Project Scientist at the National Robotics Engineering Center (NREC) in 2013 after completing my Ph.D. studies at the Max Planck Institute for Intelligent Systems in the departments of Empirical Inference and Autonomous Motion.

Research Areas

Autonomy Infused Teleoperation

Teleoperating a robot arm to perform fine manipulation and everyday living tasks can be hard due to challenges including latency, intermittency, and asymmetry in control inputs. These problems are particularly relevant when the user is physically impaired and has to control the arm through input devices such as brain-computer interfaces, EMG, or 2D joysticks. Combining autonomous robot technologies with direct user control in a shared-control teleoperation framework can help overcome these challenges. Our autonomy-infused teleoperation architecture combines computer vision, user-intent inference, and human-robot arbitration to produce supervised autonomous manipulation. The goal driving this work is intuitive control that enables human-like manipulation from noisy and sometimes erratic input signals.
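A common realization of such human-robot arbitration is a confidence-weighted blend of the user's command and the autonomy's command. The sketch below is only an illustration under assumed names (`arbitrate`, `confidence`); the actual architecture is considerably more involved:

```python
import numpy as np

def arbitrate(user_cmd, autonomy_cmd, confidence):
    """Linearly blend user and autonomy velocity commands.

    confidence in [0, 1] is the inferred probability that the
    autonomy's goal prediction is correct; higher confidence
    shifts control toward the autonomous policy.
    """
    alpha = float(np.clip(confidence, 0.0, 1.0))  # arbitration weight
    return ((1.0 - alpha) * np.asarray(user_cmd, dtype=float)
            + alpha * np.asarray(autonomy_cmd, dtype=float))

# Noisy 2-D joystick input vs. autonomy pulling toward a grasp pose.
blended = arbitrate([0.2, -0.1], [0.5, 0.0], confidence=0.6)
```

With low confidence the user's raw input dominates; as the intent estimate sharpens, the arbitration weight shifts toward the autonomous policy, smoothing out erratic input near the inferred goal.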

Motor Skill Learning

Robotic systems that are able to perform various tasks in human-inhabited and unstructured environments require robust movement generation and manipulation skills that compensate for uncertainties and disturbances in the environment. Such systems need to autonomously adapt to a highly dynamic environment while simultaneously accomplishing the task at hand. I am interested in developing machine learning algorithms for learning motor skills that can circumvent the limits of analytically engineered solutions. A fundamental problem for the development of robot learning methods is the necessity to achieve complex behaviors with a feasible amount of training data. Human demonstrations can be used to initialize robot learning approaches and significantly reduce learning time. Furthermore, demonstrations provide a natural way for humans to teach robots and allow robots to acquire human-like behavior, which is beneficial for human-robot interaction.
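As a minimal, hypothetical illustration of initializing a policy from demonstrations (a far simpler setting than the motor skill representations used in practice), a linear policy can be fit to demonstrated state-action pairs by ridge regression:

```python
import numpy as np

def clone_policy(states, actions, reg=1e-3):
    """Fit a linear policy a = s @ W to demonstration pairs via
    ridge regression -- a minimal form of learning from
    demonstration that can initialize further learning."""
    S = np.asarray(states)   # (N, state_dim)
    A = np.asarray(actions)  # (N, action_dim)
    return np.linalg.solve(S.T @ S + reg * np.eye(S.shape[1]), S.T @ A)

# Demonstrations generated by a known teacher a = -2 s (made up).
rng = np.random.default_rng(0)
S_demo = rng.normal(size=(100, 3))
A_demo = -2.0 * S_demo
W = clone_policy(S_demo, A_demo)  # recovers roughly -2 * identity
```

The fitted policy reproduces the demonstrated behavior from the start, so any subsequent trial-and-error refinement begins near a sensible solution instead of from scratch.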

Learning Higher-Level Behavior

A motor behavior is always directed towards achieving a specific goal. In table tennis, this goal is winning the game. But what is the best way to achieve it? While symbolic planning has been successfully applied in many classical AI domains, it fails to scale to real robot behaviors, especially when human interactions are involved. This is due to its limited ability to model the uncertainty of actions, to address geometric and kinematic constraints, and to capture human behavior. I am interested in modeling and learning such higher-level decision processes to enable efficient, human-like robot behavior and problem solving.

Social and Interactive Motion Planning

Creating autonomous and intelligent systems that move off the factory floor into human-inhabited environments bears many challenges. While the state of the art in robot motion planning has made tremendous progress in high-dimensional and even dynamic environments, it is still hard for robots to navigate through a crowded environment and to interact with humans in a safe and socially acceptable manner. To work with and close to humans, robot systems need to (i) infer the intent of the human and integrate it efficiently into the planning process, (ii) behave in a manner humans can understand, and (iii) interact with humans in a socially appropriate way.
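Point (i), intent inference, is often posed as Bayesian inference over a set of candidate human goals. A minimal sketch under a Boltzmann ("noisily rational") observation model follows; the function name and cost values are made up for illustration:

```python
import numpy as np

def update_goal_belief(belief, obs_cost, beta=1.0):
    """Bayesian update of a belief over candidate human goals.

    obs_cost[g] scores how inefficient the observed human motion is
    with respect to goal g; the Boltzmann model assigns higher
    likelihood to goals the motion approaches efficiently.
    """
    likelihood = np.exp(-beta * np.asarray(obs_cost, dtype=float))
    posterior = np.asarray(belief, dtype=float) * likelihood
    return posterior / posterior.sum()

# Two candidate goals; the observed motion is efficient for goal 0 only.
posterior = update_goal_belief([0.5, 0.5], obs_cost=[0.0, 10.0])
```

Feeding such a posterior into the planner lets the robot, for example, yield early along the path the human most likely intends while keeping its own motion legible.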