Katharina Muelling

I am a System Scientist in the Robotics Institute at Carnegie Mellon University. My work revolves around creating (semi-)autonomous robot manipulation systems. I am particularly interested in algorithms that enable robots to perform complex motor tasks by interacting with and learning from humans. My research interests include manipulation, planning, robot learning architectures, machine learning, human-robot interaction, motor control, and intelligent prosthetics. I joined CMU as a Project Scientist at the National Robotics Engineering Center (NREC) in 2013 after completing my Ph.D. studies at the Max Planck Institute for Intelligent Systems in the departments of Empirical Inference and Autonomous Motion.


Research Areas

Autonomy Infused Teleoperation

Teleoperating a robot arm to perform fine manipulation and everyday living tasks can be hard due to challenges such as latency, intermittency, and asymmetry in the control inputs. These problems are particularly relevant when the user is physically impaired and has to control the arm through input devices such as brain-computer interfaces, EMG, or 2D joysticks. Combining autonomous robot technologies with direct user control in a shared-control teleoperation framework can help overcome these challenges. By combining computer vision, user-intent inference, and human-robot arbitration, we strive to create intuitive control that enables human-like manipulation behavior. The research questions that we address in this project include intent recognition, motor skill learning, reinforcement learning, user adaptation, and autonomous mode switching.
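As a rough illustration of the arbitration idea, and not the actual system, the sketch below blends the user's commanded end-effector velocity with an assistance command derived from the inferred goal, weighting the autonomy by the confidence of the intent inference. The function names, the linear blending rule, and the cap on the arbitration weight are assumptions made for illustration.

    import numpy as np

    def arbitrate(user_cmd, assist_cmd, confidence, alpha_max=0.8):
        """Blend the user's velocity command with the autonomous assistance.

        The arbitration weight grows with the confidence of the intent-inference
        module but is capped (alpha_max, an assumed value) so the user always
        retains some direct control.
        """
        alpha = alpha_max * np.clip(confidence, 0.0, 1.0)
        return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(assist_cmd)

    # Hypothetical example: the user pushes a 2D joystick roughly toward an object,
    # while the assistance policy steers toward the grasp pose it believes is intended.
    user_cmd = [0.05, 0.00, 0.00]     # m/s end-effector velocity from the joystick mapping
    assist_cmd = [0.04, 0.02, -0.01]  # m/s command from the autonomous reaching policy
    print(arbitrate(user_cmd, assist_cmd, confidence=0.7))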


Motor Skill Learning

Robotic systems that are able to perform various tasks in human-inhabited and unstructured environments require robust movement generation and manipulation skills that compensate for uncertainties and disturbances in the environment. Such systems need to autonomously adapt to a highly dynamic environment while simultaneously accomplishing the task at hand. I am interested in developing machine learning algorithms for learning motor skills that can circumvent the limits of analytically engineered solutions. A fundamental problem for the development of robot learning methods is the necessity to achieve complex behaviors with a feasible amount of training data. Human demonstrations can be used to initialize robot learning approaches and reduce the learning time significantly. Furthermore, demonstrations provide a natural way for humans to teach robots motor skills and allow robots to acquire human-like behavior, which is beneficial for human-robot interaction.
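One common way to encode a demonstrated motor skill, shown here purely as a minimal sketch rather than as the specific method of this project, is a dynamic movement primitive: a stable point attractor whose forcing term is fit to a single demonstration and can then be replayed toward a new goal. All gains, the number of basis functions, and the synthetic demonstration below are illustrative assumptions.

    import numpy as np

    # Minimal 1-D dynamic movement primitive (DMP): fit the forcing term to one
    # demonstration, then roll the system out toward a new goal.
    alpha, beta, alpha_x, n_basis, tau = 25.0, 6.25, 3.0, 20, 1.0
    T, dt = 1.0, 0.01
    t = np.arange(0.0, T, dt)

    # Synthetic stand-in demonstration: a minimum-jerk-like reach from 0 to 1.
    s = t / T
    y_demo = 10 * s**3 - 15 * s**4 + 6 * s**5
    yd_demo = np.gradient(y_demo, dt)
    ydd_demo = np.gradient(yd_demo, dt)
    y0, g = y_demo[0], y_demo[-1]

    # Canonical system and Gaussian basis functions over the phase variable x.
    x = np.exp(-alpha_x * t / tau)
    centers = np.exp(-alpha_x * np.linspace(0.0, T, n_basis) / tau)
    widths = 1.0 / np.diff(centers) ** 2
    widths = np.append(widths, widths[-1])
    psi = np.exp(-widths * (x[:, None] - centers) ** 2)

    # Forcing term implied by the demonstration, fit per basis by weighted regression.
    f_target = tau**2 * ydd_demo - alpha * (beta * (g - y_demo) - tau * yd_demo)
    w = np.array([(psi[:, i] * x) @ f_target / ((psi[:, i] * x) @ x + 1e-10)
                  for i in range(n_basis)])

    # Reproduce the learned skill toward a new goal.
    g_new, y, yd = 1.5, y0, 0.0
    for xi, p in zip(x, psi):
        f = (p @ w) / (p.sum() + 1e-10) * xi
        ydd = (alpha * (beta * (g_new - y) - tau * yd) + f) / tau**2
        yd += ydd * dt
        y += yd * dt
    print("final position:", y)  # converges toward the new goal g_new

In practice the demonstration would come from kinesthetic teaching or motion capture rather than a synthetic curve, and the learned parameters could serve as the initialization that a reinforcement learning method then refines.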


Learning Higher-Level Behavior

A motor behavior is always directed towards achieving a specific goal. But what is the best way to achieve this goal? While symbolic planning has been successfully applied in many classical AI domains, it fails to scale to real robot behaviors, especially when human interactions are involved. This is due to its limited ability to model the uncertainty of actions, to handle geometric and kinematic constraints, and to model human behavior. I am interested in modeling and learning such higher-level decision processes to enable efficient and human-like robot behavior and problem-solving skills. In particular, I focus on: (1) developing a forward model to understand environmental changes caused by actions, (2) predicting if and when an action can be applied, and (3) developing a hierarchical framework for task planning to facilitate the learning of complex tasks.
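Very loosely, and with entirely hypothetical names, the components above can be sketched as high-level actions that carry a precondition predictor and a forward model of their effects, over which a simple planner searches. In the project such models would be learned from data; the toy domain below hand-codes them only to show how the pieces fit together.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Optional

    State = Dict[str, bool]

    @dataclass
    class Action:
        """Hypothetical skeleton of a high-level action model: `applicable` plays
        the role of a learned precondition predictor, `forward` the role of a
        learned forward model of the action's effects on the environment."""
        name: str
        applicable: Callable[[State], bool]
        forward: Callable[[State], State]

    def plan(state: State, goal: Callable[[State], bool],
             actions: List[Action], depth: int = 5) -> Optional[List[str]]:
        """Tiny depth-bounded forward search over the action models."""
        if goal(state):
            return []
        if depth == 0:
            return None
        for a in actions:
            if a.applicable(state):
                rest = plan(a.forward(state), goal, actions, depth - 1)
                if rest is not None:
                    return [a.name] + rest
        return None

    # Illustrative toy domain: grasp an object, then place it on the table.
    grasp = Action("grasp",
                   applicable=lambda s: not s["holding"],
                   forward=lambda s: {**s, "holding": True})
    place = Action("place",
                   applicable=lambda s: s["holding"],
                   forward=lambda s: {**s, "holding": False, "on_table": True})
    print(plan({"holding": False, "on_table": False},
               goal=lambda s: s["on_table"], actions=[grasp, place]))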

Social and Interactive Motion Planning

Creating autonomous and intelligent systems that are able to move off the factory floor into human-inhabited environments bears many challenges. While the state of the art in robot motion planning has made tremendous progress in planning in high-dimensional and even dynamic environments, it is still hard for robots to navigate through a crowded environment and to interact with humans in a safe and socially acceptable manner. To enable robot systems to work with and close to humans, they need to (i) infer the intent of the human and integrate it efficiently into the planning process, (ii) behave in a human-understandable manner, and (iii) interact with the human in a social manner. This research project aims to shed light on questions such as: How do we adapt our movements with respect to others? Which humans do we pay attention to when adapting our movements? And how can we create socially acceptable behavior when navigating in crowds?
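As a very coarse sketch of how predicted human motion can enter the planning process, the code below scores candidate robot paths by their length plus a proximity penalty against predicted human trajectories and picks the cheapest one. The weights, the quadratic penalty, and the toy trajectories are assumptions for illustration, not the model used in the project.

    import numpy as np

    def social_cost(robot_path, predicted_human_paths, safe_dist=1.0, w_social=2.0):
        """Penalize a candidate robot path for approaching predicted human
        trajectories closer than safe_dist (paths sampled at the same time steps)."""
        cost = 0.0
        for human_path in predicted_human_paths:
            d = np.linalg.norm(np.asarray(robot_path) - np.asarray(human_path), axis=1)
            cost += w_social * np.sum(np.maximum(0.0, safe_dist - d) ** 2)
        return cost

    def path_length(path):
        p = np.asarray(path)
        return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

    def select_path(candidates, predicted_human_paths):
        """Choose the candidate (e.g., from a sampling-based planner) that trades
        path length against the social proximity penalty."""
        return min(candidates,
                   key=lambda p: path_length(p) + social_cost(p, predicted_human_paths))

    # Toy scenario: a human crosses the corridor; the detour is longer but keeps distance.
    t = np.linspace(0.0, 1.0, 10)[:, None]
    human = np.hstack([2.0 - 2.0 * t, 0.2 + 0.0 * t])
    direct = np.hstack([2.0 * t, 0.0 * t])
    detour = np.hstack([2.0 * t, 1.0 - np.abs(1.0 - 2.0 * t)])
    best = select_path([direct, detour], [human])
    print("chose detour" if best is detour else "chose direct")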