We develop a new deep reinforcement learning framework that achieves safe, sample-efficient control of real-world robots.
We propose a method for designing search behaviors that enable fast, efficient recognition of object shapes and features from tactile information.
We develop a framework for learning physical assistive strategies for exoskeleton robots solely from data collected from physical interactions between humans and robots.
When a human and a robot perform a cooperative task, the robot must generate behaviors online that adapt to the human's actions. We develop a learning method for behavior primitives that provides this capability.
We address the problem of planning food serving strategies using imitation learning from expert demonstrations.
We develop a reinforcement learning system for the challenging task of autonomously controlling a real boat.
This research develops a framework that autonomously optimizes, through trial and error, the control policy of a waste crane installed in a waste incineration plant. We propose Multi-Task Robust Bayesian Optimization (MTRBO) for tuning the parameters of a parameterized control policy.
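The MTRBO details are specific to this work, but the underlying mechanism can be sketched as a plain (single-task, non-robust) Bayesian optimization loop over a one-dimensional policy parameter. The toy `rollout_return` objective, the GP-UCB acquisition, and all numerical settings below are illustrative assumptions, not the actual crane setup:

```python
import numpy as np

def rollout_return(theta):
    # Hypothetical stand-in for an episode return of the crane policy;
    # the (unknown-to-the-optimizer) best parameter is theta* = 1.5.
    return -(theta - 1.5) ** 2

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D parameter sets.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # GP posterior mean/variance of the return at query parameters Xq.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    alpha = np.linalg.solve(K, y)
    V = np.linalg.solve(K, Ks)
    mu = Ks.T @ alpha
    var = 1.0 - np.sum(Ks * V, axis=0)   # k(x, x) = 1 for this kernel
    return mu, np.maximum(var, 1e-12)

def bayes_opt(n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 3.0, size=3)          # a few initial random rollouts
    y = np.array([rollout_return(t) for t in X])
    grid = np.linspace(0.0, 3.0, 200)          # candidate policy parameters
    for _ in range(n_iters):
        mu, var = gp_posterior(X, y, grid)
        ucb = mu + 2.0 * np.sqrt(var)          # UCB acquisition function
        theta = grid[np.argmax(ucb)]           # next parameter to try
        X = np.append(X, theta)
        y = np.append(y, rollout_return(theta))
    return X[np.argmax(y)]                     # best parameter found so far

best = bayes_opt()
```

Each "rollout" here is a single function evaluation; in the plant setting it would correspond to one crane trial, which is why a sample-efficient surrogate-based optimizer is attractive.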
We develop contact-safe Model-based Reinforcement Learning (MBRL) for robot applications, which keeps the robot's behaviors contact-safe throughout the learning process.
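As a rough illustration of the idea (not the actual MBRL algorithm), the sketch below plans with a dynamics model and simply discards candidate actions whose predicted contact force exceeds a limit. The 1-D point-mass task, the wall/stiffness contact model, and the greedy planner are all hypothetical stand-ins:

```python
import numpy as np

# Toy 1-D reaching task: state = position, goal at 1.0, rigid wall at 1.05.
# A learned model f(s, a) would predict the next state; for brevity we
# substitute the true dynamics here.
def model(s, a, dt=0.1):
    return s + a * dt

def predicted_contact_force(s_next, wall=1.05, stiffness=100.0):
    # Penetration depth times stiffness approximates the contact force.
    return stiffness * max(0.0, s_next - wall)

def safe_greedy_action(s, goal=1.0, force_limit=0.5):
    candidates = np.linspace(-1.0, 1.0, 41)    # sampled action set
    best_a, best_cost = 0.0, np.inf
    for a in candidates:
        s_next = model(s, a)
        if predicted_contact_force(s_next) > force_limit:
            continue                           # discard unsafe candidates
        cost = (s_next - goal) ** 2
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

s = 0.0
for _ in range(30):
    s = model(s, safe_greedy_action(s))
```

The key point of the sketch is that safety is enforced inside the planner on *predicted* outcomes, so unsafe actions are never executed even while the policy is still being learned.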
We develop an action planning and object recognition method for a robot to efficiently find a target object in a cluttered environment.
We study a novel policy-search reinforcement learning algorithm, based on Gaussian processes, that can handle multimodality in control policies.
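Why multimodality matters for policy learning can be seen in a minimal sketch. The bimodal action data and the tiny k-means below are illustrative assumptions, not the GP-based algorithm itself: a unimodal regressor averages two valid action modes into an invalid in-between action, while a two-component mixture keeps both modes separate:

```python
import numpy as np

# At the same state, demonstrations contain two valid actions
# (e.g. steer left or steer right around an obstacle).
actions = np.array([-1.0, -1.1, -0.9, 1.0, 1.1, 0.9])

# A unimodal (single-regressor) policy fits the mean action, which is
# near 0 -- an action belonging to neither mode.
unimodal = actions.mean()

# A two-component mixture recovers both modes; a tiny k-means on the
# actions stands in for the mixture-fitting step.
centers = np.array([actions.min(), actions.max()])
for _ in range(10):
    assign = np.argmin(np.abs(actions[:, None] - centers[None, :]), axis=1)
    centers = np.array([actions[assign == k].mean() for k in range(2)])
```

A multimodal policy can then execute either recovered mode, whereas the averaged action would drive the robot straight into the obstacle.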
We study the first imitation learning framework that incorporates Bayesian variational inference to learn flexible, nonparametric multi-action policies. The framework simultaneously robustifies the policies against sources of error by introducing and optimizing disturbances that create a richer demonstration dataset.
We develop a deep reinforcement learning framework that balances task performance against communication savings in multi-agent cooperative tasks.