Research

On this page, I summarize several projects that I have conducted at SIIT, the ARAI laboratory, and the Takemura laboratory. Robotics and its practical applications are the goals of my study.

Haptic Display Using a Drone


Encountered-type haptic displays recreate realistic haptic sensations by producing physical surfaces on demand for a user to explore directly with his or her bare hands. However, conventional encountered-type devices are fixed in the environment, so their working volume is limited. To address this limitation, we investigate the potential of an unmanned aerial vehicle (drone) as a flying motion base for a non-grounded encountered-type haptic device. As a lightweight end-effector, we use a piece of paper hung from the drone to represent the reaction force. Although the paper is limp, its shape is held stable by the strong airflow induced by the drone itself. We conducted two experiments to evaluate the prototype system. The first evaluated the reaction force presentation by measuring the contact pressure between the user and the end-effector. The second evaluated the usefulness of the system through a user study in which participants were asked to draw a straight line on a virtual wall represented by the device.
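
To make the interaction concrete, here is a minimal sketch of how the drone could be commanded to keep its paper end-effector on the virtual wall where the user's hand is about to touch. The `tracker`/`drone` interface and all constants are hypothetical placeholders, not the actual system's API.

```python
# Minimal sketch (hypothetical API): position the drone so its hanging-paper
# end-effector coincides with the virtual wall at the user's contact point.
import numpy as np

WALL_X = 1.0          # x-position of the virtual wall (m), assumed
PAPER_OFFSET = 0.35   # vertical offset from drone body to paper sheet (m), assumed

def end_effector_target(hand_pos: np.ndarray) -> np.ndarray:
    """Project the tracked hand position onto the virtual wall plane."""
    target = hand_pos.copy()
    target[0] = WALL_X              # clamp to the wall plane
    return target

def drone_setpoint(hand_pos: np.ndarray) -> np.ndarray:
    """Drone body setpoint: hover above the contact point so the
    hanging paper covers it."""
    target = end_effector_target(hand_pos)
    target[2] += PAPER_OFFSET       # body flies above the paper sheet
    return target

# Control loop (placeholder interfaces):
# while flying:
#     hand = tracker.hand_position()        # e.g., from motion capture
#     drone.goto(drone_setpoint(hand))      # position-controlled flight
```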

Hybrid Walking and Flying Robot for Bridge Inspection


We propose a novel design and concept for a hybrid robot, with integrated walking and flying capability, for steel bridge inspection and maintenance. The proposed design allows the robot to access a 3D structure without time-consuming maneuvering. To stabilize the robot in 3D space, we present a vibration control scheme based on a vibrator that compensates for the vibration generated by the joint actuators while the robot is flying. We present a preliminary experiment on the robot's obstacle avoidance, along with a simulation of the flying performance of the hybrid robot when the vibration is compensated using LQG control.
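
As a rough illustration of the LQG idea, the sketch below computes the regulator and estimator gains for a toy one-degree-of-freedom mass-spring-damper vibration model. The matrices and noise levels are assumptions chosen for illustration, not the robot's identified dynamics.

```python
# A minimal LQG sketch for a 1-DOF vibration model (illustrative values).
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1.0, 0.5, 40.0                      # mass, damping, stiffness (assumed)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # state: [displacement, velocity]
B = np.array([[0.0], [1.0 / m]])              # force input from the vibrator
C = np.array([[1.0, 0.0]])                    # we measure displacement only

# LQR: penalize displacement heavily, control effort lightly
Q, R = np.diag([100.0, 1.0]), np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)               # state-feedback gain, u = -K x

# Kalman filter: duality with LQR gives the observer gain
W, V = np.diag([1e-3, 1e-2]), np.array([[1e-4]])  # process / sensor noise (assumed)
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)                # estimator gain

# The compensator runs: x_hat' = A x_hat + B u + L (y - C x_hat),
# with u = -K x_hat driving the counter-vibration actuator.
print("LQR gain K:", K, "\nKalman gain L:", L.ravel())
```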

Adaptive View for Drone Teleoperation

Drone navigation in complex environments poses many problems for teleoperators. Especially in 3D structures such as buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel teleoperation interface that provides the user with environment-adaptive viewpoints, automatically configured to improve safety and smooth operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point cloud information into account to adjust the user's viewpoint for maximum visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera, and we use the resulting models, as well as simulations, in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
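
One simple way to realize "maximize visibility" is to score candidate virtual-camera poses by how much of the SLAM point cloud blocks their line of sight to the robot. The sketch below is such a simplified scoring pass under that assumption; the candidate set and occlusion test are placeholders, not the interface's actual algorithm.

```python
# Sketch: pick the least-occluded virtual viewpoint around the robot.
import numpy as np

def occluders(cloud: np.ndarray, cam: np.ndarray, robot: np.ndarray,
              radius: float = 0.2) -> int:
    """Count cloud points lying within `radius` of the cam->robot ray."""
    ray = robot - cam
    length = np.linalg.norm(ray)
    d = ray / length
    rel = cloud - cam
    t = np.clip(rel @ d, 0.0, length)               # projection onto the ray
    dist = np.linalg.norm(rel - np.outer(t, d), axis=1)
    return int(np.sum(dist < radius))

def best_viewpoint(cloud: np.ndarray, robot: np.ndarray,
                   standoff: float = 2.0, n: int = 16) -> np.ndarray:
    """Sample viewpoints on a circle around the robot; keep the clearest."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    cands = robot + standoff * np.c_[np.cos(angles),
                                     np.sin(angles),
                                     np.full(n, 0.5)]  # slightly above the robot
    return min(cands, key=lambda c: occluders(cloud, c, robot))
```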

Cloud Robotics


This project was submitted by the RoboSamurai team to the Cloud Robotics Hackathon 2013. The aim of the competition was to create useful robotic applications that use natural and social human interactions and robot-to-robot collaboration through cloud computing and web services. Our project consisted of one robot that monitors human activity and shares the information to the cloud. In particular, the robot monitored when a person was doing exercise, and this activity was shared to the cloud as a graph. Another robot connected to the cloud and used the information to encourage the person. Comment from the judges: "Team RoboSamurai truly exploited the full potential of MyRobots.com by a clever usage of the platform. Their application solves a real-life problem, is well implemented, is presented in an engaging way and, most importantly, it features robot-to-robot collaboration, robot-to-human collaboration, and monitoring. By using the specific capabilities of several robots, the team is able to track a human working out and encourage him along the way. For this unique application, they win the first prize of $1500!"
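
The underlying pattern is a simple cloud-mediated publish/consume loop. The sketch below shows that pattern with a placeholder endpoint; it is not the actual MyRobots.com API, and the URL and field names are assumptions.

```python
# Sketch of the publish/consume pattern (placeholder endpoint, not MyRobots.com).
import requests

CHANNEL_URL = "https://cloud.example.com/channels/robosamurai"  # placeholder

def publish_activity(reps: int) -> None:
    """Monitoring robot: push the latest exercise count to the cloud."""
    requests.post(CHANNEL_URL, json={"exercise_reps": reps}, timeout=5)

def fetch_latest() -> int:
    """Companion robot: read the most recent count back from the cloud."""
    data = requests.get(CHANNEL_URL + "/latest", timeout=5).json()
    return data.get("exercise_reps", 0)

def encourage(reps: int) -> str:
    """Companion robot turns the shared data into feedback for the user."""
    return "Keep going!" if reps < 20 else "Great workout, take a break!"
```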

Social interactive robot navigation


Robot navigation in a human environment is challenging because humans move according to many factors, such as social rules and the movements of others. Introducing a robot into a human environment creates many situations: some humans want to interact with the robot, while others expect it to avoid a collision. Robot navigation models have to take these factors into consideration. This work presents the Social Navigation Model (SNM), a unified navigation and interaction model that allows a robot to navigate in a human environment and respond according to human intentions, in particular when a human encounters the robot and wants to avoid it, maintain his or her course (unavoid), or approach (interact with) it. The proposed model is based on an analysis of human motion and behavior, especially face orientation and overlapping personal space, in preliminary human-human interaction experiments. Avoiding, unavoiding, and approaching trajectories of humans are classified based on face orientation and the path predicted by a modified social force model. Our experimental evidence demonstrates that the robot can adapt its motion by preserving personal distance from passers-by, and can interact with persons who want to interact with it, with a success rate of 90%. Simulation results show that a robot navigated by the proposed method can operate in a populated environment, significantly reducing the average overlap of personal space by 33.2% and the average time humans need to reach their goals by 15.7% compared to the original social force model. This work contributes to the future development of a human-robot socialization environment.
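
For readers unfamiliar with social force navigation, the sketch below shows the classic goal-attraction-plus-pedestrian-repulsion force, with an illustrative face-orientation weight in the spirit of the SNM. The constants and the weighting function are assumptions for illustration, not the published model.

```python
# A minimal social-force sketch: goal attraction plus weighted repulsion,
# where a pedestrian facing the robot (likely wanting to interact) repels less.
import numpy as np

TAU, A_REP, B_REP = 0.5, 2.0, 0.8   # relaxation time, repulsion strength/range (assumed)

def social_force(pos, vel, goal, v_des, others, faces):
    """Steering force on the robot.

    others: list of pedestrian positions; faces: their heading unit vectors.
    """
    e_goal = (goal - pos) / np.linalg.norm(goal - pos)
    force = (v_des * e_goal - vel) / TAU            # goal-directed term
    for p, face in zip(others, faces):
        diff = pos - p
        dist = np.linalg.norm(diff)
        facing = max(0.0, face @ (-diff) / dist)    # 1 if looking at the robot
        weight = 1.0 - 0.5 * facing                 # interactors repel less (assumed)
        force += weight * A_REP * np.exp(-dist / B_REP) * diff / dist
    return force
```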