Research

On this page, I summarize several projects that I have conducted at SIIT, Osaka University, ARAI laboratory, and Takemura laboratory. Robotics and its practical applications are the focus of my research.

Hybrid Walking and Flying Robot for Bridge Inspection


We propose a novel design and concept for a hybrid robot (integrating walking and flying capability) for steel bridge inspection and maintenance. Our proposed design allows the robot to access a 3D structure without time-consuming maneuvers.

To stabilize the robot in 3D space, we present a vibration control method based on a vibrator that compensates for the vibration generated by the joint actuators while the robot is flying. We present a preliminary experiment on how our robot performs obstacle avoidance, along with a simulation of the flying performance of the hybrid robot when the vibration is compensated using LQG control.
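
As a rough illustration of LQG-based vibration compensation (not the controller implemented on our robot), the sketch below combines an LQR state-feedback gain with a steady-state Kalman filter for a single lightly damped vibration mode. The plant matrices, noise covariances, and cost weights are placeholder assumptions.

```python
# Minimal LQG sketch: LQR state feedback plus a steady-state Kalman filter
# for one vibration mode. All numerical values are illustrative placeholders,
# not identified from the real robot.
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time model x[k+1] = A x[k] + B u[k] + w,  y[k] = C x[k] + v
dt = 0.01
A = np.array([[1.0, dt], [-0.5 * dt, 1.0 - 0.02 * dt]])  # lightly damped mode
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])

# LQR gain: minimize sum(x'Qx + u'Ru), u = -K x_hat
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Steady-state Kalman gain for assumed noise covariances W (process), V (measurement)
W, V = 1e-4 * np.eye(2), np.array([[1e-3]])
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

def lqg_step(x_hat, y, u_prev):
    """One estimator/controller update: predict, correct, then feed back."""
    x_pred = A @ x_hat + B @ u_prev
    x_hat = x_pred + L @ (y - C @ x_pred)
    u = -K @ x_hat
    return x_hat, u
```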

  • Photchara Ratsamee, Pakpoom Kriengkomol, Tatsuo Arai, Kazuto Kamiyama, Yasushi Mae, Kiyoshi Kiyokawa, Tomohiro Mashita, Yuki Uranishi, and Haruo Takemura. "A hybrid flying and walking robot for steel bridge inspection." In Safety, Security, and Rescue Robotics (SSRR), 2016 IEEE International Symposium on, pp. 62-67. IEEE, 2016.

Adaptive View for Drone Teleoperation

Drone navigation in complex environments poses many problems for teleoperators. Especially in 3D structures such as buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel teleoperation interface that provides the user with environment-adaptive viewpoints that are automatically configured to improve safety and smooth user operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point cloud information into account to modify the user's viewpoint and maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM)-based reconstruction with an omnidirectional camera, and we use the resulting models, as well as simulations, in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
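
The sketch below illustrates one way such an adaptive viewpoint could be scored: sample candidate virtual-camera poses around the drone and trade off obstacle clearance against alignment with the robot's heading. The sampling pattern, scoring terms, and weights are hypothetical and not the exact method used in the paper.

```python
# Hypothetical viewpoint-selection sketch: score candidate virtual-camera poses
# around the drone using clearance to the point cloud and alignment with the
# drone's heading, then keep the best one.
import numpy as np

def select_viewpoint(robot_pos, robot_heading, cloud, radius=2.0, n=16):
    """robot_pos: (3,), robot_heading: unit (3,), cloud: (N, 3) obstacle points."""
    best_score, best_pose = -np.inf, None
    for k in range(n):
        theta = 2.0 * np.pi * k / n
        # Candidate camera position on a ring around (and slightly above) the robot
        offset = radius * np.array([np.cos(theta), np.sin(theta), 0.3])
        cam_pos = robot_pos + offset
        look_dir = robot_pos - cam_pos
        look_dir /= np.linalg.norm(look_dir)

        # Clearance term: distance from the candidate camera to the nearest obstacle
        clearance = np.min(np.linalg.norm(cloud - cam_pos, axis=1)) if len(cloud) else radius

        # Alignment term: prefer views roughly behind the robot, looking along its heading
        alignment = float(look_dir @ robot_heading)

        score = clearance + 0.5 * alignment
        if score > best_score:
            best_score, best_pose = score, (cam_pos, look_dir)
    return best_pose
```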

  • John Thomason, Photchara Ratsamee, Kiyoshi Kiyokawa, Pakpoom Kriengkomol, Jason Orlosky, Tomohiro Mashita, Yuki Uranishi, and Haruo Takemura. "Adaptive View Management for Drone Teleoperation in Complex 3D Structures." In Proceedings of the 22nd International Conference on Intelligent User Interfaces, pp. 419-426. ACM, 2017.

Illumination Invariant Camera Localization

Accurate camera localization is an essential part of tracking systems. However, localization results are greatly affected by illumination. Including data collected under various lighting conditions can improve the robustness of the localization algorithm to lighting variation, but collecting such data is very tedious and time-consuming. By using synthesized images, it is possible to easily accumulate a large variety of views under varying illumination and weather conditions. Despite continuously improving processing power and rendering algorithms, however, synthesized images do not perfectly match real images of the same scene; this gap between real and synthesized images also affects the accuracy of camera localization.

To reduce the impact of this gap, we introduce the "REal-to-Synthetic Transform (REST)," an autoencoder-like network that converts real features to their synthetic counterparts. The converted features can then be matched against the accumulated database for robust camera localization. In our experiments, REST improved feature matching accuracy under variable lighting conditions by approximately 30%. Moreover, our system outperforms state-of-the-art CNN-based camera localization methods trained with synthetic images. We believe our method could be used to initialize local tracking and to simplify data accumulation for lighting-robust localization.
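
A minimal sketch of what an autoencoder-like real-to-synthetic feature translator could look like in PyTorch is shown below. The descriptor dimension, layer sizes, and MSE training loss are assumptions and may differ from the architecture and loss used in REST.

```python
# Sketch of an autoencoder-like network that maps real-image feature descriptors
# into the synthetic-image descriptor space. Sizes and loss are assumptions.
import torch
import torch.nn as nn

class FeatureTranslator(nn.Module):
    def __init__(self, dim=64, hidden=128, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, real_desc):
        # Convert a real-image descriptor to its synthetic counterpart
        return self.decoder(self.encoder(real_desc))

# Training sketch on matched pairs of (real, synthetic) descriptors
model = FeatureTranslator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(real_batch, synth_batch):
    opt.zero_grad()
    loss = loss_fn(model(real_batch), synth_batch)
    loss.backward()
    opt.step()
    return loss.item()
```
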
  • Sota Shoman, Tomohiro Mashita, Alexander Plopski, Photchara Ratsamee, Yuki Uranishi, and Haruo Takemura. "REST: Real-to-Synthetic Transform for Illumination Invariant Camera Localization." Submitted to ECCV 2018. https://arxiv.org/abs/1803.09448

Haptic Display Using a Drone


Encountered-type haptic displays recreate realistic haptic sensations by producing physical surfaces on demand for a user to explore directly with his or her bare hands. However, conventional encountered-type devices are fixed to the environment, so their working volume is limited. To address this limitation, we investigate the potential of an unmanned aerial vehicle (drone) as a flying motion base for a non-grounded encountered-type haptic device. As a lightweight end-effector, we use a piece of paper hung from the drone to represent the reaction force. Although the paper is limp, its shape is held stable by the strong airflow induced by the drone itself. We conduct two experiments to evaluate the prototype system. The first experiment evaluates the reaction force presentation by measuring the contact pressure between the user and the end-effector. The second experiment evaluates the usefulness of the system through a user study in which participants were asked to draw a straight line on a virtual wall represented by the device.

VisMerge: Light Adaptive Vision Augmentation via Spectral and Temporal Fusion of Non-visible Light


Low-light situations pose a significant challenge to individuals working in a variety of fields such as firefighting, rescue, maintenance, and medicine. Tools like flashlights and infrared (IR) cameras have been used to augment light in the past, but they must often be operated manually, provide a field of view that is decoupled from the operator's own view, and utilize color schemes that can occlude content from the original scene.

To help address these issues, we present VisMerge, a framework that combines a thermal imaging head mounted display (HMD) and algorithms that temporally and spectrally merge video streams of different light bands into the same field of view. For temporal synchronization, we first develop a variant of the time warping algorithm used in virtual reality (VR), but redesign it to merge video see-through (VST) cameras with different latencies. Next, using computer vision and image compositing, we develop five new algorithms designed to merge non-uniform video streams from a standard RGB camera and a small form-factor infrared (IR) camera. We then implement six other existing fusion methods and conduct a series of comparative experiments, including a system-level analysis of the augmented reality (AR) time warping algorithm, a pilot experiment to test perceptual consistency across all eleven merging algorithms, and an in-depth experiment performance-testing the top algorithms in a VR (simulated AR) search task. Results showed that we can reduce temporal registration error due to inter-camera latency by an average of 87.04%, that the wavelet and inverse stipple algorithms were perceptually rated the highest, that noise modulation performed best, and that freedom of user movement was significantly increased with visualizations engaged.
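
To give a flavor of the spectral fusion step, the sketch below performs a single-level wavelet fusion of a registered, RGB-derived grayscale frame and an IR frame using a common baseline rule (averaged approximation coefficients, maximum-absolute detail coefficients). It is not the exact wavelet algorithm evaluated in the paper.

```python
# Baseline wavelet-fusion sketch for two already-registered, same-size frames.
# Low-frequency bands are averaged; high-frequency bands keep the stronger edge.
import numpy as np
import pywt

def wavelet_fuse(gray_rgb, ir, wavelet="db2"):
    """gray_rgb, ir: 2-D float arrays in [0, 1], same shape."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(gray_rgb, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(ir, wavelet)

    # Low-frequency band: averaging preserves overall scene brightness
    cA = 0.5 * (cA1 + cA2)

    # High-frequency bands: keep whichever source has the stronger response
    def pick(d1, d2):
        return np.where(np.abs(d1) >= np.abs(d2), d1, d2)

    fused = pywt.idwt2((cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))), wavelet)
    return np.clip(fused, 0.0, 1.0)
```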

  • Orlosky, J., Kim, P., Kiyokawa, K., Mashita, T., Ratsamee, P., Uranishi, Y., & Takemura, H. (2017, October). VisMerge: Light Adaptive Vision Augmentation via Spectral and Temporal Fusion of Non-visible Light. In Mixed and Augmented Reality (ISMAR), 2017 IEEE International Symposium on (pp. 22-31). IEEE.

Cloud Robotics


This project was submitted by the RoboSamurai team to the Cloud Robotics Hackathon 2013. The aim of the competition was to create useful robotic applications that use natural and social human interactions and robot-to-robot collaboration by means of cloud computing and web services. Our project consisted of one robot that monitors human activity and shares the information to the cloud. In particular, the robot monitored when a person was doing exercise, and this activity was shared to the cloud as a graph. Another robot connected to the cloud and used the information to encourage the person.

Comment from the judges: "Team RoboSamurai truly exploited the full potential of MyRobots.com by a clever usage of the platform. Their application solves a real-life problem, is well implemented, is presented in an engaging way and, most importantly, it features robot-to-robot collaboration, robot-to-human collaboration, and monitoring. By using the specific capabilities of several robots, the team is able to track a human working out and encourage him along the way. For this unique application, they win the first prize of $1500!"

Social Interactive Robot Navigation


Robot navigation in a human environment is challenging because humans move according to many factors, such as social rules and the motion of others. When a robot is introduced into a human environment, many situations can be expected: a human may want to interact with the robot, or humans may expect the robot to avoid a collision. Robot navigation modeling has to take these factors into consideration.

This work presents a Social Navigation Model (SNM), a unified navigation and interaction model that allows a robot to navigate in a human environment and respond to humans according to their intentions, in particular in situations where a human encounters the robot and wants to avoid it, keep his or her course (unavoid), or approach (interact with) it. The proposed model is developed based on an analysis of human motion and behavior (especially face orientation and overlapping personal space) in preliminary experiments on human-human interaction. Avoiding, unavoiding, and approaching trajectories of humans are classified based on face orientation and the path predicted by a modified social force model. Our experimental evidence demonstrates that the robot is able to adapt its motion by preserving personal distance from passers-by and to interact with persons who want to interact with it, with a success rate of 90%. Simulation results show that a robot navigated by the proposed method can operate in a populated environment, significantly reducing the average overlap of personal space by 33.2% and reducing the average time humans need to reach their goals by 15.7% compared with the original social force model. This work contributes to the future development of a human-robot socialization environment.
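
As a simplified illustration of how face orientation can modulate a social force model (the actual formulation in the papers below differs in its terms and parameters), the following sketch reduces the repulsive force exerted by a pedestrian who is facing the robot, on the assumption that such a pedestrian shares the avoidance effort.

```python
# Simplified social-force-model step with a hypothetical face-orientation weight.
# Parameter values are illustrative, not those used in the publications.
import numpy as np

def social_force_step(pos, vel, goal, others, faces, dt=0.1,
                      v_des=1.2, tau=0.5, A=2.0, B=0.8):
    """pos, vel, goal: (2,) arrays for the robot.
    others: list of (2,) positions of nearby humans.
    faces: list of unit (2,) face-orientation vectors, one per human."""
    # Attractive force toward the goal at the desired speed
    e_goal = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    force = (v_des * e_goal - vel) / tau

    for p, face in zip(others, faces):
        d_vec = pos - p
        d = np.linalg.norm(d_vec) + 1e-9
        # A human facing the robot is assumed to have noticed it and to share
        # the avoidance effort, so the repulsion from that human is reduced.
        facing = max(0.0, float(face @ (d_vec / d)))   # 1 = looking at the robot
        weight = 1.0 - 0.5 * facing
        force += weight * A * np.exp(-d / B) * (d_vec / d)

    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel
```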

  1. Photchara Ratsamee, Yasushi Mae, Kenichi Ohara, Tomohito Takubo, and Tatsuo Arai, "Human-Robot Collision Avoidance using A Modified Social Force Model with Body Pose and Face Orientation", International Journal of Humanoid Robotics (IJHR), Vol. 10, No. 1, 2013. pdf
  2. Photchara Ratsamee, Yasushi Mae, Masaru Kojima, Mitsuhiro Horade, Kazuto Kamiyama, and Tatsuo Arai, "Social Interactive Robot Navigation based on Human Intention Analysis from Face Orientation and Human Path Prediction", International Journal of Robot and Mechatronics (ROBOMECH), 2015. pdf
  3. Photchara Ratsamee, Yasushi Mae, Kenichi Ohara, Masaru Kojima, and Tatsuo Arai, "Social Navigation Model based on Human Intention Analysis using Face Orientation", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1682-1687, 3-7 Nov. 2013. pdf
  4. Photchara Ratsamee, Yasushi Mae, Kenichi Ohara, Tomohito Takubo, and Tatsuo Arai, "Modified Social Force Model with Face Pose for Human Collision Avoidance", in Proc. of International Conference on Human-Robot Interaction (HRI): LBR Highlights, pp. 215-216, 2012. pdf
  5. Photchara Ratsamee, Yasushi Mae, Kenichi Ohara, Tomohito Takubo, and Tatsuo Arai, "People Tracking with Body Pose Estimation for Human Path Prediction", in Proc. of International Conference on Mechatronics and Automation (ICMA), Chengdu, China, pp. 1915-1920, 2012. pdf