In recent years, many human-like robots have been developed and put to practical use. Accordingly, there is a strong need for such robots to be capable of operating in a human environment. Unlike the structured factory assembly lines in which industrial robots operate, the human environment is complex and unstructured. To adapt to this environment, robots require a perception ability similar to that of humans.
When performing tasks, robots typically use tactile sensors installed on their end-effectors, and control algorithms are accordingly designed around the end-effector. In contrast, humans rely strongly on the sense of touch experienced through their arms, legs, or torso. In particular, when a clear view is difficult to secure, humans preferentially use the tactile information acquired through the body.
Consider a person aiming to sit down on a chair. The person roughly knows the position of the chair behind him/her and the approach to take; however, because the sense of vision is unavailable, he/she must also use the tactile response of the body to estimate the chair's position and make the necessary corrections. For human-like robots, Hirukawa et al. (2005) and Ogura et al. (2006) indicated that it is similarly far more beneficial and natural to sense and utilize contacts along the entire body and links.
Many studies have focused on control theory for contact-based manipulation. Park (2006) established control strategies for robots in contact. Park and Khatib (2008) demonstrated multiple contact control. Sentis et al. (2010) proposed a compliant control strategy for humanoids in multi-contact by analyzing internal forces and center-of-mass behavior. However, most studies assume that the robot can recognize the contact location on the link using touch sensors. In other words, contact location estimation has thus far been considered only with touch sensors. Many different types of tactile sensors have been developed previously (Howe, 1993; Suwanratchatamanee et al., 2010; Ponce Wong et al., 2012), but these cannot easily be applied to large robotic systems owing to cost and implementation issues.
This study proposes an alternative to traditional force and tactile sensing: using the robot's geometry and kinematics to reveal the location of a contact. Petrovskaya et al. (2007) previously studied a probabilistic approach for contact estimation without a tactile sensor by considering a point contact situation with an edge.
Although contact is an essential interaction for manipulators, most robotic systems rely on a priori positional information to perform tasks. In fact, current robotic systems do not adequately sense or use contact information. To realize robust physical interactions with the environment, the robot should allow contact between the environment and any of the links. However, typical position control approaches only consider contact through the end-effector, and they do not consider the system dynamics. In this study, to overcome these issues, new sensing algorithms and control approaches are developed to simultaneously identify the contact position and control the robot in more general contact situations.
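The idea of recovering a contact location from geometry and kinematics alone can be illustrated with a minimal sketch. This is my own simplification, not the paper's algorithm: for a planar two-link arm in quasi-static, gravity-compensated contact on the distal link, with the contact force assumed normal to that link, the two external joint torques determine both the contact location and the force magnitude in closed form.

```python
import math

def estimate_contact(tau1, tau2, q2, l1):
    """Recover contact distance r (from joint 2, along link 2) and normal
    force magnitude f from external joint torques of a planar 2-link arm.

    Model (quasi-static, force normal to link 2):
        tau1 = f * (l1 * cos(q2) + r)   # moment about joint 1
        tau2 = f * r                    # moment about joint 2
    Solving the two equations gives r and f (requires tau1 != tau2).
    """
    r = l1 * math.cos(q2) * tau2 / (tau1 - tau2)
    f = tau2 / r
    return r, f

# Forward check: simulate torques for a known contact, then recover it.
l1, q2 = 0.4, 0.5            # link-1 length [m], elbow angle [rad] (assumed)
r_true, f_true = 0.2, 5.0    # true contact location [m] and force [N]
tau1 = f_true * (l1 * math.cos(q2) + r_true)
tau2 = f_true * r_true
r_est, f_est = estimate_contact(tau1, tau2, q2, l1)
```

The single-contact, normal-force assumption is what makes the system square (two torque measurements, two unknowns); relaxing it, as the paper's more general setting requires, leaves the problem underdetermined from one configuration alone, which motivates active sensing across configurations.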
Publication
Hooman Lee, Jaeheung Park, An Active Sensing Strategy for Contact Location without Tactile Sensors Using Robot Geometry and Kinematics, Autonomous Robots, Vol. 36, Issue 1-2, pp. 109-121, Jan 2014.
As robot technology progresses, service robots such as the Roomba[1] have been developed and have gained widespread popularity. These robots, however, perform only the particular tasks they were designed for. Future consumers may want all-round robots that can provide a broad range of services: robots clever enough to carry out many tasks and adaptable enough for the user to train them to carry out new ones.
To make the best use of such a robot, a simple and intuitive method is necessary to allow humans to teach the robot new skills or to modify existing skills. To achieve such a robot system, at least two major problems have to be solved. First, how should the human user teach the robot new or modified skills? Second, how should the robot learn these new skills or modify its existing skills? These two problems are considered together as "Robot Teaching" in this research.
There are some in-depth reviews of individual robot teaching methods[2][3][4], but there still appears to be no comprehensive survey across them. Such a survey is necessary to understand what has been achieved in this domain and to provide researchers with a map of robot teaching methods. I conducted a preliminary survey of this domain, and my paper[5] offers a rough overview of it.
[1] IRobot, http://www.irobot.com/en/us/robots/home/roomba.aspx
[2] Billard, A. and Calinon, S., "Robot Programming by Demonstration," in Handbook of Robotics, B. Siciliano and O. Khatib, Eds., Springer, 2008.
[3] Argall, B. D., Chernova, S., Veloso, M., and Browning, B., "A Survey of Robot Learning from Demonstration," Robotics and Autonomous Systems, Vol. 57, Issue 5, 2009.
[4] Argall, B. D. and Billard, A. G., "A Survey of Tactile Human-Robot Interactions," Robotics and Autonomous Systems, Vol. 58, Issue 10, pp. 1159-1176, 2010.
[5] Lee, H. and Kim, J., "A Survey on Robot Teaching: Categorization and Brief Review," Applied Mechanics and Materials, Vol. 330, pp. 648-656, 2013.
Intuitive robot programming methods I - Phantom Omni
Intuitive robot programming methods II - Development of Upper-limb Exoskeleton
In this paper we present a master-slave system for programming dual-arm industrial robots. Traditional robot motion programming devices, teach pendants, usually handle single-arm motions, so it is difficult to teach dual-arm motions that require synchronization of the arms. Furthermore, a teach pendant is a one-dimensional position input device, which is not sufficient for teaching three-dimensional motions, so teaching complex motions requires considerable time. To address this problem, we developed a passive exoskeleton as the master device of a master-slave teleoperation system. The proposed teleoperation-based robot programming method was verified through experiments on via-point teaching. An operational space framework is adopted to control the slave arm.
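The operational space framework mentioned above can be sketched in its basic form. This is a generic illustration of Khatib's operational-space control law with toy numbers, not the controller actually used for the slave arm; it omits Coriolis/centrifugal and null-space terms, and all matrix values below are assumed for illustration.

```python
import numpy as np

def opspace_torque(M, J, g, a_des):
    """Basic operational-space law: tau = J^T Lambda(q) a_des + g(q),
    where Lambda = (J M^-1 J^T)^-1 is the task-space inertia. Shapes the
    end-effector dynamics toward the desired task-space acceleration a_des
    while compensating gravity (Coriolis and null-space terms omitted)."""
    Lambda = np.linalg.inv(J @ np.linalg.inv(M) @ J.T)  # task-space inertia
    return J.T @ (Lambda @ a_des) + g

# Toy values for a 2-DoF planar slave arm (assumed, for illustration only)
M = np.array([[2.0, 0.3], [0.3, 1.0]])   # joint-space inertia matrix
J = np.array([[0.1, 0.4], [0.5, 0.2]])   # end-effector Jacobian
g = np.array([1.5, 0.7])                 # gravity torque vector
a_des = np.array([0.5, -0.2])            # desired task-space acceleration
tau = opspace_torque(M, J, g, a_des)
```

Mapping the master-device motion to `a_des` (e.g., through a task-space PD law on the position error between master and slave) is one natural way such a teleoperation loop could close around this controller.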
Intuitive robot programming methods III - Robot Teaching System