Everyday applications are bringing robots out of their traditional caged industrial environments and into human environments such as houses, hospitals, and museums, where they are expected to assist us with daily tasks. Such human-inhabited environments are highly unstructured, dynamic, and uncertain. Robots are now required to interact autonomously with these environments and physically cooperate with people, increasing the demand for reliable perception, planning, and control subsystems. This raises several fundamental research problems, such as how to design control systems that must work with potentially uncertain environment models and uncertain sensory feedback, deal with unpredictable and complex interactions, and adapt in real time. These scenarios are especially critical when robots interact and cooperate with humans.
We can approach such problems from two different perspectives: control theory and machine learning. Control approaches are based on models and sensory feedback and provide analytic solutions, where the models are often simplified computational representations. Classical robot systems are mostly characterized by high-gain negative-error feedback control, which is unsuitable for tasks involving interaction with the environment (and possibly humans) because of potentially high impact forces. Impedance control, or even variable impedance control (VIC), provides a feasible way to overcome position uncertainties and thus avoid large impact forces, since the robot modulates its motion or compliance according to force and visual perception. However, we still need to avoid hard-coding such skills. How can robots perform them without being hard-coded? Or, how can robots acquire knowledge and use it to perform such skills intelligently?
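To make the impedance-control idea concrete, a minimal one-dimensional sketch is shown below: the robot tracks a desired position through a virtual spring-damper, and one simple (illustrative) policy softens the stiffness as sensed contact force grows. All function names, gains, and the particular stiffness-modulation rule are assumptions for illustration, not a specific controller from the literature.

```python
# Minimal 1-D variable impedance control sketch (illustrative only).

def impedance_force(x, x_dot, x_des, xd_dot, stiffness, damping):
    """Commanded force F = K*(x_des - x) + D*(xd_dot - x_dot):
    a virtual spring-damper pulling the robot toward the desired motion."""
    return stiffness * (x_des - x) + damping * (xd_dot - x_dot)

def variable_stiffness(contact_force, k_min=50.0, k_max=500.0, f_ref=10.0):
    """One simple modulation policy (an assumption, not a standard law):
    the robot becomes more compliant as the sensed contact force grows,
    bottoming out at k_min once the force reaches f_ref."""
    scale = max(0.0, 1.0 - abs(contact_force) / f_ref)
    return k_min + (k_max - k_min) * scale

# Free motion: full stiffness; firm contact: minimal stiffness.
k_free = variable_stiffness(0.0)    # -> 500.0
k_contact = variable_stiffness(10.0)  # -> 50.0
f = impedance_force(0.0, 0.0, 0.1, 0.0, k_free, 10.0)  # -> 50.0
```

The key point the paragraph makes is visible here: instead of a stiff error-feedback loop, the commanded force stays bounded and the gains themselves become a function of perception.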
Robot learning provides suitable approaches for acquiring VIC skills from human demonstrations (Learning from Demonstration, LfD) or through exploration. LfD has been widely studied as a convenient way to transfer human skills to robots: it aims to extract relevant motion patterns from human demonstrations and apply them to new situations. Why not, then, combine the two techniques and exploit their complementary advantages? In principle, combining learning and impedance control would enhance robot manipulation performance and safety in unstructured environments, and improve the handling of perturbations during interaction. In other words, a robot that can acquire knowledge autonomously and use it to perform tasks intelligently will be of more use than one hard-coded to perform a few tasks repetitively. We believe that intelligent robots can enforce safety and reliability while building upon the principles of explainable AI.
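One common way the two ideas meet, sketched below under simplifying assumptions, is to derive a stiffness profile from the variability across demonstrations: where demonstrations agree, the task demands precision and the robot stays stiff; where they disagree, the robot can be compliant. The function name, gain bounds, and the inverse-variance mapping are illustrative choices, not a specific published method.

```python
# Sketch: per-time-step stiffness from demonstration variability
# (illustrative; constants and the mapping are assumptions).

def stiffness_from_demos(demos, k_min=50.0, k_max=500.0):
    """demos: list of equally long position trajectories (lists of floats).
    Returns one stiffness value per time step: near k_max where the
    demonstrations agree, decaying toward k_min as their variance grows."""
    n_steps = len(demos[0])
    profile = []
    for t in range(n_steps):
        vals = [d[t] for d in demos]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        profile.append(k_min + (k_max - k_min) / (1.0 + var))
    return profile

# Two demos that agree at step 0 and diverge at step 1:
profile = stiffness_from_demos([[0.0, 1.0], [0.0, 3.0]])
# profile[0] -> 500.0 (agreement: stiff), profile[1] -> 275.0 (spread: softer)
```

This is the sense in which LfD and VIC subsume each other's advantages: the demonstrations supply not only the motion pattern but also a cue for how compliantly to execute each part of it.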
The aim of this workshop is to bring together experts to examine current research on transferring compliant motions from humans to robots, allowing for safe and energy-efficient interactions. This enables robots to perform in many scenarios, not only those requiring physical interaction with humans but also industrial settings. In addition, the workshop aims to share the experiences and achievements of scientists and researchers from both the control theory and machine learning communities in bringing robots into human daily life. Last but not least, it aims to discuss the state of the art in variable impedance robot skills, the advantages offered by robot learning methods, and future research directions.