Title: Towards Social Intelligence
Robots have been part of our lives “behind the scenes” for a relatively long time, building our cars and our electronics. In the last few decades, however, the expectation has grown that they will become a more active part of our society. We believe that for this to happen, robots must acquire a form of social intelligence, starting from the very ability to establish mutual understanding with humans. This skill, a staple of child development, has yet to be fully achieved in current robotic platforms. Important aspects of human social cognition remain to be understood and taught to artificial agents, such as how to establish shared perception between different agents, or which cognitive architecture is necessary for the emergence of social cognition. Although robots are not yet socially intelligent, they may play an important role in advancing our understanding of the fundamental aspects of social cognition, for the benefit of robots and humans alike.
Title: Robot-assisted Feeding: Exploring Autonomy for People with Mobility Limitations
Abstract: Robot-assisted feeding can potentially enable people with upper-body mobility impairments to eat independently. Eating free-form food is one of the most intricate manipulation tasks we perform in our daily lives, and manual teleoperation of such an intricate task can be very challenging and time-consuming for people with mobility limitations. Successful robot-assisted feeding depends on reliable bite acquisition of hard-to-model deformable food items and easy bite transfer. We focus on an intelligent autonomous solution that leverages multiple sensing modalities to perceive varied food item properties and determine successful strategies for bite acquisition and transfer, using algorithms and technologies developed with insights from human studies.
So, where do we stand with respect to long-term real-world deployment? Using feedback from all stakeholders and comparing against a publicly available commercial feeding device, we develop a contextual framework for evaluating assistive devices and identify open problems for our existing technology. Interestingly, we found no clear overall preference for higher levels of autonomy. When grouped according to their mobility limitations, however, participants with more severe limitations reported lower expectations of robot performance and preferred higher levels of autonomy, even with perceived errors, in potentially uncertain and unstructured environments.
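To make the idea of property-driven strategy selection concrete, here is a minimal Python sketch of how fused visual and haptic features might map to a bite-acquisition strategy. The feature names, thresholds, and strategy labels are illustrative assumptions, not the system described in the abstract.

```python
# Illustrative sketch (not the authors' system): selecting a bite-acquisition
# strategy from fused visual and haptic features. All names, feature values,
# and thresholds here are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class FoodItemFeatures:
    stiffness: float      # e.g. a normalized reading from a force probe
    slipperiness: float   # e.g. estimated from past acquisition attempts
    height_mm: float      # e.g. from an RGB-D segmentation of the plate

def choose_acquisition_strategy(f: FoodItemFeatures) -> str:
    """Map perceived food properties to a skewering strategy."""
    if f.stiffness < 0.2:
        return "angled-skewer"    # soft items tear if skewered vertically
    if f.slipperiness > 0.7:
        return "scoop"            # slippery items slide off the fork tines
    return "vertical-skewer"      # default for firm items

plate = [FoodItemFeatures(0.1, 0.3, 12.0), FoodItemFeatures(0.8, 0.8, 20.0)]
for item in plate:
    print(choose_acquisition_strategy(item))
```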
Title: Teaching New Behaviors to Robots in Real Conditions Using Spoken Language
Abstract: Social robots like Pepper are already found "in the wild", and their behaviors must be adapted to each use case by experts. Enabling the general public to teach new behaviors to robots may lead to better adaptation at lower cost. In this thesis, we study a cognitive system and a set of robotic behaviors allowing home users of Pepper robots to teach new behaviors as compositions of existing behaviors, using spoken language alone. Homes are open, unpredictable worlds, and in such open scenarios a home social robot should learn about its environment. The purpose of such a robot is not restricted to learning new behaviors or facts about its environment: it should also provide entertainment or utility, and therefore support rich scenarios. We demonstrate the teaching of behaviors in these unique conditions: teaching is achieved through spoken language on Pepper robots deployed in homes, with no extra device and using the robot's standard system, in a rich and open scenario. Using automatic speech transcription and natural language processing, our system recognizes unanticipated teachings of new behaviors, as well as explicit requests to perform them. The new behaviors may invoke existing behaviors parametrized with objects learned in other contexts, and may themselves be defined as parametric. Through experiments of growing complexity, we reveal conflicts between behaviors in rich scenarios and propose a solution based on symbolic task planning and prioritization rules to resolve them. The results, supported by qualitative and quantitative analysis, highlight the limitations of our solution but also the new applications it enables.
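As a rough illustration of the kind of system described above, the following Python sketch shows how taught behaviors could be stored as compositions of existing ones, and how simple priority rules might order conflicting requests. All behavior names and the priority scheme are hypothetical; the thesis itself relies on symbolic task planning, not this toy scheduler.

```python
# Hypothetical sketch of behavior composition and priority-based conflict
# resolution; names and scheme are assumptions, not the thesis' planner.
PRIMITIVES = {"go_to", "say", "pick_up"}   # assumed built-in behaviors

class Behavior:
    def __init__(self, name, steps, priority=0):
        self.name = name
        self.steps = steps        # list of (behavior_name, argument) pairs
        self.priority = priority  # used to order conflicting requests

registry = {}

def teach(name, steps, priority=0):
    """Register a behavior composed of primitives or previously taught ones."""
    for step, _ in steps:
        if step not in PRIMITIVES and step not in registry:
            raise ValueError(f"unknown step: {step}")
    registry[name] = Behavior(name, steps, priority)

def expand(name):
    """Recursively flatten a taught behavior into primitive actions."""
    actions = []
    for step, arg in registry[name].steps:
        if step in PRIMITIVES:
            actions.append((step, arg))
        else:
            actions.extend(expand(step))
    return actions

def resolve(requests):
    """Order conflicting requests by priority (a toy stand-in for planning)."""
    return sorted(requests, key=lambda n: registry[n].priority, reverse=True)

# Teaching "fetch_cup" as a composition of primitives:
teach("fetch_cup", [("go_to", "kitchen"), ("pick_up", "cup"),
                    ("go_to", "user"), ("say", "here is your cup")], priority=1)
teach("greet", [("say", "hello")])
print(resolve(["greet", "fetch_cup"]))   # fetch_cup first: higher priority
print(expand("fetch_cup"))
```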
Title: The importance of the sense of touch in Human-Robot/Vehicle Interaction
Abstract: Humans rely on the sense of touch for grasping, manipulating, and identifying objects via physical properties such as texture, shape, and stiffness. Compensating for the lack of touch with other human senses is hardly feasible. For robotic systems meant to interact with dynamic environments, recognizing object properties is a crucial but difficult task even for advanced vision techniques, owing to occlusion, poor lighting conditions, and a lack of precision. Tactile sensing, by contrast, can provide robotic systems with rich and direct feedback from abundant simultaneous contact points and a large sensing area. The challenges of efficiently deploying and processing such high volumes of distributed tactile data have so far prevented the effective use of artificial skin technology in robotics. In this talk I present our work on tactile data processing, robust tactile feature extraction, and active object manifold learning for our baby robot with a whole-body sense of touch (4,000 tactile sensors).
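As a loose illustration of what processing distributed tactile data can involve, the sketch below reduces a dense map of tactile sensor ("taxel") pressures to a compact contact feature vector. The grid layout, threshold, and feature set are assumptions for illustration, not the pipeline presented in the talk.

```python
# Illustrative only: simple contact features from a distributed taxel array.
# Layout, threshold, and features are assumptions, not the talk's pipeline.
import numpy as np

def contact_features(pressures, threshold=0.05):
    """Reduce a dense taxel pressure map to a compact feature vector.

    pressures: 2-D array of per-taxel readings, normalized to [0, 1].
    Returns (contact_area, total_force, centroid_row, centroid_col).
    """
    mask = pressures > threshold          # taxels currently in contact
    area = int(mask.sum())
    force = float(pressures[mask].sum())
    if area == 0:
        return (0, 0.0, np.nan, np.nan)
    rows, cols = np.nonzero(mask)
    weights = pressures[mask]             # pressure-weighted contact centroid
    return (area, force,
            float(np.average(rows, weights=weights)),
            float(np.average(cols, weights=weights)))

# Simulated reading from a 64x64 skin patch (~4096 taxels, comparable in
# count to the 4,000-sensor whole-body skin mentioned above):
patch = np.random.default_rng(0).random((64, 64)) * 0.04   # background noise
patch[20:24, 30:35] = 0.8                                  # synthetic contact blob
print(contact_features(patch))
```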