Workshop on Robotic Food Manipulation

at HUMANOIDS 2019

Speakers

Invited Speakers

This workshop invites researchers working on component technologies for robotic food manipulation and their applications, who will lead discussions of open challenges in related fields.

Robot hand for food manipulation

Shinichi Hirai

Ritsumeikan University

Slides

Title: Soft Robotic Approach to Food Material Manipulation

Abstract: We will show soft robotic hands for food material manipulation. Food material handling in the food industry is still performed by humans. With the recent shortage of human resources, automatic handling of food materials is strongly required. One technical barrier to automatic food handling lies in grasping. Food materials vary greatly in their shapes, dimensions, softness, and surface properties. One promising approach to coping with such variation is to introduce soft materials, such as elastomers and fibers, into robotic hands. Namely, soft materials work as the physical interface between a hand and a food material, absorbing the variation. In this presentation, we will introduce soft robotic hands for food material manipulation, including prestretched hands, wrapping grippers, circular shell grippers, and binding hands.

Gen Endo

Tokyo Institute of Technology

Slides

Title: Mechanisms of food handling grippers – Practical applications and state-of-the-art academic research –

Abstract: In recent years, industrial robots have been applied to food production because of its huge potential market. Although cooking large amounts of food is done by specially designed cooking machines, dishing up the cooked food remains a labor-intensive task. In this talk, I will introduce various types of gripper mechanisms aimed at directly handling food. Firstly, application examples in food production are discussed. Secondly, state-of-the-art academic research on food handling is presented. In particular, I will introduce a two-degree-of-freedom multi-fingered gripper with a sliding push part, which is capable of grasping shredded vegetables and noodles. This gripper considers appetizing presentation when the grasped food is placed on a dish. Finally, we will discuss the remaining open issues and future research directions.

Food manipulation skills and computer vision

Akihiko Yamaguchi

Tohoku University

Slides

Title: Reinforcement Learning with Skill Library for Complicated Manipulation

Abstract: Recent research progress has made artificial intelligence tools more powerful, but achieving complicated manipulation tasks such as cooking remains an open challenge. Based on our research experience, we believe that representing and reasoning about alternative skills is key to meeting this challenge. We have developed a model-based reinforcement learning method using a library of skills, and investigated the method by applying it to pouring and banana-peeling tasks. This talk includes a summary of this work, lessons we have learned, and future directions.

Gary V. McMurray

Georgia Tech Research Institute

Slides

Title: Challenges in Grasping and Manipulation of Food Products

Abstract: Industrial robotic systems have been very successful at manipulating objects whose physical properties are known a priori or that are rigid and dry. In the food and agricultural domains, robotic systems must be able to work in an unstructured environment where not only are the position and orientation of every product unknown ahead of time, but every product can be unique, deformable, and even wet. This presents a unique set of challenges to the development and implementation of sensing and grasping solutions. This presentation will discuss the unique challenges in this domain and provide examples of systems that integrate advanced perception and control technologies into robotic systems to perform complex tasks like grasping and manipulation of food and agricultural products.

Yezhou Yang

Arizona State University

Title: Visual Recognition beyond Appearance, and its Applications in the Kitchens

Abstract: The goal of Computer Vision, as coined by Marr, is to develop algorithms to answer What is Where at When from visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an Active Perception paradigm. The talk will present the speaker's efforts over the last several years, ranging from 1) hidden entity recognition (such as action fluents, human intention, and force prediction from visual input), through 2) reasoning beyond appearance for solving image riddles and visual question answering, to 3) their roles in a robotic visual learning framework as well as in robotic visual search, with applications in the kitchen. The talk will also feature several ongoing projects and future directions within the Active Perception Group (APG), led by the speaker at the ASU School of Computing, Informatics, and Decision Systems Engineering (CIDSE).

Bio: Yezhou Yang is an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, where he directs the ASU Active Perception Group. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and high-level reasoning over the primitives for intelligent robots. Before joining ASU, Dr. Yang was a Postdoctoral Research Associate at the Computer Vision Lab and the Perception and Robotics Lab of the University of Maryland Institute for Advanced Computer Studies. He is a recipient of the 2011 Qualcomm Innovation Fellowship and the 2018 NSF CAREER award. He received his Ph.D. from the University of Maryland and his B.E. from Zhejiang University.

AI methods for food manipulation

Matthew Travers

Carnegie Mellon University

Title: [TITLE]

Abstract: [ABSTRACT]

Michael Spranger

Sony Computer Science Laboratories Inc.

Title: AIxGastronomy - An Industry Perspective

Abstract: Sony has recently released new concepts for deploying AI and robotics technology in kitchens at home and in restaurants. Sony's strategy is comprehensive, ranging from robots and AI for agriculture, to delivery robots, to new tools and robots for the preparation of dishes, as well as new tools for recipe, dish, and experience generation. Being an entertainment company, Sony's main focus is on enhancing the creativity of gastronomy experience creators and creating unique experiences for food lovers. The talk will introduce and discuss Sony's view on AI and robotics in the food experience industry and discuss R&D underway at Sony Corporation.

Tapomayukh Bhattacharjee

University of Washington

Title: Robot-Assisted Feeding: An Independent Meal a Day Can Make You Smile Right Away

Abstract: Robot-assisted feeding can potentially enable people with upper-body mobility impairments to eat independently. However, it poses multifaceted technical challenges. Successful robot-assisted feeding depends on reliable bite acquisition and easy bite transfer. Bite acquisition is challenging because it requires manipulation of a variety of deformable, hard-to-model food items of varying compliance, texture, size, and shape, so a fixed manipulation strategy may not work. Bite transfer is not trivial because it constitutes a unique type of robot-human handover in which the human needs to use the mouth, placing a high burden on the robot to make the transfer easy. This talk will focus on algorithms and technologies used to address these issues of bite acquisition and bite transfer. We first develop a taxonomy of food manipulation relevant to assistive feeding to organize the complex interplay between fork and food. Using insights from the taxonomy, we develop algorithms that leverage multiple sensing modalities to perceive food item properties and determine successful strategies for bite acquisition and transfer. Our autonomous robot-assisted feeding system uses these algorithms, which showcase food-item-dependent manipulation primitives, to reliably acquire a variety of novel solid food items and easily feed people with upper-body mobility impairments.

Application

Shunsuke Kudoh

The University of Electro-Communications

Title: Manipulation of Foodstuffs by Cooking Robot

Abstract: Recently, robots have come to be expected to play an active role in various services, including tasks supporting daily life. In comparison to industrial environments, robots in daily-life environments must work in unprepared settings and manipulate various types of objects. Our research group has conducted several projects aimed at utilizing robots in daily life, and a cooking robot is one such project. In the cooking robot project, we use a robot with two general-purpose robotic arms. If we focus on a specific cooking task, such as cutting, measuring, or mixing, practical machines have already been developed and are working on many industrial lines. However, for cooking robots that support daily life, it is not realistic to deploy such specialized machines for every cooking task. Hence, a single robot that can perform every cooking task is important for daily-life support. In the presentation, we will introduce our cooking robot project.

Masaru Kawakami

Yamagata University

Title: 3D Food Printing: Its Possibilities and Future

Abstract: 3D printing is being used in the manufacturing industry as a unique manufacturing method that utilizes three-dimensional modeling. 3D food printing also has great potential in the food industry, but it is still in the early stages of development due to the current limitations of food dispensing technology. The main advantages of 3D food printing are customizability, on-demand productivity, and the ability to shape complex structures.

We are developing nursing care food with a 3D food printer. Soft nursing food is suitable for use as an ingredient in 3D printing, and we use multiple nozzles to change the taste, hardness, etc., of the food. We expect that 3D food printers will be applied to the development of innovative foods as well as the reproduction of conventional foods.

YouTube link

Regular Speakers

Ethan Gordon

University of Washington

Title: A Contextual Bandit Framework for Adaptive Robotic Bite Acquisition

Abstract: Successful robot-assisted feeding requires bite acquisition of a wide variety of food items. Different food items may require different manipulation actions for successful bite acquisition. Therefore, a key challenge is to handle previously-unseen food items with very different action distributions. By leveraging contexts from previous bite acquisition attempts, a robot should be able to learn online how to acquire those previously-unseen food items. We construct an online learning framework for this problem setting and use the ε-greedy and LinUCB contextual bandit algorithms to minimize cumulative regret within that setting. Finally, we demonstrate empirically on a robot-assisted feeding system that this solution can adapt quickly to a food item with an action success rate distribution that differs greatly from previously-seen food items.
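The ε-greedy selection described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the action names and food contexts are hypothetical, and it uses discrete contexts with empirical success rates in place of the feature-based contexts and the LinUCB variant used in the actual system.

```python
import random
from collections import defaultdict


class EpsilonGreedyBandit:
    """Epsilon-greedy contextual bandit over discrete contexts.

    With probability epsilon, a random action is explored; otherwise the
    action with the best empirical success rate for the given context is
    exploited. Untried (context, action) pairs are treated optimistically
    so every action is attempted at least once per context.
    """

    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = defaultdict(int)     # (context, action) -> attempts
        self.successes = defaultdict(int)  # (context, action) -> successes

    def select(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)  # explore

        def rate(action):
            n = self.counts[(context, action)]
            # Optimistic initial value for untried actions.
            return self.successes[(context, action)] / n if n else 1.0

        return max(self.actions, key=rate)  # exploit

    def update(self, context, action, reward):
        # reward is 1 for a successful bite acquisition, 0 otherwise.
        self.counts[(context, action)] += 1
        self.successes[(context, action)] += reward


# Example round (hypothetical context and reward):
bandit = EpsilonGreedyBandit(["skewer", "scoop"])
action = bandit.select("soft-fruit")
bandit.update("soft-fruit", action, reward=1)
```

Epsilon trades off exploration against exploitation: a larger value adapts faster to previously-unseen food items at the cost of more failed attempts on well-understood ones, which is exactly the cumulative-regret trade-off the abstract describes.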

Kevin Zhang

Carnegie Mellon University

He will present, on behalf of Oliver Kroemer, on robot learning to cut vegetables.