IJCAI 2019 Workshop on AIxFood

August 11, 2019 | Macau, China


ROOM: Sicily 2406

The tasks of cooking and eating present exciting research and application challenges for AI systems. Research on food preparation, delivery, and consumption builds on universal human experiences and addresses critical human needs. At the same time, successfully developing AI systems for cooking and eating presents many challenges across the spectrum of AI: knowledge representation, machine learning, planning, robotics, assistive technologies, and human-computer and human-robot interaction. The AIxFood workshop will include both industry and academic perspectives, and targets researchers and practitioners from any field of AI with an interest in food and cooking.

This workshop is part of the 2019 International Joint Conference on Artificial Intelligence (IJCAI-19).

Dates

  • Abstract submission deadline: 15 May 2019 AoE (extended from 30 April 2019)
  • Acceptance notifications: 23 May 2019 (extended from 7 May 2019)
  • Camera ready deadline: 23 June 2019
  • Workshop date: 11 August 2019

Schedule

08:30 - 09:00 Invited talk: Masahiro Fujita on AI x Gastronomy (industry perspective)

09:00 - 09:25 Invited talk: Donghyeon Park on The KitcheNette system for predicting food pairings

09:25 - 10:00 Lightning talks

10:00 - 11:00 Coffee break / poster session

11:00 - 11:45 Invited talk: Gary McMurray on Advanced Perception and Control for Robots in Food and Agriculture

11:45 - 12:10 Invited talk: Oliver Kroemer on Adapting and Monitoring Robotic Cutting Skills

12:10 - 12:30 Discussion and wrap up

12:30 - 14:00 Lunch

Invited Speakers

Title: AI x Gastronomy (industry perspective)

Abstract: Sony has recently released new concepts for deploying AI and robotics technology in kitchens at home and in restaurants. Sony's strategy is comprehensive, ranging from robots and AI for agriculture, to delivery robots, to new tools and robots for the preparation of dishes, as well as new tools for recipe, dish, and experience generation. Being an entertainment company, Sony's main focus is on enhancing the creativity of gastronomy experience creators and creating unique experiences for food lovers. The talk will introduce and discuss Sony's view on AI in the food experience industry and discuss ongoing R&D at Sony Corporation.

Bio: Masahiro Fujita joined Sony Corporation in 1981. He started the Robot Entertainment project in 1993 and developed the world's first fully autonomous entertainment robot, AIBO, and a small humanoid robot, QRIO. He became a director of Sony Intelligence Dynamics Laboratories Inc., established in 2004, where he led a new approach to the study of intelligence, aiming to realize the emergence of intelligence with an emphasis on embodiment and dynamics. In 2012, he again started robotics R&D at the System Research & Development Group (SRDG), R&D PF. In 2016, he became head of the Technology Strategy Department, where he is in charge of strategy planning for SRDG's technology and incubation development. In addition to his R&D position, currently as Senior Chief Researcher in the AI Collaboration Office, he leads AI x Robotics x Gastronomy and AI Ethics related technology projects in collaboration with universities and startups.

Title: Advanced Perception and Control for Robots in Food and Agriculture

Abstract: Robotic systems have traditionally been very successful in performing tasks where the inputs are well defined and known in advance. Automotive and electronics manufacturing are the classic success stories where robotic systems have demonstrated incredible value for industry. The tasks performed by these robots almost exclusively involve manipulating or interacting with objects whose physical properties are known a priori and which are rigid and dry. In the food and agricultural domains, robotic systems must be able to work in unstructured environments where every product is unique, deformable, and even wet. This presentation will give examples of systems that integrate advanced perception and control technologies into robotic systems to perform complex tasks like cutting, grasping, and manipulation.

Bio: Gary McMurray is a Principal Research Engineer and Division Chief for the Food Processing Technology Division at the Georgia Tech Research Institute. He is also an Associate Director of the Institute for Robotics and Intelligent Machines (IRIM) at Georgia Tech. IRIM serves as an umbrella under which robotics researchers, educators, and students from across campus can come together to advance the many high-powered and diverse robotics activities at Georgia Tech. The Food Processing Technology Division conducts innovative research in sensors for food quality and food safety, robotics, and water/energy sustainability for the poultry and broader agricultural industries. Mr. McMurray's research has focused on the development of robotic technologies and solutions for the manufacturing and agribusiness communities, including the protein and the fruit and vegetable industries. He is an expert in visual servoing, the use of vision for the real-time control of robots, and the author of over 50 refereed technical papers and journal publications in robotics. Mr. McMurray serves on the advisory board for Advanced Animal Systems at the Foundation for Food and Agriculture Research. He also serves on the Board of Directors of the Robotic Industries Association.

Title: The KitcheNette system for predicting food pairings

Abstract: As a vast number of ingredients exist in the culinary world, there are countless possible food ingredient pairings, but only a small number of pairings have been adopted by chefs and studied by food researchers. In this work, we propose KitcheNette, a model that predicts food ingredient pairing scores and recommends optimal ingredient pairings. KitcheNette employs Siamese neural networks and is trained on our annotated dataset containing 300K pairing scores generated from numerous ingredients in food recipes. As the results demonstrate, our model not only outperforms other baseline models but can also recommend complementary food pairings and discover novel ingredient pairings.
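To make the Siamese idea concrete: the defining property is that one shared embedding network processes both ingredients, and the pairing score is combined symmetrically so that score(a, b) = score(b, a). The sketch below is purely illustrative and not the KitcheNette implementation; all names, dimensions, and (random) weights are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: raw ingredient feature vector -> shared embedding.
IN_DIM, EMB_DIM = 16, 8

# Shared weights: the SAME embedding is applied to both ingredients.
# This weight sharing is what makes the architecture "Siamese".
W = rng.normal(scale=0.1, size=(IN_DIM, EMB_DIM))
v = rng.normal(scale=0.1, size=EMB_DIM)

def embed(x):
    """Shared embedding branch (a single tanh layer, for illustration)."""
    return np.tanh(x @ W)

def pairing_score(a, b):
    """Symmetric pairing score in (-1, 1).

    Both combination terms (elementwise product and sum) are commutative,
    so swapping the two ingredients cannot change the score.
    """
    ea, eb = embed(a), embed(b)
    combined = ea * eb + 0.5 * (ea + eb)
    return float(np.tanh(combined @ v))

# Stand-in feature vectors for two ingredients (random placeholders).
garlic = rng.normal(size=IN_DIM)
butter = rng.normal(size=IN_DIM)

print(pairing_score(garlic, butter) == pairing_score(butter, garlic))  # True
```

In the actual system, such a model would be trained (e.g. with a regression loss) against the annotated pairing scores described above; here the weights are random, so only the structural properties (weight sharing and symmetry) are demonstrated.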

Title: Adapting and Monitoring Robotic Cutting Skills

Abstract: Cutting is a fundamental skill for food preparation. Like many food preparation processes, the cutting task poses a number of challenges for robots. Cutting skills need to generalize between food items with different shapes and material properties. The robot also needs to be able to detect key events during the process, such as when the knife makes contact with the vegetable or the cutting board. In this talk, we will present methods for learning and adapting cutting skills for robots. We will particularly focus on the use of vibration feedback for adapting and monitoring the skill executions.

Bio: Dr. Oliver Kroemer is an assistant professor at the Carnegie Mellon University (CMU) Robotics Institute, where he leads the Intelligent Autonomous Manipulation Lab. His research focuses on developing algorithms and representations to enable robots to learn versatile and robust manipulation skills. Before joining CMU, Dr. Kroemer was a postdoctoral researcher at the University of Southern California (USC) for two and a half years. He received his Master's and Bachelor's degrees in engineering from the University of Cambridge in 2008. From 2009 to 2011, he was a Ph.D. student at the Max Planck Institute for Intelligent Systems. He defended his Ph.D. thesis on Machine Learning for Robot Grasping and Manipulation in 2014 at the Technische Universitaet Darmstadt, and was a finalist for the 2015 Georges Giralt Ph.D. Award for the best robotics Ph.D. thesis in Europe.

Topics of Interest

The workshop is interested in work on all aspects of AI and food, including but not limited to the following topics:

* Planning in the cooking domain for recipe and dish creation

* Classification of dishes, ingredients, and ingredient features (cooking state, quantity, seasoning) from images and videos (computer vision)

* AI and ML in novel recipe generation (computational creativity)

* Knowledge representation for food domain knowledge: recipes, cooking procedure, ingredient interaction, flavour, nutrition, and health facts

* Knowledge extraction for food domain knowledge: ontologies, recipes

* Recipe and food domain knowledge extraction from text (natural language processing)

* Reasoning for changing recipes, dealing with nutritional aspects, health

* Interfaces for monitoring food intake and promoting healthy eating

* Shared autonomy for cooking and eating in the real world

* Interfaces for food related AI systems (HCI)

* Food manipulation and cooking skill learning for robots (robot learning)


Application domains of interest include (but are not limited to):

* Cooking robots

* Assistive feeding robots

* Everyday cooking and high-end meal creation

* Recipe retrieval, design and creation

* Cooking for health and sustainability

* Cooking as entertainment

Call for Extended Abstracts

We solicit 2-4 page extended abstracts describing past, recently completed, or ongoing work contributing to any of the topics above. Extended abstracts will be reviewed for suitability by the organizers. If in doubt about whether your extended abstract is in scope for the workshop, please contact us directly. Please format your paper using the IJCAI style files.

Submission: 2-4 page extended abstracts

Please submit extended abstracts through: https://cmt3.research.microsoft.com/AIXFOOD2019

You will be able to upload supplementary material (e.g., videos) after submitting the paper by going to the author console in CMT and clicking "Upload Supplementary Materials."

Organizers

Program Committee

  • Chris Atkeson (Carnegie Mellon University)
  • David Dubois (Kinova Robotics)
  • Laura Herlant (iRobot)
  • Ichiro Ide (Nagoya University)
  • George Kantor (Carnegie Mellon University)
  • Stavroula Mougiakakou (University of Bern)
  • Jan Peters (Technische Universitaet Darmstadt)
  • Manuela Veloso (Carnegie Mellon University)
  • Keiji Yanai (University of Electro-Communications, Tokyo, Japan)
  • Kaoru Yoshida (Sony Computer Science Laboratories Inc.)

Contact

If you have any questions, comments or feedback, please contact Michael Spranger at michael [dot] spranger [at] gmail [dot] com