On this page, we introduce two cooking-related research projects from the Sugano and Ogata laboratories at Waseda University, Japan.
The humanoid robot NEXTAGE OPEN performs ingredient scooping and pouring tasks, using a turner or a ladle to transfer an ingredient from a pot to a bowl. The robot recognizes object characteristics and serves food even when the target ingredients are unknown (e.g. milk, grains, fish).
We focus on active perception using multimodal sensorimotor data gathered while the robot interacts with ingredients, which allows it to recognize their extrinsic (shape, size, colour, etc.) and intrinsic (weight, friction, viscosity, etc.) characteristics. We construct a deep neural network model that learns to recognize ingredient characteristics, acquires tool–object–action relations, and generates motions for tool selection and handling.
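To give a flavour of this kind of setup, below is a minimal sketch (in PyTorch) of a multimodal recurrent network that fuses camera, force, and joint-angle streams and predicts both the next motion command and a latent "ingredient characteristics" code. All layer sizes, modality choices, and names here are illustrative assumptions, not the architecture from [1]; please see the paper for the actual model.

```python
# Minimal illustrative sketch of multimodal active perception + motion
# generation. NOT the architecture from [1]; dimensions, modalities, and
# the recurrent core are assumptions made for this example only.
import torch
import torch.nn as nn

class MultimodalMotionNet(nn.Module):
    def __init__(self, img_feat_dim=64, force_dim=6, joint_dim=7, hidden_dim=128):
        super().__init__()
        # Encoder for camera images (assumed 3x64x64 input).
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, img_feat_dim),
        )
        # Recurrent core: interaction history is what lets the robot infer
        # intrinsic properties (weight, viscosity, ...) it cannot see directly.
        self.rnn = nn.LSTMCell(img_feat_dim + force_dim + joint_dim, hidden_dim)
        # Heads: next joint command, and a latent characteristics code.
        self.motion_head = nn.Linear(hidden_dim, joint_dim)
        self.char_head = nn.Linear(hidden_dim, 8)

    def forward(self, images, forces, joints):
        # images: (T, B, 3, H, W); forces: (T, B, 6); joints: (T, B, 7)
        T, B = joints.shape[:2]
        h = c = torch.zeros(B, self.rnn.hidden_size)
        motions = []
        for t in range(T):
            x = torch.cat([self.img_enc(images[t]), forces[t], joints[t]], dim=-1)
            h, c = self.rnn(x, (h, c))
            motions.append(self.motion_head(h))
        return torch.stack(motions), self.char_head(h)

# Training would regress the predicted motions against next-step joint
# angles recorded from demonstrations (dummy tensors used here).
model = MultimodalMotionNet()
imgs = torch.randn(10, 2, 3, 64, 64)
out_motion, char_code = model(imgs, torch.randn(10, 2, 6), torch.randn(10, 2, 7))
print(out_motion.shape, char_code.shape)  # (10, 2, 7) (2, 8)
```

The recurrent core is the key design choice in this sketch: by carrying interaction history across time steps, the estimate of intrinsic properties can improve as the robot stirs or scoops, which is the essence of active perception.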
This work was published in IEEE RA-L, presented at ICRA 2021, and received the ICRA 2021 Best Paper Award in Cognitive Robotics [1].
[1] Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori and Shigeki Sugano, "How to Select and Use Tools?: Active Perception of Target Objects Using Multimodal Deep Learning," IEEE Robotics and Automation Letters (RA-L), vol. 6, no. 2, pp. 2517–2524, 2021, doi: 10.1109/LRA.2021.3062004.
Please also check the preliminary work, available on arXiv [2]. Additionally, Waseda University, under Moonshot Program Goal 3, will demonstrate this work at ICRA, so please check it out at the venue!
[2] Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki Sugano and Tetsuya Ogata, "Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot," arXiv preprint arXiv:2309.14837, 2023. arxiv.org/abs/2309.14837
(Writer: Namiko Saito, 15 March 2024)