Namiko SAITO - Robotics & AI

Journal Articles (Peer-reviewed)

5. Namiko Saito, Kiyoshi Yoshinaka, Shigeki Sugano and Ryosuke Tsumura, ``Autonomous scanning motion generation adapted to individual differences in abdominal shape for robotic fetal ultrasound,'' Advanced Robotics, vol. 38, no. 3, pp. 182-191, 2024, doi: 10.1080/01691864.2024.2315058.

4. Namiko Saito, Takumi Shimizu, Tetsuya Ogata and Shigeki Sugano, ``Utilization of Image/Force/Tactile Sensor Data for Object-Shape-Oriented Manipulation: Wiping Objects With Turning Back Motions and Occlusion,'' in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 968-975, 2022, doi: 10.1109/LRA.2021.3136657. (IF: 3.741)

Presented at the 2022 IEEE International Conference on Robotics and Automation (ICRA 2022), 2022. (in Philadelphia, USA) Travel Award [[video]]

3. Tamon Miyake, Tomoyuki Suzuki, Satoshi Funabashi, Namiko Saito, Mitsuhiro Kamezaki, Takahiro Shoda, Tsutomu Saigo and Shigeki Sugano, ``Bayesian Estimation of Model Parameters of Equivalent Circuit Model for Detecting Degradation Parts of Lithium-Ion Battery,'' in IEEE Access, vol. 9, pp. 159699-159713, 2021, doi: 10.1109/ACCESS.2021.3131190.

2. Namiko Saito, Tetsuya Ogata, Hiroki Mori, Shingo Murata and Shigeki Sugano, ``Tool-use Model to Reproduce the Goal Situations Considering Relationship among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks,'' Frontiers in Robotics and AI, vol. 8, September 2021, doi: 10.3389/frobt.2021.748716, ISSN 2296-9144. (IF: 4.33) [[video]]

1. Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori and Shigeki Sugano, ``How to Select and Use Tools?: Active Perception of Target Objects Using Multimodal Deep Learning,'' in IEEE Robotics and Automation Letters (RA-L), vol. 6, no. 2, pp. 2517-2524, April 2021, doi: 10.1109/LRA.2021.3062004. (IF: 3.608) (arxiv.org/abs/2106.02445)

Presented at the 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021. (in Xi'an, China (virtual), acceptance rate: 43.59%) Best Paper Award in Cognitive Robotics [[video]]

International Conferences (Peer-reviewed)

7. Marina Y. Aoyama, Joao Moura, Namiko Saito and Sethu Vijayakumar, ``Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation,'' In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), 2024. (in Yokohama, Japan) (arxiv.org/abs/2309.04640)

6. Namiko Saito, Joao Moura, Tetsuya Ogata, Marina Y. Aoyama, Shingo Murata, Shigeki Sugano and Sethu Vijayakumar, ``Structured Motion Generation with Predictive Learning: Proposing Subgoal for Long-Horizon Manipulation,'' In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA 2023), pp. 9566-9572, 2023. (in London, U.K., acceptance rate: 43.04%) doi: 10.1109/ICRA48891.2023.10161046. [[video]]

5. Namiko Saito, Danyang Wang, Tetsuya Ogata, Hiroki Mori and Shigeki Sugano, ``Wiping 3D-objects using Deep Learning Model based on Image/Force/Joint Information,'' In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), pp. 10152-10157, 2020. (in Las Vegas, USA (online), acceptance rate: 47%) doi: 10.1109/IROS45743.2020.9341275. [[video]]

4. Peizhi Zhang, Namiko Saito, Hiroki Shigemune and Shigeki Sugano, ``Development of a Lightweight Deformable Surface Mechanism (DSM) by Applying Shape-Memory Alloy (SMA) and the Sponge for Handling Objects,'' In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020), pp. 1406-1411, 2020. (in Toronto, Canada (online)) doi: 10.1109/SMC42975.2020.9283459.

3. Namiko Saito, Nguyen Ba Dai, Tetsuya Ogata, Hiroki Mori and Shigeki Sugano, ``Realtime Liquid Pouring Motion Generation: End-to-End Sensorimotor Coordination for Unknown Liquid Dynamics Trained with Deep Neural Networks,'' In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO 2019), pp. 1077-1082, 2019. (in Dali, China, acceptance rate: 54%) doi: 10.1109/ROBIO49542.2019.8961718. [[video]]

2. Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata and Shigeki Sugano, ``Tool-use Model Considering Tool Selection by a Robot using Deep Learning,'' In Proceedings of the 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids 2018), pp. 270-276, 2018. (in Beijing, China, acceptance rate for oral presentation: 19.5%) doi: 10.1109/HUMANOIDS.2018.8625048. [[video]]

1. Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata and Shigeki Sugano, ``Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning,'' In Proceedings of the IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob 2018), pp. 1-6, 2018. (in Tokyo, Japan) doi: 10.1109/DEVLRN.2018.8761029. (arxiv.org/abs/1809.08613)

Preprints (arXiv)

Namiko Saito, Joao Moura, Hiroki Uchida, and Sethu Vijayakumar, ``Latent Object Characteristics Recognition with Visual to Haptic-Audio Cross-modal Transfer Learning'', Mar 2024. arxiv.org/abs/2403.10689

Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki Sugano, and Tetsuya Ogata, ``Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot,'' Sep 2023. arxiv.org/abs/2309.14837 

Workshop Papers at International Conferences

2. Marina Y. Aoyama, Joao Moura, Namiko Saito and Sethu Vijayakumar, ``Few-shot Semi-supervised Learning From Demonstration for Generalisation of Force-Based Motor Skills Across Objects Properties,'' ICRA 2023 workshop ``Embracing Contacts: Making Robots Physically Interact with Our World,'' June 2, 2023. (in London, U.K.)

1. Marina Y. Aoyama, Joao Moura, Namiko Saito and Sethu Vijayakumar, ``Learning to adapt behaviours in force-based manipulation of soft object,'' IROS 2022 workshop, October 23, 2022. (in Kyoto, Japan)

Domestic Conferences in Japan

11. Ayuna Kubo, Namiko Saito, Kanata Suzuki, Hiroshi Ito, Tetsuya Ogata and Shigeki Sugano, ``Deep Learning Model for Multi-Task Motion Generation by Sharing Object Features,'' JSME Robotics and Mechatronics Conference (ROBOMECH 2023), 2A1-H05, Nagoya, June 30, 2023. (in Japanese)

10. Mayu Hiramoto, Namiko Saito, Tetsuya Ogata and Shigeki Sugano, ``Realtime Active Perception Model with Attention Mechanism for a Cooking Robot: Stirring Motions Adapted to Ingredient Characteristics,'' the 84th National Convention of IPSJ, 4V-08, online, March 4, 2022. Student Encouragement Award (in Japanese)

9. Takumi Shimizu, Namiko Saito, Tetsuya Ogata and Shigeki Sugano, ``Learning Wiping Motions on 3D Objects Using Tactile/Motion Information,'' the 83rd National Convention of IPSJ, 2Q-04, online, March 18, 2021. (in Japanese)

8. Tsubasa Hizono, Namiko Saito, Hiroki Mori, Shingo Murata, Hayato Idei, Tetsuya Ogata and Shigeki Sugano, ``Proposal of a Curiosity-Driven Action Selection Model Based on Prediction Uncertainty and Prediction Change Using an RNN,'' the 9th Annual Meeting of the Japanese Society for Developmental Neuroscience, 2020. (in Japanese)

7. Koki Oguri, Osamu Shouno, Namiko Saito, Tetsuya Ogata and Shigeki Sugano, ``Proposal and Evaluation of a Video Prediction Model Using 3DCNN and Temporal Skip Connections,'' the 38th Annual Conference of the Robotics Society of Japan, 1B1-01, online, October 9, 2020. (in Japanese)

6. Namiko Saito, Tetsuya Ogata, Hiroki Mori and Shigeki Sugano, ``Deep Learning Model for Tool Use by a Cooking Robot: Tool Selection and Handling Based on Active Perception of Ingredients,'' JSME Robotics and Mechatronics Conference (ROBOMECH 2020), 1P1-G03, May 28, 2020. (in Japanese)

5. Namiko Saito, Yuheng Wu, Tetsuya Ogata, Hiroki Mori, Danyang Wang, Pinju Yang and Shigeki Sugano, ``Deep Learning Model for Tool Selection and Use by a Cooking Robot: Serving Motions According to the Arrangement of Tools and Ingredients,'' the 82nd National Convention of IPSJ, 5U-08, Kanazawa Institute of Technology, March 6, 2020. (in Japanese)

4. Danyang Wang, Namiko Saito, Tetsuya Ogata and Shigeki Sugano, ``Construction of a Deep-Learning-Based Robot Model for Wiping Complex-Shaped Objects,'' the 82nd National Convention of IPSJ, 5U-09, Kanazawa Institute of Technology, March 6, 2020. Student Encouragement Award (in Japanese)

3. Dai Ba Nguyen, Namiko Saito, Tetsuya Ogata and Shigeki Sugano, ``Liquid Estimation Model for Robot Pouring Skill using Active Motion and Deep Learning,'' the 36th Annual Conference of the Robotics Society of Japan, 2018.

2. Namiko Saito, Kitae Kim, Dai Ba Nguyen, Shingo Murata, Tetsuya Ogata and Shigeki Sugano, ``Tool-use Model Considering Selecting Tool by Deep Learning,'' the 32nd Annual Conference of the Japanese Society for Artificial Intelligence, 2018.

1. Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata and Shigeki Sugano, ``Tool-Use Model Considering the Relationship Between Tools and Objects Using Deep Learning,'' JSME Robotics and Mechatronics Conference (ROBOMECH 2018), 1A1-D07, June 4, 2018. (in Japanese)