Institution:
Researcher (Full-time)
MediaFutures, Department of Information Science and Media Studies, University of Bergen
Research Fellow
Interdisciplinary ICT Research Center for Cyber and Real Spaces, Research Institute of Electrical Communication,
Tohoku University
Research Topics: Human-Computer Interaction, Robotic Displays, Shape-Changing Interfaces, Spatial Interfaces
Contact: Email: yuki.onishi[at]uib.no
yukionishi87[at]gmail.com
Twitter / LinkedIn / Google Scholar / MediaFutures / ICD-Lab
Resume: CV (updated 2023.05)
2025.01-Present Researcher at MediaFutures, Department of Information Science and Media Studies, University of Bergen, Norway
2023.06-Present Research Fellow at Research Institute of Electrical Communication, Tohoku University
2023.05-2024.05 Postdoctoral Fellow at School of Computing and Information Systems, Singapore Management University
2023.04-2023.05 Project Researcher at Research Institute of Electrical Communication, Tohoku University
2020.04-2023.03 Ph.D. candidate in Information Sciences at Graduate School of Information Sciences, Tohoku University
2018.04-2020.03 M.Sc., Graduate School of Information Sciences, Tohoku University
2014.04-2018.03 Bachelor of Engineering, Tohoku University
2011.04-2014.03 Kichijo Girls' High School
2020.04-2023.03 Japan Society for the Promotion of Science (JSPS), Research Fellowship (DC1)
2018.06-2023.03 Tohoku University Science Ambassador
2021.12 VRST'21 Student Volunteer Chair
2020.04-2021.09 Teaching assistant, Basics of Information, Miyagi University of Education
2020.04-2020.09 Teaching assistant, Basics of Computer Science A, Tohoku University
2019.05-2019.05 ACM CHI 2019 Student Volunteer
2018.05-2018.05 Japan-Russia Student Forum, Japan Student Representative
2017.11-2017.11 ACM SIGGRAPH Asia 2017 Student Volunteer
2017.10-2017.10 World Festival of Youth and Students, Japan Delegate
Generative AI tools can provide people with the ability to create virtual environments and scenes with natural language prompts. Yet, how people will formulate such prompts is unclear, particularly when they inhabit the environment that they are designing. For instance, a person might say, “Put a chair here,” while pointing at a location. If such linguistic and embodied features are common in people’s prompts, we need to tune models to accommodate them. In this work, we present a Wizard of Oz elicitation study with 22 participants, where we studied people’s implicit expectations when verbally prompting such programming agents to create interactive VR scenes. Our findings show that when people prompted the agent, they had several implicit expectations: (1) agents should have an embodied knowledge of the environment; (2) they should understand embodied prompts from users; (3) they should recall previous states of the scene and the conversation; and (4) they should have a commonsense understanding of objects in the scene. Further, we found that participants prompted differently when prompting in situ (i.e., within the VR environment) versus ex situ (i.e., viewing the VR environment from the outside). To explore how these lessons could be applied, we designed and built Ostaad, a conversational programming agent that allows non-programmers to design interactive VR experiences that they inhabit. Based on these explorations, we outline new opportunities and challenges for conversational programming agents that create VR environments.
DIS'24 Paper: [DOI]
Gaining extensive practice in anesthesia procedures is critical for medical students. However, current high-fidelity simulators for clinical anesthesia practice limit such vital opportunities due to their professional-level functions, large size, and high installation costs. In this paper, we propose Anesth-on-the-Go, a portable game-based anesthesia simulator that allows medical students to repeatedly practice clinical anesthesia procedures anywhere, anytime on a conventional personal computer. The simulator allows students to manipulate controls appropriately for various anesthesia procedures according to pre-determined surgical scenarios. Based on iterative interviews with clinical anesthesia instructors, we designed a simulation interface with four key components: 1) a monitor, 2) a decision-making board, 3) a surgical field view, and 4) a communication view. As an initial evaluation, we introduced the simulator into the training curriculum for medical students and verified its effect on their proficiency in subsequent training with high-fidelity simulation. The results show that individual prior training with our simulator improved each student’s overall performance (i.e., score) and stimulated discussion among teammates during the subsequent formal training. The findings also indicate that the simulator experience can facilitate post-training reflection.
CHI'24 Late Breaking Work: [DOI] [Video Preview] [Presentation]
Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations on virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users, permitting accurate and fatigue-free input while they walk around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing, in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups.
We propose WaddleWalls, a room-scale interactive partitioning system using a swarm of robotic partitions that allows occupants to interactively reconfigure workspace partitions to satisfy their privacy and interaction needs. The system can automatically arrange the partition layout designed by the user on demand. The user specifies the target partition's position, orientation, and height using the controller's 3D manipulations. In this work, we discuss the design considerations of the interactive partition system and implement WaddleWalls' proof-of-concept prototype assembled from off-the-shelf materials. We demonstrate the functionalities of WaddleWalls through several application scenarios in an open-plan office environment. Through an initial user evaluation and interviews with experts, we clarify the feasibility, potential, and future challenges of WaddleWalls.
UIST'22 Technical Paper: [DOI] [Video] [Video Preview]
We propose a self-actuated stretchable partition whose physical height, width, and position can dynamically change to create secure workplaces (e.g., against privacy and infection risks) without inhibiting group collaboration. To support secure workplace layouts and space reconfigurations, the partitions' height, length, and position are adapted automatically and dynamically. We implement a proof-of-concept prototype with a height-adjustable stand, a roll-up screen, and a mobile robot. We then show example application scenarios to discuss potential future actuated-territorial offices.
CHI'21 Late Breaking Work: [DOI] [Video] [Video Preview] [Presentation]
We explore BouncyScreen, an actuated 1D display system that enriches indirect interaction with a virtual object through pseudo-haptic feedback enhanced by the screen's physical movements. When the user manipulates a virtual object using virtual reality (VR) controllers, the screen moves in accordance with the virtual object. We configured a proof-of-concept prototype of BouncyScreen with a flat screen mounted on a mobile robot. A psychophysical study confirmed that the screen's synchronous physical motions significantly enhance the reality of the interaction and the sense of presence.
IEEE VR'21 Conference Paper: [DOI] [Video] [Video Preview] [Presentation]
The 24th Annual Conference of the Virtual Reality Society of Japan (2019): [PDF]
The Living Wall Display presents interactive content on a mobile wall screen that moves in concert with the content animation. In addition to the audio-visual information, the display dynamically changes its position and orientation in response to content animation triggered by user interactions, augmenting the interaction experience. We implement three proof-of-concept prototypes (FPS Shooting Game, Drive Simulation, Baseball Pitching Simulation) that represent the pseudo force impact of the interactive content using physical screen movement.
We present “Be Bait!”, a unique virtual fishing experience that allows the user to become the bait by lying on a hammock instead of holding a fishing rod. We implement a hammock-based locomotion method with haptic feedback mechanisms. In our demonstration, users can enjoy exploring a virtual underwater world and fighting fish in direct and intuitive ways.
Tianyi Zhang, Colin Au Yeung, Emily Aurelia, Yuki Onishi, Neil Chulpongsatorn, Jiannan Li, and Anthony Tang. 2025. Prompting an Embodied AI Agent: How Embodiment and Multimodal Signaling Affects Prompting Behaviour. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). (to appear)
Setareh Aghel Manesh, Tianyi Zhang, Yuki Onishi, Kotaro Hara, Scott Bateman, Jiannan Li, and Anthony Tang. 2024. How People Prompt Generative AI to Create Interactive VR Scenes. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS'24), pp. 2319-2340, 22 pages.
Ryota Gomi, Kazuki Takashima, Yuki Onishi, Kazuyuki Fujita, and Yoshifumi Kitamura. 2023. UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR. Proc. ACM Hum.-Comput. Interact. 7, ISS, Article 443 (December 2023), 22 pages.
Yuki Onishi, Kazuki Takashima, Shoi Higashiyama, Kazuyuki Fujita, and Yoshifumi Kitamura. WaddleWalls: Room-scale Interactive Partitioning System using a Swarm of Robotic Partitions. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST '22). ACM, Article 29, 1–15. (Acceptance rate 26.3%)
Yuki Onishi, Kazuki Takashima, Kazuyuki Fujita, Yoshifumi Kitamura, BouncyScreen: Physical Enhancement of Pseudo-Force Feedback, Proceedings of 2021 IEEE Virtual Reality and 3D User Interfaces (IEEE VR'21), Online, March 2021, pp. 363-372. (Acceptance rate 23.9%)
Yuki Onishi, Anthony Tang, Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura, Exploring Living Wall Display that Physically Augments Interactive Content, Transactions of the Virtual Reality Society of Japan, Vol.24, No.3, pp. 197-207, September 2019
Yuki Onishi, Eiko Onishi, Kazuki Takashima, and Yoshifumi Kitamura. Anesth-on-the-Go: Designing Portable Game-based Anesthetic Simulator for Education. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 2024, Honolulu, USA. 7 pages.
Yuki Onishi, Kazuki Takashima, Kazuyuki Fujita, Yoshifumi Kitamura, Self-actuated Stretchable Partitions for Dynamically Creating Secure Workspaces, In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21), Article 294, 1-6, Online, May 2021. (Acceptance rate 39.0%)
Shotaro Ichikawa, Yuki Onishi, Daigo Hayashi, Akiyuki Ebi, Isamu Endo, Aoi Suzuki, Anri Niwano, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura, Be Bait!: Hammock-Based Interaction for Enjoyable Underwater Swimming in VR, Asian CHI Symposium: Emerging HCI Research, Glasgow, UK, May 2019
Yuki Onishi, Anthony Tang, Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura, Perception of Spatial Information of Animated Content on Physically Moving Display, Asian CHI Symposium: Emerging HCI Research, Glasgow, UK, May 2019
Shotaro Ichikawa, Yuki Onishi, Daigo Hayashi, Akiyuki Ebi, Isamu Endo, Aoi Suzuki, Anri Niwano, Yoshifumi Kitamura, Be Bait!: A Unique Experience with Hammock-based Underwater Locomotion Method, Proceedings of 2019 IEEE Virtual Reality and 3D User Interfaces, Osaka, Japan, March 2019, pp. 1315-1316 (Invited Demonstration)
Yuki Onishi, Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura, The Living Wall Display: Physical Augmentation of Interactive Content Using an Autonomous Mobile Display, Proceedings of ACM SIGGRAPH Asia 2018 Emerging Technologies, Article 15, 1-2, Tokyo, Japan, December 2018 (Acceptance rate 18.4%)
"You'll Never Escape Your Cubicle Now That The Walls Can Follow You" GIZMODO, 2022.11
"A partition that moves around and stretches on its own: Tohoku University develops a system that ensures social distancing and privacy" (in Japanese), ITmedia NEWS, 2021.8
"Tohoku University and others announce a self-propelled display that dynamically rotates and moves back and forth in sync with virtual animation" (in Japanese), Seamless, 2018.11
SPAJAM 2019 Sendai Preliminary Round, Grand Prize, work: Tasuwake (タスワケ), June 2019
DATEAPPS 2019, IT Division, Grand Prize, work: GLIP, February 2019 [Link]
The 24th International collegiate Virtual Reality Contest (IVRC 2018), Final Round, Dospara Award and Unity Award, work: Be Bait! 〜求めよ,さらば食べられん〜, November 2018 [Link]
The 24th International collegiate Virtual Reality Contest (IVRC 2018), Sendai Preliminary Round, General Student Division, 5th Place, work: Be Bait! 〜求めよ,さらば食べられん〜, September 2018
DATEAPPS 2018, IT Division, Grand Prize, work: PICK, February 2018 [Link]
Yuki Onishi, Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura, Extending Content Expression Using Translation and Rotation of a Self-Propelled Display, Entertainment Computing Symposium 2017, pp. 383-384, Sendai, September 2017. Best Demo Award (Audience Vote Category)
Tohoku University Global Leader Development Program, Certified Global Leader, August 2017
All Rights Reserved. Yuki Onishi, 2020.