Research
Collision Avoidance in Immersive VR (2016-)
Video see-through head-mounted displays (HMDs), which provide a highly immersive experience in virtual space, make interaction between the user and the real space difficult because they cover the user's entire field of view. We propose two methods that support interaction with the real world while playing an immersive VR game, even while walking, with as little loss of immersion as possible. The first method superimposes a 3D point cloud of the real space within a certain distance of the user onto the virtual space shown in the HMD. The second method places familiar objects, such as known furniture from the user's own room, in the virtual space; the user traces these familiar objects as subgoals and can thereby reach the goal. A usability test showed that deploying subgoals in the virtual space tends to provide better spatial awareness of the real space without reducing immersion in VR.
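The first method can be sketched roughly as follows; the function name, data shapes, and radius value are illustrative assumptions, not the actual implementation:

```python
import math

# Hypothetical sketch: keep only the real-space points within a given
# radius of the user, so that nearby obstacles can be superimposed on
# the virtual scene shown in the HMD.
def filter_nearby_points(points, user_pos, radius):
    """Return the (x, y, z) points within `radius` meters of user_pos."""
    return [p for p in points if math.dist(p, user_pos) <= radius]
```

In practice the point cloud would come from the HMD's depth camera each frame, and the surviving points would be rendered semi-transparently over the game scene.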
- Kohei Kanamori, Nobuchika Sakata, Tomu Tominaga, Yoshinori Hijikata, Kensuke Harada, and Kiyoshi Kiyokawa: "Obstacle Avoidance Method in Real Space for Virtual Reality Immersion", In Proc. of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2018, pp.80-89, 2018.
- Kohei Kanamori, Nobuchika Sakata, Tomu Tominaga, Yoshinori Hijikata, Kiyoshi Kiyokawa, and Kensuke Harada: "Walking Support in Real Space Using Social Force Model When Wearing Immersive HMD", In Adjunct Proc. of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2018, pp.250-253, 2018.
- Kohei Kanamori, Nobuchika Sakata, Tomu Tominaga, Yoshinori Hijikata, and Kensuke Harada: "Study of Walking Support Method in Real Space When Immersed in Virtual Space", In Proc. of the Asia Pacific Workshop on Mixed and Augmented Reality (APMAR 2017), 2017.
Diminished Robot Affordance (2016-)
Human-Robot Collaboration (HRC), in which a robot and a human carry out work together, has been attracting attention in the field of human-robot interaction. In HRC, several safety measures are required because the human and the robot work collaboratively in the same workspace at close range. In this research, we consider that human help can be elicited by intentionally diminishing the robot's affordance. We therefore suppress the robot's appearance and movements using AR techniques.
- Nobuchika Sakata, Masahiro Tada, and Kensuke Harada: "Removing Fear of Robot Movement with AR Awareness", The First International Workshop on Mixed and Augmented Reality Innovations (MARI) 2016, Tasmania, Australia, November 2016.
Noise-Canceling HMD (2015-)
This project aims to develop "visual" noise canceling to reduce discomfort. As one step toward this goal, we proposed a concept that changes the apparent size of a person in a video see-through system in order to manipulate interpersonal distance.
- Masaki Maeda, Tomu Tominaga, Yoshinori Hijikata, and Nobuchika Sakata: "Controlling Virtual Body Size to Reduce Discomfort Caused by Inappropriate Interpersonal Distance", In Proc. of the International Symposium on Wearable Computers 2016 (ISWC2016), pp.192-195, Heidelberg, Germany, September 2016.
- Masaki Maeda and Nobuchika Sakata: "A Basic Study of Visually Extending Interpersonal Distance via Virtual Body Size", Interaction 2016, pp.47-53, Tokyo, March 2016. (in Japanese)
- Nobuchika Sakata, Masaki Maeda, Tomu Tominaga, and Yoshinori Hijikata:"Controlling the interpersonal distance using the virtual body size", Transactions of the Virtual Reality Society of Japan Vol.22, No.2, pp.209-216, 2017.
Floor Interaction (2011-2017)
Wearable projection systems enable hands-free viewing on large projected screens and eliminate the need to hold a device. These systems need a surface for projection and interaction. However, if a wall is chosen as the projection surface, users must stand in front of it whenever they wish to access information. In this project, we propose a wearable input/output system composed of a mobile projector, a depth sensor, and a gyro sensor. It allows the user to perform "select" and "drag" operations with foot and fingertip gestures on an image projected on the floor, and it supports more efficient GUI operations on the floor by combining hand and toe input. We also present design guidelines, identify problems in designing hand/toe-to-floor interaction, and propose an input method that uses both a finger and a toe.
- Nobuchika Sakata, Fumihiro Sato, Tomu Tominaga, and Yoshinori Hijikata: "Extension of Smartphone by Wearable Input/Output Interface with Floor Projection", In Proc. of Collabtech2017, pp.168-181, Saskatoon, Canada, August 2017.
- Daiki Matsuda, Keiji Uemura, Nobuchika Sakata, and Shogo Nishida: "Toe Input Using a Mobile Projector and Kinect Sensor", International Symposium on Wearable Computers (ISWC2012), pp.48-51, Newcastle, UK, June 2012. (acceptance rate: 18%)
- Nobuchika Sakata, Fumihiro Sato, Tomu Tominaga, and Yoshinori Hijikata: "An Investigation of the Usefulness of a Wearable Hand/Foot Input Interface", Transactions of the Institute of Systems, Control and Information Engineers, accepted for publication. (in Japanese)
Estimating head/body posture from wearable sensors(2015-)
We propose a method of estimating head posture with a single depth sensor dangling from the neck. The sensor, located in front of the chest, measures a point cloud of the lower chin, and a center of gravity is extracted from the measured point cloud. Using regression analysis between the chin's center-of-gravity position and the head posture, the head posture can be estimated with an accuracy of about 5 degrees.
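The pipeline above can be sketched as follows; the coordinate convention and the simple one-variable regression are our assumptions, and the actual model may differ:

```python
# Hypothetical sketch: centroid of the chin point cloud, then a
# least-squares fit from the centroid's vertical coordinate to head pitch.
def centroid(points):
    """Center of gravity of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def estimate_pitch(chin_points, a, b):
    """Map the chin centroid's vertical coordinate to a pitch angle (degrees)."""
    _, y, _ = centroid(chin_points)
    return a * y + b
```

The coefficients `a` and `b` would be calibrated offline from paired measurements of chin-centroid position and ground-truth head posture.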
Adding Search Queries to Picture Lifelogs for Memory Retrieval(2015-2017)
A picture lifelog is a type of lifelog that consists of pictures, mainly taken by the user. Recently, users have been able to create picture lifelogs easily because many portable devices, such as smartphones, have cameras. When a user looks at a picture in their lifelog, it is sometimes difficult to recall the events related to it. We therefore proposed combining search queries with a picture lifelog to support memory retrieval; those queries imply what the user was thinking at the time. We investigated whether search queries enable a user to recall their thoughts about pictures in the lifelog. The results reveal that displaying a picture together with the search queries issued around the time it was taken tends to improve recall more than showing its time, its location, or emails sent during that period.
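A minimal sketch of the query-attachment step, assuming timestamped queries and photos; the data shapes and the window size are illustrative choices, not the study's actual parameters:

```python
# Hypothetical sketch: collect the search queries issued within a time
# window around the moment a photo was taken, so they can be displayed
# alongside the photo as memory cues.
def queries_for_photo(photo_time, query_log, window_s=1800):
    """query_log is a list of (unix_time, query_string) pairs."""
    return [q for t, q in query_log if abs(t - photo_time) <= window_s]
```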
- Akira Kubota, Tomu Tominaga, Yoshinori Hijikata, and Nobuchika Sakata: "Adding Search Queries to Picture Lifelogs for Memory Retrieval", In Proc. of the 2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI'16), pp.689-694, October 2016.
- Nobuchika Sakata, Akira Kubota, Tomu Tominaga, and Yoshinori Hijikata: "Memory-Retrieval Support Using Search Queries in Picture Lifelogs", Interaction 2017, pp.10-16, March 2017. (in Japanese)
- Nobuchika Sakata, Akira Kubota, and Yoshinori Hijikata: "Memory-Retrieval Support Using Search Queries in Picture Lifelogs", IPSJ Journal, Vol.??, No.?, pp.??-??, accepted for publication. (in Japanese)
Sharing Search Queries in a Small Community (2014-2015)
Queries entered into search engines such as Google serve a variety of purposes, such as satisfying user interests and finding solutions. Some researchers focus on search queries in order to extend user interaction and information support, because queries reflect user intentions and circumstances. We assume that sharing search queries can be applied not only in the online virtual world but also in the real world; in particular, we focus on small communities. Sharing search queries, which reveal a user's interests and intentions, increases opportunities for conversation in a closed community.
Collab Hand (2009-2012)
This study focuses on remote collaboration in which a local worker manipulates real objects under the guidance of a remote instructor. The goal is to achieve an interaction that allows the remote instructor to give the local worker clear and accurate instructions. Specifically, a ProCam system, consisting of a camera and a projector, is placed at the work location, and a tabletop system, consisting of a display, a depth sensor, and a camera, is placed at the remote instructor's location. The image captured by the ProCam system at the work location is shown on the tabletop display at the remote location. The instructor's arm is then extracted from the image of the instruction location by the depth sensor (Kinect), and overlaying this image on the work environment enables communication that carries embodied information such as arm movements and pointing gestures. Furthermore, we propose projecting a demagnified image of the instructor's arm on the worker's side, which makes detailed work possible.
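The demagnification step might look like the following sketch; scaling the arm pixels toward an anchor point is our assumption of how the projected arm image is shrunk:

```python
# Hypothetical sketch: shrink the instructor's arm image toward an anchor
# point before projecting it at the worker's side, enabling finer pointing.
def demagnify(arm_pixels, scale=0.5, anchor=(0.0, 0.0)):
    """Scale a list of (x, y) pixel coordinates about `anchor` by `scale`."""
    ax, ay = anchor
    return [(ax + (x - ax) * scale, ay + (y - ay) * scale) for x, y in arm_pixels]
```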
ClippingLight (2011-2013)
We present a novel method for taking photos with a hand-held camera. Cameras are being used for new purposes in daily life, such as augmenting human memory or scanning visual markers (e.g., QR codes), and opportunities to take snapshots are increasing. However, taking snapshots with today's hand-held cameras is troublesome: the viewfinder forces the user to see the real space through the device, and controlling the zoom level while pressing the shutter-release button requires complicated operation. We therefore propose ClippingLight, a method that combines a projection viewfinder with tilt-based zoom control and enables snapshots to be taken with little effort. We implemented this method in a prototype real-world projection camera and conducted a user study to confirm the effect of ClippingLight when photos are taken one after another. We found that ClippingLight is more comfortable and requires less effort than a typical present-day camera when the user takes a photo quickly.
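The tilt-based zoom control could be sketched as a linear mapping from tilt angle to zoom level; the angle range and zoom range below are illustrative assumptions, not the prototype's actual parameters:

```python
# Hypothetical sketch: map the camera's tilt (twist) angle linearly to a
# zoom level, clamped to the device's supported range.
def tilt_to_zoom(tilt_deg, min_zoom=1.0, max_zoom=4.0, max_tilt_deg=45.0):
    t = max(-max_tilt_deg, min(max_tilt_deg, tilt_deg))  # clamp the tilt
    frac = (t + max_tilt_deg) / (2 * max_tilt_deg)       # 0.0 .. 1.0
    return min_zoom + frac * (max_zoom - min_zoom)
```

Because the zoom follows the wrist directly, the user never has to move a finger off the shutter-release button.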
- Yuuki Ueba, Yasuhiro Kajiwara, Nobuchika Sakata, and Shogo Nishida: "ClippingLight: A Quick and Easy Shooting Method with a Projection Viewfinder and Tilt-Based Zoom", IPSJ Journal, Vol.54, No.4, pp.1489-1497, 2013. (in Japanese)
- Yuuki Ueba, Yasuhiro Kajiwara, Keiji Uemura, Nobuchika Sakata, and Shogo Nishida: "ClippingLight: A Quick and Easy Shooting Method Combining a Projection Viewfinder and Tilt-Based Zoom", Interaction 2012, pp.81-88, Tokyo. (in Japanese)
- Yasuhiro Kajiwara, Keisuke Tajimi, Keiji Uemura, Nobuchika Sakata, and Shogo Nishida: "ClippingLight: A Method for Easy Snapshots with Projection Viewfinder and Tilt-Based Zoom Control", In Proc. of Augmented Human, Tokyo, Japan, March 2011.
Combining an Immersive HMD and a Touchpad (2013-2014)
Existing systems for operating AR objects allow direct and intuitive manipulation. For example, an extended-arm method makes it possible to operate AR objects positioned out of reach. However, problems such as reduced flexibility and accuracy arise when the AR object is far from the operator. We therefore propose a relative input method that lets the user operate an AR object on a touchpad while looking at the real space in which the AR objects exist. If the input amount is treated as absolute, it is hard to operate while looking at that real space, so in our interface the input is basically treated as relative; in some cases, however, the touchpad input should be regarded as absolute instead. In addition, by overlaying miniatures of the AR objects on the touchpad and manipulating the miniatures at hand, our interface makes it possible to operate an AR object precisely from far away. We thus study the relationship between the user's point of view (PoV) and absolute/relative input when the user wears an HMD and handles a touchpad in an AR environment. The results confirm that usability can be improved by changing the user's PoV depending on the quality of the real-world 3D data and by switching between absolute and relative input depending on the PoV.
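The two input semantics can be illustrated with a small sketch; the function, the normalized coordinates, and the workspace size are our hypothetical examples, not the implemented interface:

```python
# Hypothetical sketch: in relative mode a touch delta nudges the AR object
# from its current position; in absolute mode the normalized touch
# coordinate maps straight onto the object's workspace.
def apply_touch(obj_pos, touch, mode, gain=1.0, workspace=(10.0, 10.0)):
    x, y = obj_pos
    tx, ty = touch  # normalized touchpad coordinates in [0, 1]
    if mode == "relative":
        return (x + gain * tx, y + gain * ty)
    if mode == "absolute":
        return (tx * workspace[0], ty * workspace[1])
    raise ValueError(f"unknown mode: {mode}")
```

Relative mode suits fine adjustment while the user watches the real scene; absolute mode suits the miniature overlay, where the touchpad itself depicts the workspace.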
- Ryohei Nagashima, Nobuchika Sakata, and Shogo Nishida: "Study of the Touchpad Interface to Manipulate AR Objects", In Proc. of ICAT2013, pp.78-84, Tokyo, Japan, December 2013.
- Nobuchika Sakata, Ryohei Nagashima, and Shogo Nishida: "A Tablet Interface for Laying Out AR Objects -Outlook of Relationship with Smartphone and AR-", In Proc. of IDW2013, Sapporo, Japan, December 2013.
- Ryohei Nagashima, Nobuchika Sakata, and Shogo Nishida: "A Study of Input Methods for a Touchpad Interface for Manipulating AR Objects", Transactions of the Virtual Reality Society of Japan, Vol.19, No.2, pp.265-274, 2014. (in Japanese)
- Nobuchika Sakata, Ryohei Nagashima, and Shogo Nishida: "Long-Distance Manipulation in AR Space Using a Touch Panel: The Future of Smartphones and AR", Monthly Display, December 2012 issue, pp.58-64 (ISSN 1341-3961). (in Japanese)
Automatic Termination and Route Guide for 3D Scanning Based on Area Limitation (2012-2014)
With existing hand-held 3D scanning systems, users have to estimate which spots remain unmeasured in order to decide a scanning route, and they must terminate scanning themselves by repeatedly watching the scanning progress. These operations impose a burden on users. In this project, we propose a novel scanning system that provides route guidance by limiting the scanning area at the outset.
- Yuuki Ueba, Nobuchika Sakata, and Shogo Nishida: "3D Interactive Scanning Based on Tight Voxel-Region Setting That Provides Navigation for Efficient Measurement", Transactions of the Virtual Reality Society of Japan, Vol.19, No.3, pp.339-347, 2014. (in Japanese)
- Yuuki Ueba, Keiji Uemura, Nobuchika Sakata, and Shogo Nishida: "3D Interactive Modeling Based on Area Limitation", Interaction 2014, pp.84-91, Tokyo, March 2014. (in Japanese)
- Yuuki Ueba, Nobuchika Sakata, and Shogo Nishida: "Automatic Termination and Route Guide for 3D Scanning Based on Area Limitation", In Proc. of 3DUI2014, pp.171-172, Minneapolis, USA, March 2014.
Hip-Mounted Projector (2009-2011)
This research aims to stabilize mobile projection on the floor, even while walking and running. Specifically, we adopted a method of estimating the displacement of the waist using only the acceleration and gyro sensors built into the mobile projector.
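A simplified sketch of the idea, assuming naive double integration of acceleration and a counter-shift of the projected image; the actual method exploits the periodicity of walking and is more robust than this toy version:

```python
# Hypothetical sketch: double-integrate acceleration samples to estimate
# waist displacement, then shift the projected image the opposite way so
# the floor image appears to stay in place.
def integrate_displacement(accels, dt):
    """Naive double integration of acceleration (m/s^2) over steps of dt (s)."""
    v = d = 0.0
    for a in accels:
        v += a * dt
        d += v * dt
    return d

def counter_shift(disp_m, px_per_m=100.0):
    """Pixel shift that counters the estimated waist displacement."""
    return -disp_m * px_per_m
```

Plain double integration drifts quickly, which is why the periodic structure of walking is needed to keep the estimate bounded.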
- Keisuke Tajimi, Keiji Uemura, Nobuchika Sakata, and Shogo Nishida: "Stabilization Method for Floor Projection with a Hip-Mounted Projector", In Proc. of ICAT2010, pp.77-83, Adelaide, Australia, December 2010.
- Keisuke Tajimi, Keiji Uemura, Yasuhiro Kajiwara, Nobuchika Sakata, and Shogo Nishida: "A Floor-Projection Stabilization Method for a Hip-Mounted Projector Using the Periodicity of Walking", Transactions of the Virtual Reality Society of Japan, Vol.16, No.2, pp.161-169, 2011. (in Japanese)
ProCams for Laptop (2009-2012)
A system comprising a projector and a camera is called a ProCams. Much research on ProCams has been conducted, but most of it focused on interaction between the user and information projected onto a large wall or tabletop, and the projectors used in those studies were too large to be embedded in other products. Recently, projectors have been miniaturized, and models small enough to hold in one hand have been released. We therefore attached small projectors and cameras to a laptop to realize interactive projection surfaces, calling the result a laptop with ProCams.
- Takahiko Suzuki, Nobuchika Sakata, Daiki Matsuda, and Shogo Nishida: "A Study of Laptop with Projector Camera System for Collaboration", The Sixth International Conference on Collaboration Technologies (Collabtech2012), pp.139-144, Sapporo, Japan, August 2012.
- Takahiko Suzuki, Nobuchika Sakata, and Shogo Nishida: "ProCams for Laptop --- New Human-Computer Interaction Design Comprising Projector and Camera for Use with Laptops", ASIAGRAPH 2009, pp.96-99, Tokyo, October 2009.
Visible Light Path Laser Pointer (2007-2010)
Under remote collaboration, a teleoperated laser pointer is applied as an instruction tool to point at real-world objects. However, this method has difficulty in identifying the object being pointed at when the laser spot is occluded behind objects. This paper proposes a method that visualizes the light path of a laser by jetting a mist along the light axis to mitigate this difficulty. A worker should be able to estimate the position of the object being pointed at by using the visible light path laser pointer. Experiments were conducted to evaluate the proposed method in a face-to-face situation. The results showed that the estimation accuracy for the position of a laser spot occluded behind an object was improved when the light path of the laser was visualized.
- Shu Okamoto, Nobuchika Sakata, and Shogo Nishida: "Visible Light Path Laser Pointer", In Proc. of the Fifth International Conference on Collaboration Technologies 2009, pp.8-13, Sydney, Australia, 2009.
- Shu Okamoto, Nobuchika Sakata, and Shogo Nishida: "Pointing at Real Objects with a Laser Pointer Whose Light Path Is Visualized", Transactions of the Virtual Reality Society of Japan, Vol.13, No.4, pp.421-428, 2008. (in Japanese)
Situated Music (2005-2006)
We define Situated Music as a framework that selects and plays a tune according to the situation; the selected and played tune itself is also referred to as Situated Music. Here we describe Interactive Jogging, an application of Situated Music to jogging. Measuring step cadence with an accelerometer attached to a headphone, we estimate the jogger's mileage, and we then play music whose tempo matches the measured cadence and mileage. In this way, Interactive Jogging may keep the jogger motivated and enhance the fun of jogging.
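Cadence-matched tune selection might be sketched as follows; the data shapes and the nearest-tempo selection rule are our illustrative assumptions, not the original implementation:

```python
# Hypothetical sketch: estimate step cadence from detected step timestamps,
# then pick the tune in the library whose tempo is closest to that cadence.
def steps_per_minute(step_times):
    """Cadence from timestamps (seconds) of detected steps."""
    if len(step_times) < 2:
        return 0.0
    span = step_times[-1] - step_times[0]
    return 60.0 * (len(step_times) - 1) / span

def pick_tune(cadence, library):
    """library is a list of dicts, each with a 'bpm' key."""
    return min(library, key=lambda tune: abs(tune["bpm"] - cadence))
```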
- Nobuchika Sakata, Masakatsu Kourogi, Takashi Okuma, and Takeshi Kurata: "Situated Music: An Application to Interactive Jogging", Human Interface Symposium 2005, pp.459-462, 2005. (in Japanese)
- Nobuchika Sakata, Takeshi Kurata, Masakatsu Kourogi, and Hideaki Kuzuoka:"Situated Music: An application to interactive jogging", In Proc. ISWC Student Colloquium 2006.
Wearable Active Camera with Laser Pointer (2003-2007)
The Wearable Active Camera/Laser (WACL) allows remote collaborators not only to set their viewpoints into the wearer's workplace independently but also to point at real objects directly with a laser spot. In this project, we report a user test that examines the advantages and limitations of the WACL interface in remote collaboration, comparing it against a headset interface based on a head-mounted display and a head-mounted camera. The results show that the WACL is more comfortable to wear, more eye-friendly, and less fatiguing for the wearer, although there is no significant difference in task completion time.
- Nobuchika Sakata, Takeshi Kurata, Takekazu Kato, Masakatsu Kourogi, and Hideaki Kuzuoka: "WACL: Supporting Telecommunications Using Wearable Active Camera with Laser Pointer", In Proc. of ISWC2003, NY, USA, pp.53-56, 2003.
- Takeshi Kurata, Nobuchika Sakata, Masakatsu Kourogi, Hideaki Kuzuoka, and Mark Billinghurst: "Remote Collaboration Using a Shoulder-Worn Active Camera/Laser", In Proc. of the 8th IEEE International Symposium on Wearable Computers (ISWC2004), Arlington, VA, USA, pp.62-69, 2004.