Research

Collision avoidance in immersive VR (2016-)

A video see-through head-mounted display (HMD) provides a highly immersive feeling in virtual space, but because it covers the user's entire field of view it makes interaction between the user and the real space difficult. We propose two methods to support interaction with the real world while playing an immersive VR game, even while walking, without reducing the immersive feeling as much as possible. The first method superimposes a 3D point cloud of the real space within a certain distance of the user onto the virtual space shown in the HMD. The second method deploys familiar objects, such as known furniture from the user's own room, in the virtual space; the user traces these familiar objects as subgoals and can thereby reach the goal. A usability test showed that deploying subgoals in the virtual space tends to provide better spatial awareness of the real space without reducing the immersive feeling of VR.
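As a rough illustration of the first method's core step, the sketch below filters the measured point cloud down to points near the user before rendering them in the HMD; this is a minimal sketch, and the function names and distance threshold are hypothetical.

```python
import numpy as np

# Hypothetical sketch: keep only the real-space points within a certain
# distance of the user, i.e. the obstacles worth superimposing on the VR view.
def nearby_points(points: np.ndarray, user_pos: np.ndarray,
                  radius_m: float = 1.5) -> np.ndarray:
    """points: (N, 3) real-space point cloud; user_pos: (3,) user position.
    Returns the subset of points within radius_m of the user."""
    dist = np.linalg.norm(points - user_pos, axis=1)
    return points[dist <= radius_m]
```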

Diminished Robot Affordance (2016-)

Human-Robot Collaboration (HRC), in which a robot and a human carry out work together, has been attracting attention in the field of Human-Robot Interaction. In HRC, several safety measures are required because the human and the robot perform collaborative work at close range in the same workspace. In this research, we consider that a human's help can be elicited by intentionally diminishing the robot's affordance. We therefore suppress the robot's appearance and movements using AR techniques.

  • Nobuchika Sakata, Masahiro Tada and Kensuke Harada: "Removing fear of robot movement with AR awareness", The First International Workshop on Mixed and Augmented Reality Innovations (MARI) 2016, Tasmania, Australia, November 2016.

Floor Interaction (2011-2017)

Wearable projection systems enable hands-free viewing via large projected screens and eliminate the need to hold a device. These systems need a surface for projection and interaction. However, if a wall is chosen as the projection surface, users must stand in front of that wall whenever they wish to access information. In this project, we propose a wearable input/output system composed of a mobile projector, a depth sensor, and a gyro sensor. It allows the user to perform "select" and "drag" operations with the foot and fingertips in an image projected on the floor, and it provides more efficient GUI operations on the floor by combining hand and toe input. We also suggest guidelines and identify problems in designing interfaces for hand/toe interaction with the floor, and propose an input method that uses a finger and a toe together.
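As a rough illustration, a floor touch might be detected from the depth data as in the minimal sketch below, assuming the floor plane has already been fitted; the names and the contact threshold are hypothetical.

```python
import numpy as np

# Hypothetical sketch: treat a tracked fingertip or toe as "touching" the
# projected floor image when it is close enough to the fitted floor plane.
def plane_distance_m(point: np.ndarray, plane: np.ndarray) -> float:
    """plane = (a, b, c, d) with unit normal (a, b, c); point = (x, y, z)."""
    return abs(plane[:3] @ point + plane[3])

def is_touching_floor(tip: np.ndarray, floor_plane: np.ndarray,
                      touch_mm: float = 20.0) -> bool:
    """Register a 'select' contact when the tip is within touch_mm of the floor."""
    return plane_distance_m(tip, floor_plane) * 1000.0 <= touch_mm
```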

Estimating head/body posture from wearable sensors (2015-)

We propose a method for estimating head posture with a single depth sensor dangling from the neck. The sensor, located in front of the chest, measures a point cloud of the lower chin, from which a center of gravity is extracted. Using regression analysis between the position of the chin's center of gravity and the head posture, the head posture can be estimated with an accuracy of 5 degrees.
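A minimal sketch of this pipeline is given below, assuming the chin point cloud is already segmented and that ground-truth head angles are available for training; the data shapes and names are hypothetical.

```python
import numpy as np

# Hypothetical sketch: centroid of the chin point cloud + linear regression
# from centroid position to head posture (pitch, yaw in degrees).
def chin_centroid(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) chin point cloud in sensor coordinates -> (3,) centroid."""
    return points.mean(axis=0)

def fit_regression(centroids: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """centroids: (M, 3); angles: (M, 2) ground-truth (pitch, yaw)."""
    X = np.hstack([centroids, np.ones((len(centroids), 1))])  # add bias term
    W, *_ = np.linalg.lstsq(X, angles, rcond=None)            # least squares
    return W  # (4, 2) regression weights

def estimate_posture(W: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """Predict (pitch, yaw) for a newly measured chin centroid."""
    return np.append(centroid, 1.0) @ W
```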

Adding Search Queries to Picture Lifelogs for Memory Retrieval (2015-2017)

A picture lifelog is a type of lifelog that consists of pictures, mainly taken by the user. Users can now easily create picture lifelogs because many portable devices, such as smartphones, have a camera. However, when a user sees a picture in their lifelog, it is sometimes difficult to recall the events related to it. We therefore proposed combining search queries with a picture lifelog to support memory retrieval; those queries imply what the user was thinking at the time. We investigated whether search queries enable a user to recall their thoughts about pictures in the lifelog. The results revealed that displaying a picture together with search queries performed around the time it was taken tends to support recall better than displaying its time, its location, or emails sent during that period.
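The pairing of pictures and queries can be illustrated by a simple time-window match, as in the minimal sketch below; the types and window size are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Photo:
    path: str
    timestamp: float  # Unix time the picture was taken

@dataclass
class Query:
    text: str
    timestamp: float  # Unix time the query was issued

# Hypothetical sketch: attach to each photo the queries performed around
# the time it was taken, to be displayed alongside it during recall.
def queries_near(photo: Photo, queries: list[Query],
                 window_sec: float = 3600.0) -> list[Query]:
    return [q for q in queries
            if abs(q.timestamp - photo.timestamp) <= window_sec]
```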



Sharing Search Queries in a Small Community (2014-2015)

Queries entered into search engines such as Google serve a variety of purposes, such as satisfying the user's interests or finding solutions to problems. Some researchers focus on search queries to extend user interaction and information support, because search queries reflect the user's intentions and circumstances. We assume that sharing search queries can be applied not only in the online virtual world but also in the real world; in particular, we focus on applying it within a small community. Sharing search queries, which reveal a user's interests and intentions, increases the opportunities for conversation in a closed community.


Collab Hand (2009-2012)

This study focuses on remote collaboration in which a local worker handles real objects under the guidance of a remote instructor. The goal is an interaction that allows the remote instructor to give the local worker clear and accurate instructions. Specifically, a ProCam system, consisting of a camera and a projector, is placed at the work location, and a tabletop system, consisting of a display, a depth sensor, and a camera, is placed at the remote instructor's location. The image captured by the ProCam system at the work location is shown on the tabletop display at the remote location. The instructor's arm is then extracted from the image at the instruction location using the depth sensor (Kinect). Overlaying this arm image on the work environment enables communication that carries embodied information, namely moving arms and pointing gestures. Furthermore, we propose projecting a demagnified image of the instructor's arm on the worker's side, which enables more detailed work.
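The arm extraction and demagnification steps might look like the minimal sketch below, assuming registered color and depth frames from the Kinect; the depth band and scale factor are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical sketch: keep only pixels whose depth falls in a band above the
# tabletop (the instructor's arm), blacking out the rest so the projector on
# the worker's side adds no light there.
def extract_arm(color: np.ndarray, depth_mm: np.ndarray,
                near: int = 400, far: int = 900) -> np.ndarray:
    mask = ((depth_mm >= near) & (depth_mm <= far)).astype(np.uint8)
    return cv2.bitwise_and(color, color, mask=mask)

def demagnify(arm_img: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Shrink the arm image so the projected arm supports finer pointing."""
    return cv2.resize(arm_img, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)
```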

Clipping Light (2011-2013)

We present a novel method for taking photos with a hand-held camera. Cameras are being used for new purposes in our daily lives, such as augmenting human memory or scanning visual markers (e.g., QR codes), and opportunities to take snapshots are increasing. However, taking snapshots with today's hand-held cameras is troublesome, because the viewfinder forces the user to view the real space through it, and controlling the zoom level while pressing the shutter-release button requires complicated operation. We therefore propose ClippingLight, a method that combines a projection viewfinder with tilt-based zoom control and enables snapshots to be taken with little effort. We implemented this method in a prototype real-world projection camera and conducted a user study to confirm the effect of ClippingLight when taking photos one after another. We found that ClippingLight is more comfortable and requires less effort than a typical camera when the user takes a photo quickly.
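The tilt-based zoom control can be sketched as a simple mapping from the camera's pitch angle, read from an IMU, to a zoom level; the neutral angle and sensitivity below are hypothetical.

```python
# Hypothetical sketch: tilting the camera up from a neutral holding angle
# zooms in; tilting down zooms out. The zoom is clamped to the lens range.
def tilt_to_zoom(pitch_deg: float,
                 neutral_deg: float = -30.0,  # assumed comfortable holding angle
                 deg_per_step: float = 10.0,  # degrees of tilt per 1x of zoom
                 min_zoom: float = 1.0,
                 max_zoom: float = 4.0) -> float:
    zoom = 1.0 + (pitch_deg - neutral_deg) / deg_per_step
    return max(min_zoom, min(max_zoom, zoom))
```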


Combining an Immersive HMD with a Touchpad (2013-2014)

Existing systems for operating AR objects allow direct and intuitive manipulation; with an arm-extension method, for example, it is possible to operate AR objects positioned out of reach. However, flexibility and accuracy suffer when the AR object is located far away from the operator. We therefore propose a relative input method that operates an AR object via a touchpad while the user looks at the real space in which the AR objects exist. If the input amount is treated as absolute, it is hard to operate while looking at the real space, so in the proposed interface the input amount is basically treated as relative. In some cases, however, the touchpad input should be regarded as absolute instead: by overlaying miniatures of the AR objects on the touchpad and manipulating those miniatures at hand, the interface makes it possible to operate an AR object precisely from a faraway place. We therefore study the relationship between the user's point of view (PoV) and absolute/relative input while the user wears an HMD and handles a touchpad in an AR environment. We confirmed that usability improves when the user's PoV changes according to the quality of the real-world 3D data and when the input switches between absolute and relative according to the PoV.
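The two input mappings can be contrasted in a minimal sketch: relative deltas while looking at the scene versus absolute coordinates mapped onto the miniature workspace on the touchpad. The names and gain are hypothetical.

```python
# Hypothetical sketch of the two touchpad mappings for moving an AR object.
def relative_move(obj_pos: tuple[float, float],
                  delta: tuple[float, float],
                  gain: float = 0.01) -> tuple[float, float]:
    """Relative mode: nudge the object by the touch delta, scaled by a gain."""
    (x, y), (dx, dy) = obj_pos, delta
    return (x + gain * dx, y + gain * dy)

def absolute_move(touch_uv: tuple[float, float],
                  workspace: tuple[tuple[float, float], tuple[float, float]]
                  ) -> tuple[float, float]:
    """Absolute mode: map normalized touch coordinates (0..1) onto the
    workspace rectangle, as when a miniature of the AR scene is overlaid
    on the touchpad."""
    (u, v), ((x0, y0), (x1, y1)) = touch_uv, workspace
    return (x0 + u * (x1 - x0), y0 + v * (y1 - y0))
```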


Automatic Termination and Route Guide for 3D Scanning Based on Area Limitation (2012-2014)

With existing hand-held 3D scanning systems, users have to estimate unmeasured spots themselves to decide the scanning route, and must repeatedly watch the scanning progress to decide when to stop. These operations impose a burden on users. We propose a novel scanning system that provides route guidance and automatic termination by limiting the scanning area at the beginning.
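One way such a system could work is sketched below: a 2D coverage grid over the user-limited area drives both the guidance toward unmeasured spots and the automatic termination. The grid resolution and coverage threshold are hypothetical.

```python
import numpy as np

# Hypothetical sketch: coverage tracking over an area fixed at the start.
class AreaLimitedScan:
    def __init__(self, bounds_min, bounds_max, cell=0.1, target=0.95):
        self.bmin = np.asarray(bounds_min, float)  # (x, z) corner of the area
        self.bmax = np.asarray(bounds_max, float)
        self.cell = cell                           # grid cell size in meters
        self.target = target                       # coverage ratio for stopping
        dims = np.ceil((self.bmax - self.bmin) / cell).astype(int)
        self.seen = np.zeros(dims, bool)           # 2D coverage map

    def integrate(self, points_xz):
        """Mark cells hit by newly measured points (points_xz: (N, 2))."""
        p = np.asarray(points_xz)
        ok = np.all((p >= self.bmin) & (p < self.bmax), axis=1)
        ij = ((p[ok] - self.bmin) / self.cell).astype(int)
        self.seen[ij[:, 0], ij[:, 1]] = True

    def next_target(self):
        """Center of an unmeasured cell to guide the user toward, or None."""
        holes = np.argwhere(~self.seen)
        return None if holes.size == 0 else self.bmin + (holes[0] + 0.5) * self.cell

    def done(self):
        """Terminate scanning automatically once coverage reaches the target."""
        return self.seen.mean() >= self.target
```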


ProCams for Laptop (2009-2012)

A system comprising a projector and a camera is called a ProCams. Much research related to ProCams has been conducted, but most of it focused on interaction between the user and information projected onto a large wall or tabletop, and the projectors used in those studies were too large to be embedded in other products. Recently, projectors have been miniaturized, and models small enough to hold in one hand have been released. We therefore attached small projectors and cameras to a laptop to realize interactive projection surfaces, and call the result a laptop with ProCams.

Visible Light Path Laser Pointer (2007-2010)

In remote collaboration, a teleoperated laser pointer can be used as an instruction tool to point at real-world objects. However, it is difficult to identify the object being pointed at when the laser spot is occluded behind other objects. We propose a method that visualizes the light path of the laser by jetting a mist along the light axis to mitigate this difficulty; a worker can then estimate the position of the pointed-at object from the visible light path. Experiments evaluating the proposed method in a face-to-face situation showed that the estimation accuracy for the position of a laser spot occluded behind an object improved when the light path was visualized.

Situated Music (2005-2006)

We define Situated Music as a framework that selects and plays a tune according to the situation; the selected and played tune itself is also referred to as Situated Music. Here we describe Interactive Jogging, an application of Situated Music for jogging. By measuring the step rate (pitch) with an accelerometer attached to a headphone, we estimate the mileage of the jog, and then play music whose tempo is based on the measured pitch and mileage. In this way, Interactive Jogging may keep the jogger motivated and augment the amusement aspect of jogging.
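The pace estimation and tempo matching could be sketched as below, assuming the magnitude of the headphone accelerometer signal is available; the frequency band and library format are hypothetical.

```python
import numpy as np

# Hypothetical sketch: estimate the step rate from the dominant frequency of
# the acceleration magnitude, then pick the tune whose BPM matches it best.
def cadence_spm(acc_mag: np.ndarray, fs: float) -> float:
    """acc_mag: (N,) acceleration magnitude sampled at fs Hz -> steps/minute."""
    sig = acc_mag - acc_mag.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 4.0)  # plausible step rates: 60-240 spm
    return freqs[band][np.argmax(spectrum[band])] * 60.0

def pick_tune(cadence: float, library: dict[str, float]) -> str:
    """library maps tune name -> BPM; return the closest-tempo tune."""
    return min(library, key=lambda t: abs(library[t] - cadence))
```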

Wearable Active Camera with Laser Pointer (2003-2007)

The Wearable Active Camera/Laser (WACL) allows remote collaborators not only to set their viewpoints into the wearer's workplace independently, but also to point at real objects directly with a laser spot. In this project, we report a user test that examines the advantages and limitations of the WACL interface in remote collaboration, comparing it with a headset interface based on a head-mounted display and a head-mounted camera. The results show that the WACL is more comfortable to wear, more eye-friendly, and less fatiguing for the wearer, although there is no significant difference in task completion time.