Research

Introduction

I have had excellent opportunities to conduct world-class research in emerging fields. My research activities have provided me with interdisciplinary experience in human interfaces, haptics, virtual reality, interactive media systems, human-computer interaction, computer graphics, laser projection, and robotics.

From 2001 to 2003, I worked at the Japan Advanced Institute of Science and Technology on research related to knowledge creation and VR technologies, where I explored, implemented, and evaluated the use of a collaborative virtual environment, equipped with learning functionality, for knowledge creation. From 2003 to 2006, I worked as a researcher at the Virtual System Laboratory at Gifu University, where I was involved in a collaborative project with industry and led the software development for the Haptic Interface Robot (HIRO). The work was recognized by NEDO (New Energy and Industrial Technology Development Organization), exhibited at the World Expo 2005, and considered one of the next-generation robots and haptic interfaces. At Iwate University, I explored laser projection technologies and laser graphics; I proposed a new method for drawing and optimizing graphics on laser displays and obtained an international patent on this method. I am currently working on many projects related to VR, gaming, and interactive media.

Research projects

Tactile Footwear Interface to Create the Illusion of Walking on Different Surfaces

  • To design and implement a haptic insole that deploys vibrators to simulate the texture of several surfaces, such as sand, sponge, and wood.
  • To study and optimize the design and layout of vibrators used to implement haptic insole.
  • To design and develop AR/VR environments that are integrated with the haptic insole for an entertainment application.

Self-Powered IoT-Enabled Water Monitoring System

While usable water on Earth is scarce and costly to treat, statistics show excessive use worldwide. Penalties are issued to limit excessive consumption; however, their impact is marginal because they do not identify the reason for the exorbitant consumption. Giving users full awareness of their water consumption would therefore greatly help in minimizing water waste. This work presents a smart, self-powered water monitoring system that leverages IoT and cloud computing. The system consists of an IoT device that can be installed at any water source, a cloud application that receives the data from the devices, and a mobile app that visualizes the water consumption at every monitored source. The system gives the user the ability to identify the location and time of excessive usage and leaks on the mobile phone. Experimental results confirm the self-powered premise of the solution. A prototype of the mobile application and backend has been developed and tested.
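
As a rough illustration of the data path from the IoT device to the cloud application, the sketch below posts one flow-meter reading over HTTP in Python. The endpoint URL, field names, and example values are hypothetical placeholders, not the system's actual API.

    # Minimal sketch of a device-to-cloud report; endpoint and field names are hypothetical.
    import time
    import requests

    CLOUD_ENDPOINT = "https://example.com/api/consumption"  # placeholder cloud API URL

    def report_reading(device_id: str, liters: float) -> None:
        """Send one flow-meter reading from a monitored water source to the cloud."""
        payload = {
            "device_id": device_id,         # which water source is being monitored
            "timestamp": int(time.time()),  # when the reading was taken
            "liters": liters,               # consumption since the previous report
        }
        requests.post(CLOUD_ENDPOINT, json=payload, timeout=5)

    # Example: report 1.8 litres measured at a kitchen tap
    report_reading("kitchen-tap-01", 1.8)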

Firefighting Simulation Training System using Virtual Reality

  • Produce a multimodal Virtual Reality system to train firefighters.
  • Integrate multiple senses (smell, touch) and perception methods (vision, hearing) to increase the realism of the system.
  • Study the effect of involving these factors on increasing the realism of the system.
  • Study the behavior and the performances of the firefighter.

Haptic Virtual Fixtures for Preoperative Surgical Planning

Haptic virtual fixtures (VFs) are force feedback mechanisms that enhance human performance in tele-operative procedures, where visual guidance suffers from many limitations. Many papers have explored the integration of VFs with tele-robotic procedures; however, only a few studies have incorporated preoperative planning into their design. We created a novel VF design for procedures that require navigation along a path. Our design is based on the assembly of VF elements that fit along the path. To improve the design, we performed experiments to define the optimal properties, in terms of highest accuracy and shortest task completion times, for path-following procedures. The feasibility of the proposed method was tested on a simulated preoperative surgical planning task. Results demonstrate that the integration of the proposed VFs, which combine haptic force-field guidance and forbidden-region constraints with visual cues, increases accuracy and reduces the time taken to perform tasks.
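
As a minimal sketch of the general idea (not the paper's actual implementation), the Python snippet below combines an attractive force field toward the nearest point on a reference path with a repulsive force out of a spherical forbidden region; the gains and geometry are purely illustrative.

    import numpy as np

    def guidance_force(tool_pos, path_points, k_attract=50.0,
                       forbidden_center=None, forbidden_radius=0.0, k_repel=200.0):
        """Illustrative virtual-fixture force: pull the tool toward the closest
        point on the reference path and push it out of a spherical forbidden region."""
        path_points = np.asarray(path_points, dtype=float)
        tool_pos = np.asarray(tool_pos, dtype=float)

        # force-field guidance: attraction toward the closest path point
        dists = np.linalg.norm(path_points - tool_pos, axis=1)
        force = k_attract * (path_points[np.argmin(dists)] - tool_pos)

        # forbidden-region constraint: repulsion when inside the sphere
        if forbidden_center is not None:
            offset = tool_pos - np.asarray(forbidden_center, dtype=float)
            depth = forbidden_radius - np.linalg.norm(offset)
            if depth > 0:
                force += k_repel * depth * offset / (np.linalg.norm(offset) + 1e-9)
        return force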

Design of Immersive Virtual Reality System to Improve Communication Skills in Individuals with Autism

Individuals with Autism Spectrum Disorder (ASD) regularly experience situations in which they need to respond to inquiries they do not know how to handle, for example, questions about daily-life activities asked by strangers. Research has recently been active in addressing how to utilize technology to alleviate social and communication impairments in children with autism. Immersive virtual reality is a relatively recent technology with the potential to bring an effective solution and to be used as a therapeutic tool to develop different skills. This paper presents a system to enhance communication skills by utilizing virtual reality technology. An interactive scenario-based system that uses role-play and a turn-taking technique was developed to evaluate and verify the effectiveness of an immersive environment on the social performance of an autistic child. The system utilizes speech recognition to provide natural interaction. Participants showed better performance with the immersive CAVE display than with the HMD or a standard desktop. The study also showed that immersive VR could be more satisfying and motivating than a desktop for children with ASD.

Enhanced Redirected Walking Algorithm

The redirected walking (RDW) algorithm is a locomotion technique that gives the user the perception of walking in a larger or even infinite virtual environment (VE) within a small, limited tracking area. The approaches derived from the original algorithm, such as Steer-to-Orbit (S2O) and Steer-to-Center (S2C), have proved effective for certain use cases but still require a very large space. In this paper, we present a new algorithm that combines the benefits of both the S2O and S2C techniques. The algorithm is based on hypotrochoid equations, which enable us to generate many redirection paths with unique shapes. We present two new redirection paths with promising results.
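
For reference, a hypotrochoid is the curve traced by a point attached at distance d from the centre of a circle of radius r rolling inside a fixed circle of radius R. The sketch below samples such a curve as a redirection path in Python; the parameter values are illustrative, not the ones used in the study.

    import numpy as np

    def hypotrochoid_path(R=8.0, r=3.0, d=5.0, n_points=2000):
        """Sample a hypotrochoid curve that can serve as a redirection path.
        R: fixed circle radius, r: rolling circle radius, d: pen offset."""
        theta = np.linspace(0.0, 2 * np.pi * r, n_points)  # enough turns to close the curve for integer R, r
        x = (R - r) * np.cos(theta) + d * np.cos((R - r) / r * theta)
        y = (R - r) * np.sin(theta) - d * np.sin((R - r) / r * theta)
        return np.stack([x, y], axis=1)  # (n_points, 2) array of path waypoints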

Haptic Seat

Response Times for Auditory and Vibrotactile Directional Cues in Different Immersive Displays

The use of multisensory signals in advanced driver assistance systems is continuously increasing as a way to heighten attention and reduce reaction time. It is essential that a driver assistance system be capable of providing directional cues that direct the driver's attention to the sides of the car, since the focus is usually on the road ahead. These signals could convey blind-spot information, navigation, lane departure warnings, collision warnings, and so on. This study investigated the effect of auditory and vibrotactile cues on directional attention in driver assistance systems. Moreover, two types of immersive displays, the Head-Mounted Display (HMD) and the CAVE, were used in the driving simulation to study the effect of the display type on human performance. The Lane Change Task was used to assess attention by measuring the response time to directional information. Vibrotactile and auditory cues induced equal response times; however, the vibrotactile signal gained significantly higher satisfaction than the auditory one.

Driver Activity Recognition in Virtual Reality Driving Simulation

In this research, we present the design of a driving seat with driver activity recognition based on a passive method for measuring body postures: two force sensor arrays inspect the pressure patterns exhibited on the driver's seat and backrest. The seat was designed by testing different sensor distributions to find the one that makes the recognition most accurate. A Virtual Reality (VR) driving simulation was developed to test the recognition accuracy in an immersive environment. Experiments were carried out to test the posture recognition accuracy in both a realistic setting and a VR setting. The results showed that the system was able to recognize ten different postures with varying accuracy. The immersive VR driving environment enhanced the recognition accuracy for some postures.
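
As an illustration of how a posture can be inferred from the two pressure arrays, the sketch below uses a simple nearest-centroid classifier over flattened pressure readings; this is only an assumed baseline, not necessarily the recognition method used in the study.

    import numpy as np

    def train_centroids(samples, labels):
        """samples: (N, n_sensors) pressure vectors from the seat and backrest arrays;
        labels: posture label for each sample. Returns one mean pattern per posture."""
        return {lab: np.mean([s for s, l in zip(samples, labels) if l == lab], axis=0)
                for lab in set(labels)}

    def classify_posture(reading, centroids):
        """Assign a live pressure reading to the posture with the closest mean pattern."""
        return min(centroids, key=lambda lab: np.linalg.norm(np.asarray(reading) - centroids[lab]))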

A Study on the Design and Effectiveness of Tactile Feedback in Driving Simulator

Driving simulations are widely used to send navigational and warning information to help drivers navigate safely. The traditional approach is to use the visual and auditory channels, which can cause sensory overload. Haptic feedback has major safety implications for reducing visual and auditory overload while driving, and since the seat has the largest contact area with the driver's body, it is a sensible choice for delivering haptic information. This paper presents the design and development of an optimal vibrotactile seat that provides a high level of satisfaction to the driver. The seat was designed by experimenting with different design parameters such as the intensity, position, and rhythm of vibrations. Experiments were conducted to investigate the proper values of voltage, frequency, and amplitude that are specifically related to the developed haptic seat. A driving simulation was developed to evaluate the haptic seat for vehicle navigation in an immersive virtual driving simulator. Results showed that users preferred the vibrations to the audio feedback.

Optimum Design of Haptic Seat for Driving Simulator

Driving simulations are widely used to send navigational and warning information to help drivers navigate safely. Haptic-based information in vehicles has major safety implications for reducing visual and auditory overload in driving [Jeon et al. 2013]. Given that the seat is the interface that touches the largest area of the driver's body, it is a sensible choice as an additional channel of information to the driver.

The aim of this work is to design and develop an optimal vibrotactile seat that provides a high level of satisfaction to the driver. Three critical design parameters were considered: (1) the proper intensity of vibration, (2) the position of the vibrators and the minimum distance between distinguishable vibrations, and (3) the rhythm of vibration. Experiments were conducted to investigate the proper values of voltage, frequency, and amplitude that are specifically related to the developed haptic seat. A driving simulation was developed for a Head-Mounted Display (HMD) to evaluate the haptic interface for the car's seat in a realistic simulation.
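
The three design parameters map naturally onto a cue description like the hypothetical sketch below; set_motor stands in for whatever hardware driver actually controls the vibrators, and the example values are illustrative only.

    import time
    from dataclasses import dataclass

    @dataclass
    class VibrationCue:
        motor_ids: list      # which vibrators in the seat to drive (position)
        intensity: float     # 0.0-1.0, mapped to drive voltage/amplitude (intensity)
        on_ms: int           # pulse duration in milliseconds (rhythm)
        off_ms: int          # pause between pulses in milliseconds (rhythm)
        repeats: int         # number of pulses

    def play_cue(cue: VibrationCue, set_motor):
        """Drive the selected motors with the requested rhythm.
        set_motor(motor_id, level) is a hypothetical hardware call."""
        for _ in range(cue.repeats):
            for m in cue.motor_ids:
                set_motor(m, cue.intensity)
            time.sleep(cue.on_ms / 1000)
            for m in cue.motor_ids:
                set_motor(m, 0.0)
            time.sleep(cue.off_ms / 1000)

    # Example: a "turn right" cue on the right edge of the seat pan
    right_turn = VibrationCue(motor_ids=[4, 5, 6], intensity=0.7, on_ms=120, off_ms=80, repeats=3)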

Incorporating Olfactory into a Multi-Modal Surgical Simulation

This paper proposes a novel multimodal interactive surgical simulator that incorporates haptic and olfactory feedback as well as traditional visual feedback. A scent diffuser was developed to produce odors when errors occur, and a haptic device was used to provide the sense of touch to the user. The preliminary results show that adding smell as an aid to the simulation enhanced memory retention, which led to better performance.

Interactive Digital Haptic Artifacts System

Combining the ability to feel objects with three-dimensional images and sound can enrich people's experience and enhance their perception. Haptics enables feeling the surface texture and conveys what an object is made of. Therefore, our app, DōHA (Digital pOint clOud Haptic Artifact), provides a way to virtually interact with exhibits that would otherwise be off-limits, through the use of Microsoft's Kinect sensor, a computer, and a haptic device.

Mobile Augmented Reality for Pedestrian Navigation

This work presents a mobile augmented reality navigation aid for pedestrians on the Qatar University (QU) outdoor campus. The application aims to replace traditional navigation systems and static maps with a new, intuitive approach: information about the surroundings and directional guidance toward any point of interest (POI) around the user are overlaid as 2D and 3D graphics on the live stream from the mobile camera. The system uses GPS to locate the campus buildings' POIs and includes a database containing information about the buildings (e.g., building history and facilities). A graphics engine is utilized to overlay relevant real-time navigation information, such as the user's current location and the time to destination. The system is demonstrated with a prototype application built using an open-source SDK on the iPhone platform.
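
The core geometric step of such an overlay is computing the distance and bearing from the user's GPS fix to each POI so its marker can be placed on the camera view. The sketch below shows that calculation in Python (the original prototype was an iPhone application; this is only an illustration of the math).

    import math

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (metres) and initial bearing (degrees, clockwise from north)
        from the user's GPS position to a point of interest."""
        R = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        distance = 2 * R * math.asin(math.sqrt(a))
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
        return distance, bearing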

3D Crowd Simulation for Emergency Evacuation

The main objective of this project is to create a crowd simulation that handles the evacuation process. The simulation addresses the behaviors of a crowd of agents during an emergency evacuation. We used the crowd utility provided in Autodesk 3ds Max 2013 and the MAXScript programming language to implement our crowd simulation. This project and others like it are intended to be used by companies, schools, shopping malls, national airports, and other busy closed or open spaces, in Qatar and beyond, to educate people on how to behave in emergency situations.

Navigation Aid for Blind People Using Depth Information and Augmented Reality Technology

This project presents a new hybrid navigation aid for blind people based on depth information, Augmented Reality (AR), and 3D sound. The system acquires depth data in real time from a Kinect depth sensor, detects obstacles, and then generates 3D spatial sound to convey the direction and distance of the nearest object. The RGB camera in the same Kinect sensor is used to recognize AR markers and provide adequate information about the surrounding environment using synthesized sound. The system architecture ensures satisfactory real-time performance, which contributes to smooth and unhindered blind travel in indoor environments while providing adequate information about the surroundings.
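
A rough sketch of the obstacle-to-sound mapping is shown below: it picks the closest valid pixel in a depth frame, converts its column to a horizontal angle, and derives simple left/right gains. The field-of-view value and the gain model are assumptions for illustration, not the system's actual processing pipeline.

    import numpy as np

    def nearest_obstacle(depth_mm, h_fov_deg=57.0):
        """Return the distance (m) and horizontal angle (deg) of the closest obstacle
        in a depth frame (2D array of millimetre values, 0 = no data).
        h_fov_deg approximates the Kinect's nominal horizontal field of view."""
        valid = depth_mm.astype(float)
        valid[valid == 0] = np.inf                               # ignore pixels with no depth data
        row, col = np.unravel_index(np.argmin(valid), valid.shape)
        distance_m = valid[row, col] / 1000.0
        azimuth_deg = (col / valid.shape[1] - 0.5) * h_fov_deg   # left (-) / right (+) of centre
        return distance_m, azimuth_deg

    def stereo_gains(azimuth_deg, distance_m):
        """Crude spatialisation: pan with azimuth, attenuate with distance."""
        pan = min(max(azimuth_deg / 90.0 + 0.5, 0.0), 1.0)  # 0 = hard left, 1 = hard right
        loudness = 1.0 / max(distance_m, 0.3)                # nearer obstacles sound louder
        return (1 - pan) * loudness, pan * loudness          # (left gain, right gain)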

Multidimensional Visual Aid Enhances Haptic Training Simulations

  • This research explores the use of hypotrochoid curves as visual aids for haptic training systems used to teach motor skills that require recall of forces and positions at specific locations to replicate the expert touch (see the sketch after this list).
  • The visual aid presented is a multidimensional feedback tool that can provide information on force, position, and velocity simultaneously.
  • The extent of learning is measured by the accuracy of the force recall.
  • Results suggest that multidimensional feedback is an effective way to promote learning of forces.
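
One plausible, purely illustrative way to turn the trainee's applied force and tool speed into curve parameters is sketched below; the actual mapping used by the visual aid is not reproduced here, and the constants are arbitrary.

    def visual_aid_params(force_n, speed_mm_s, max_force=10.0, max_speed=200.0):
        """Illustrative mapping from applied force and tool speed to hypotrochoid
        parameters (R, r, d): the curve's size grows with force and its shape changes
        with speed, so one glance conveys both quantities at once."""
        R = 2.0 + 6.0 * min(force_n / max_force, 1.0)      # overall size encodes force
        r = 1.0                                            # rolling-circle radius kept fixed
        d = 0.5 + 2.5 * min(speed_mm_s / max_speed, 1.0)   # pen offset encodes speed
        return R, r, d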

Laser Graphics

Efficient vector-oriented graphic drawing method for laser-scanned display

Laser projection, such as that used in laser light shows, usually uses computer-drawn graphics. However, laser-scanned displays impose some restrictions on these vector-based drawings. This research presents a general vector-oriented drawing method to display computer vector graphics on different laser-scanned displays with optimization. The method considers how to solve three fundamental problems inherent in laser projection: achieving constant beam intensity, blanking timing, and corner overshooting. A velocity parameter is used to achieve constant beam velocity for all strokes. Blanking timing, which is related to turning the laser beam on and off, is examined, and a simple test pattern is presented to estimate the total delay that exists in the laser-scanned display and compensate for it. In addition, an optimized approach to avoid corner overshooting and ringing is proposed: the dwell time of the laser beam at a corner is set according to the corner angle. Moreover, a comparison between the proposed method and the point-oriented method in terms of speed is presented.
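
As a small illustration of the corner-dwell idea, the sketch below repeats a corner point in the scanner's point stream in proportion to how sharp the corner is; the constants and the point-repetition scheme are illustrative assumptions rather than the patented method itself.

    import numpy as np

    def corner_dwell_points(p_prev, p_corner, p_next, max_repeats=8):
        """Repeat the corner point so the scanner mirrors can settle: sharper corners
        (smaller interior angle) receive more repeats, reducing overshoot and ringing."""
        v1 = np.asarray(p_prev, dtype=float) - np.asarray(p_corner, dtype=float)
        v2 = np.asarray(p_next, dtype=float) - np.asarray(p_corner, dtype=float)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))   # 180 deg = straight line
        repeats = int(round(max_repeats * (180.0 - angle) / 180.0))
        return [tuple(p_corner)] * repeats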

Spiroraslaser: An Interactive Visual Art System Using Hybrid Raster–Laser Projection

This paper explores the possibility of generating visual imagery in a highly interactive environment on the basis of hand position and acceleration. We present a new display method, hybrid raster–laser projection, that combines two projections to create the final visual image. A Wii Remote Controller tracks hand orientation, acceleration, and movement, and the graphics are generated using hypotrochoids. The raster and laser images are carefully synchronised to ensure homogeneously integrated visuals. The resultant images are unique and beautiful because of the higher contrast of the laser projection compared to the raster projection. This contrast creates lively, floating images that can be achieved only with the proposed system. The proposed system presents a new interactive approach to the real-time creation of visual art and a novel method for displaying these visuals.

Laser VJ: Dream Machine (Planetarium VJ + Laser)

This small project presents a new way of projection and interactive visual art expression. Three main components were mixed together and projected on the planetarium: the live mixed images from the VJ; laser graphics projected in concert with the VJ images and synchronized to the VJ's background music (BGM); and the planetarium's original images and graphics with edited BGM. Putting all these pieces together created a new piece of artwork that many people enjoyed.

Enhancing 3D Scenes Using a Laser Projector in Real-time

We present an innovative way to combine a laser (vector) projector with a video (raster) projector, using the former to enhance or augment the 3D images projected with the latter. As we will show, two main problems arise from this setting. One is to find a relation between the video projector space and the laser projector space, so that we can transform pixels into laser coordinates. The second problem is related to hiding virtually occluded laser segments to create a seamless fusion of the two projected parts. Also, we apply a calibration technique to compensate visual deformations caused by projector placement and/or non-planar screens.
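
For the first problem, one common way to relate the two projector spaces (under a planar-screen assumption) is a 3x3 homography estimated from matched calibration points; the sketch below only shows how such a homography, once known, would map pixels into laser coordinates, not the paper's specific calibration procedure.

    import numpy as np

    def pixels_to_laser(H, pixels):
        """Map video-projector pixel coordinates to laser-projector coordinates
        using a 3x3 homography H (estimated beforehand from corresponding
        calibration points seen by both projectors)."""
        pts = np.hstack([np.asarray(pixels, dtype=float), np.ones((len(pixels), 1))])
        mapped = pts @ H.T
        return mapped[:, :2] / mapped[:, 2:3]   # divide out the homogeneous coordinate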

A Correction Algorithm for Distorted Laser Images

In this paper, we demonstrate a simple, effective, and robust method for accurately calibrating laser projectors. The method relies on a digital camera placed at the location of the audience; images are captured in order to observe the distortions caused by the projection angle and surface deformations. Once the distortions are detected, the necessary compensations are applied to obtain a correct projection of the desired imagery.

Furthermore, we extend this method to combine two or more projectors in order to create a single, overlapped projection area. We demonstrate that this method creates a collaborative effect between the projectors and allows the display of more complex images without increasing the scan rate.

Future Haptic Science Encyclopedia (WorldExpo 2005)

The Future Haptic Science Encyclopedia (FHSE) was developed and presented as a new potential application that benefits greatly from haptics to achieve a new level of realism. The FHSE enables users to experience many different virtual worlds, such as scientific, historical, and astronomical worlds.

HIRO II and the FHSE were demonstrated at World Expo 2005 as part of the Prototype Robot Exhibition supported by the New Energy and Industrial Technology Development Organization (NEDO). Thousands of people watched the demonstration of the FHSE, and many of them even had the opportunity to try it.

An interesting feature of the FHSE is that it can be considered a complete VR interface: the user uses only HIRO II to interact with and move between and around the scenes. Controls such as virtual buttons, levers, and arrows are introduced in this system, without the need for other devices such as a joystick or keyboard.

Haptic Interaction Rendering Technique for HIRO

Hand interaction with a virtual object requires a more elaborate model than the Haptic Interaction Point (HIP) because of the complex shape of the virtual hand.

We propose a general haptic rendering method for 3D objects in hand interaction that takes into consideration the real shape and orientation of the hand as well as the shape of the interacting objects.
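
For comparison, the single-point HIP model mentioned above reduces to a simple penalty force; the sketch below shows that baseline (with an illustrative stiffness value), which the hand-shaped rendering generalises to many contact points while accounting for hand shape and orientation.

    import numpy as np

    def hip_penalty_force(hip_pos, surface_point, surface_normal, stiffness=800.0):
        """Classic HIP-style penalty force: if the interaction point has penetrated
        the surface, push back along the outward normal in proportion to the
        penetration depth (stiffness value is illustrative)."""
        n = np.asarray(surface_normal, dtype=float)
        n = n / np.linalg.norm(n)
        depth = np.dot(np.asarray(surface_point, dtype=float) - np.asarray(hip_pos, dtype=float), n)
        return stiffness * depth * n if depth > 0 else np.zeros(3)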

Development of Five-Fingered Haptic Interface: HIRO-II

We developed a new five-fingered haptic interface robot named HIRO II. The haptic interface can present force and tactile sensations at the five fingertips of the human hand. It is designed to be completely safe and similar to the human upper limb in both shape and motion ability. Its mechanism consists of a 6-DOF arm and a 15-DOF hand. The interface is placed opposite the human hand, which ensures safety and avoids an oppressive feeling, but also makes the haptic interface difficult to control because it must follow the operator's hand poses. A redundant force control method, in which all the joints of the mechanism are force-controlled simultaneously to present the virtual force, is studied. Experiments demonstrating its high potential as a multi-fingered haptic interface are presented.

An Immersive Interactive VR-based Training system

This research describes a study on the effect of the level of immersion on collaborative task performance and sense of presence, demonstrating a collaborative, interactive, fully immersive system that can be useful for learning and training applications. A multiuser virtual environment with an intuitive interaction interface was presented to users on various display technologies to examine different viewing conditions and levels of immersion. The subjects' performance was assessed in terms of speed and accuracy for three different levels of immersion: a desktop, a large monoscopic screen, and an immersive large stereoscopic screen. Furthermore, other dimensions of performance, such as level of presence, satisfaction, and fatigue, were also assessed. Results indicated that performance in terms of speed and accuracy increased with higher levels of immersion.

The effect of Network delay in Cooperative Shared Haptic Virtual Environment

A cooperative shared haptic virtual environment (CSHVE), where users can kinesthetically interact and simultaneously feel each other over the network, is beneficial for many distributed VR simulations. However, little is known about the influence of network delay on the quality of haptic sensation and task performance in such environments. This research addresses these issues by conducting a subjective evaluation of the force feedback and task performance in a tele-handshake cooperative shared haptic system under different delay settings. Four subjective measures to evaluate the quality of haptics in CSHVEs are proposed: the feeling of force, the consistency between the haptic and visual feedback, the vibration, and the rebound in the haptic device. In addition, a detailed description of the haptic sensation under different time delays is provided. A network emulator was utilized to simulate the real network cloud.

Haptic Cooperative Virtual Workspace: Architecture and Evaluation

The Haptic Cooperative Virtual Workspace (HCVW), where users can simultaneously manipulate and haptically feel the same object, is beneficial and in some cases indispensable for training a team of surgeons, as well as for application areas such as telerobotics and entertainment.

In this research, we propose an architecture for the haptic cooperative workspace in which participants can kinesthetically interact with, feel, and push each other simultaneously while moving in the simulation, including the ability to manipulate the same virtual object at the same time. A set of experiments carried out to investigate the haptic cooperative workspace is reported. A new approach to quantitatively evaluating the cooperative haptic system is proposed, which can be extended to evaluate haptic systems in general.