We focus on extracting human emotional information from the movements of professional dancers. Specifically, we apply Rudolf Laban's dance theory to extract physical features from dance sequences in real time and map this information to predefined emotional categories. In this way, we can relate physical motion to mental emotion.
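As a rough illustration of this mapping, the sketch below derives three Laban-style effort descriptors (Time, Weight, Space) from tracked joint positions and assigns the nearest predefined emotion prototype. The specific descriptors, prototype values, and normalization are illustrative assumptions, not the features used in our system.

```python
import numpy as np

def laban_features(joints):
    """joints: (T, J, 3) array of 3-D joint positions over T frames."""
    vel = np.diff(joints, axis=0)                       # frame-to-frame velocity
    acc = np.diff(vel, axis=0)                          # frame-to-frame acceleration
    time_eff = np.linalg.norm(vel, axis=-1).mean()      # Time: overall speed
    weight_eff = np.linalg.norm(acc, axis=-1).mean()    # Weight: movement force
    space_eff = (joints.max(axis=(0, 1)) -
                 joints.min(axis=(0, 1))).prod()        # Space: volume swept
    return np.array([time_eff, weight_eff, space_eff])

# Hypothetical emotion prototypes in the same feature space (placeholders).
EMOTIONS = {"joy": np.array([0.9, 0.7, 0.8]),
            "sadness": np.array([0.2, 0.1, 0.3]),
            "anger": np.array([0.8, 0.9, 0.5])}

def classify_emotion(joints):
    f = laban_features(joints)
    f = f / (np.linalg.norm(f) + 1e-9)                  # crude normalization
    return min(EMOTIONS, key=lambda e: np.linalg.norm(EMOTIONS[e] - f))
```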
We introduce a novel tracking system based on invisible markers that are drawn with an IR fluorescent pen. The tracking system consists of a scene camera, an IR camera, and a half mirror. The two cameras are positioned on either side of the half mirror so that their optical centers coincide. We track the invisible markers with the IR camera and render the AR content in the view of the scene camera. The result is a robust invisible-marker-based tracking system.
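A minimal sketch of the per-frame tracking and overlay step, assuming the half mirror has already aligned the two views so that IR and scene pixels correspond one-to-one; the threshold value and the circle annotation are placeholders for the actual marker detection and AR rendering.

```python
import cv2

def track_and_overlay(ir_frame, scene_frame, thresh=200):
    """Detect bright IR-fluorescent markers and annotate them in the scene view."""
    gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)  # bright markers
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:                       # centroid of each marker blob
            centers.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    for (x, y) in centers:                     # coaxial setup: same pixel in scene
        cv2.circle(scene_frame, (x, y), 8, (0, 255, 0), 2)
    return scene_frame, centers
```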
We present an intuitive method for analyzing and recognizing facial expressions based on the motion energy of facial features. Our method does not exploit complicated and time-consuming algorithms such as 3-D modeling of the human face or feature tracking and modeling, nor does it rely on a heuristic coding system such as FACS. Instead, we demonstrate that an extremely simple, biologically plausible motion energy detector can accurately analyze and recognize facial expressions.
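The sketch below shows one simple form such a motion energy detector could take: accumulated frame differences pooled over a coarse grid of facial regions and matched against per-expression energy templates. The grid size and the nearest-template matching are illustrative assumptions, not our exact design.

```python
import numpy as np

def motion_energy(frames):
    """frames: (T, H, W) grayscale face sequence registered to a common crop."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.sum(axis=0)                   # accumulated per-pixel motion energy

def region_energies(energy, grid=(4, 4)):
    """Pool the energy map over a coarse grid of facial regions."""
    h, w = energy.shape
    gh, gw = h // grid[0], w // grid[1]
    return np.array([[energy[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean()
                      for j in range(grid[1])]
                     for i in range(grid[0])]).ravel()

def recognize(frames, templates):
    """templates: dict of expression name -> precomputed unit energy vector."""
    v = region_energies(motion_energy(frames))
    v = v / (np.linalg.norm(v) + 1e-9)
    return max(templates, key=lambda k: np.dot(templates[k], v))
```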
We present a method that analyzes the quality of "disguised walking" based on the shape context of silhouette images. "Disguised walking" means walking with various types of bags or clothes. Two cues are considered to characterize a gait: one is the biometric shape cue (static cue), such as body height, width, shape, and body-part proportions; the other is the motion cue (dynamic cue), such as stride length/style and amount/style of arm swing. Experimental results demonstrate that the style of "ordinary walking" is mainly determined by the static cue, whereas that of "disguised walking" is mainly determined by the dynamic cue.
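For reference, a compact version of the shape-context descriptor computed on silhouette boundary points is sketched below; the log-polar bin counts and mean-distance normalization are standard choices, not necessarily those used in our experiments.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """points: (N, 2) boundary points sampled from a silhouette.
    Returns one log-polar histogram descriptor per point."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d /= d.mean()                                   # scale invariance
    ang = np.arctan2(points[:, None, 1] - points[None, :, 1],
                     points[:, None, 0] - points[None, :, 0])
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    descs = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i                    # exclude the point itself
        r_bin = np.digitize(d[i, mask], r_edges) - 1
        t_bin = ((ang[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        valid = (r_bin >= 0) & (r_bin < n_r)        # drop points outside the radii
        np.add.at(descs[i], (r_bin[valid], t_bin[valid]), 1)
    return descs.reshape(n, -1)
```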
A linear method is presented that calibrates a camera from a single view of two concentric semicircles of known radii. Using the estimated centers of the projected semicircles and the four corner points on the projected semicircles, the focal length and pose of the camera are accurately estimated in real time. Our method is applied to augmented reality applications and its validity is verified.
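The sketch below covers only one preprocessing step: estimating the centers of the projected semicircles by fitting ellipses to points sampled along each arc. The paper's linear derivation of the focal length and pose is not reproduced here, and the ellipse center is only an approximation of the true projected circle center.

```python
import cv2
import numpy as np

def projected_centers(arc_points_inner, arc_points_outer):
    """arc_points_*: (N, 2) points sampled along each projected semicircular arc
    (N >= 5, as required by cv2.fitEllipse)."""
    e_in = cv2.fitEllipse(arc_points_inner.astype(np.float32))
    e_out = cv2.fitEllipse(arc_points_outer.astype(np.float32))
    # fitEllipse returns ((cx, cy), (major, minor), angle); keep the centers.
    # Note: under perspective the ellipse center only approximates the
    # projection of the circle center.
    return np.array(e_in[0]), np.array(e_out[0])
```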
First, we propose a simplified approach to relighting an object based on the separation of specular and diffuse reflection. The method requires two or more images taken with the position of the object fixed but under different lighting conditions. However, such images cannot be obtained for moving objects. Second, we propose a method that computationally obtains the synchronized images from consecutive fields of a controlled video sequence containing a moving object. Thus, the relighting method for static objects becomes applicable to moving objects as is.
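As a hedged illustration of the separation idea (not our exact formulation): when the object is fixed and the light source moves between shots, specular highlights shift across the images, so a per-pixel minimum over the image stack approximates the highlight-free diffuse component.

```python
import numpy as np

def separate_reflections(images):
    """images: (N, H, W, 3) float images of a static object under N lightings.
    Returns an approximate diffuse image and per-image specular residuals."""
    stack = np.asarray(images, dtype=np.float32)
    diffuse = stack.min(axis=0)                 # highlights move, so the minimum
                                                # keeps mostly diffuse shading
    specular = stack - diffuse[None]            # per-image residual highlights
    return diffuse, np.clip(specular, 0, None)
```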
A method is presented that extracts the 3-D shape and movement of the lips and tongue and displays them simultaneously. The movement data of the lips and tongue are obtained using multiple cameras and a magnetic resonance imaging (MRI) device, respectively. An OpenGL-based program is implemented to visualize the data interactively. The acquired data can be used to create high-quality lip-synchronized animation that includes tongue movement.
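Displaying the two data sources simultaneously presupposes temporal alignment, since the camera and MRI streams are captured at different rates. A minimal sketch of such a resampling step is given below; linear interpolation onto a common timeline is an assumption, not necessarily how our program aligns the data.

```python
import numpy as np

def resample(times, frames, target_times):
    """times: (T,) timestamps; frames: (T, ...) samples at those timestamps.
    Linearly interpolates every data channel onto target_times."""
    flat = frames.reshape(len(frames), -1)
    out = np.stack([np.interp(target_times, times, flat[:, k])
                    for k in range(flat.shape[1])], axis=1)
    return out.reshape((len(target_times),) + frames.shape[1:])
```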
Specular reflection can be avoided by redundantly illuminating the display surface with multiple overlapping projectors and cameras. The system automatically estimates where specular reflection occurs and which projector generates it. The light of that projector falling on the specular region is then blanked, while the other projectors' light is boosted to compensate for the change. Thus, the overall projection remains unchanged.
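A simplified sketch of the blank-and-boost step, assuming at least two projectors already registered to a common frame and per-projector specular masks already detected by the cameras; the even redistribution of the blanked light is an illustrative choice.

```python
import numpy as np

def compensate(frames, specular_masks):
    """frames: list of (H, W) float intensity images, one per projector.
    specular_masks: list of (H, W) bool masks marking where that projector's
    light produces a specular highlight for the viewer."""
    frames = [f.astype(np.float32).copy() for f in frames]
    n = len(frames)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        if not others:
            break                                  # nothing to redistribute to
        lost = frames[i] * specular_masks[i]       # light about to be blanked
        frames[i][specular_masks[i]] = 0           # blank the glare-causing light
        for j in others:                           # boost the rest so the summed
            frames[j] += lost / len(others)        # projection stays unchanged
    return [np.clip(f, 0.0, 1.0) for f in frames]
```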
The procedure for reducing out-of-focus projection blur is as follows. First, we project an image onto the screen and capture the displayed image with a camera. This gives two input images: the image we would like to project onto the screen (the projector input image) and the displayed image captured by the camera (the projection image). Second, we rectify the geometric skew of the projection image to register it with the projector input image. Third, we estimate the spatially varying PSFs on the rectified image by comparing it with comparison images generated by blurring the projector input image with different PSFs. Finally, we pre-correct the projector input image based on the estimated PSFs.
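A sketch of the third step under the simplifying assumption that each local PSF is an isotropic Gaussian: for each image tile, the estimated blur is the Gaussian sigma whose blurred projector input best matches the rectified camera image. The final pre-correction step (e.g., deconvolving each tile with its estimated PSF) is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_psf_map(proj_input, rectified, tile=64, sigmas=(0.5, 1, 2, 4, 8)):
    """proj_input, rectified: (H, W) grayscale images, already registered.
    Returns a per-tile map of the best-matching Gaussian sigma."""
    blurred = {s: gaussian_filter(proj_input.astype(np.float32), s)
               for s in sigmas}                    # candidate comparison images
    h, w = proj_input.shape
    sigma_map = np.zeros((h // tile, w // tile))
    for i in range(h // tile):
        for j in range(w // tile):
            sl = (slice(i * tile, (i + 1) * tile),
                  slice(j * tile, (j + 1) * tile))
            errs = {s: np.mean((blurred[s][sl] - rectified[sl]) ** 2)
                    for s in sigmas}               # tile-wise matching error
            sigma_map[i, j] = min(errs, key=errs.get)
    return sigma_map
```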
We propose an integrated framework for dealing with the issues involved in undistorted projection onto nonplanar surfaces: geometric calibration/correction, radiometric compensation, even projection, viewpoint-dependent projection, etc. The component algorithms addressing these issues are currently under development.
We propose several component techniques that make conventional model-based camera tracking adaptive to dynamic environments.
We propose an edge-based blur metric. In our metric, the edge slope is computed by averaging the gradients of the points between x_S and x_E (see the left image, which shows the profile of an edge).
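A minimal sketch of this slope computation on a 1-D edge profile, assuming the transition endpoints x_S and x_E have already been located (their detection is not shown); the width measure derived from the slope is one plausible way to turn it into a blur value, not necessarily our exact metric.

```python
import numpy as np

def edge_slope(profile, x_s, x_e):
    """profile: 1-D intensity profile across an edge; x_s, x_e: transition ends."""
    grads = np.gradient(profile.astype(np.float32))
    return grads[x_s:x_e + 1].mean()           # average gradient over the edge

def blur_width(profile, x_s, x_e):
    # Wider (more blurred) edges have a smaller slope for the same amplitude.
    amplitude = profile[x_e] - profile[x_s]
    return amplitude / (edge_slope(profile, x_s, x_e) + 1e-9)
```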