Research

Current Projects

1. Interaction between static visual cues and force feedback on perceived mass of virtual objects

Wenyan Bi  Jonathan Newport  Bei Xiao

American University

ACM Symposium on Applied Perception 2018 

Download: [ Paper PDF (32.1MB) ]

Abstract

We create a virtual environment that allows concurrent, spatially aligned visual-haptic presentation of virtual objects. Participants use a haptic force-feedback device to manipulate the virtual object and feel its weight. The object is rendered with four materials (steel, stone, fabric, and wood) at two different sizes. We independently vary the visual appearance (i.e., material and size) as well as the weight of the object, and measure how visual and haptic information influence mass perception in the virtual environment. 

We find that participants’ perceived mass is highly correlated with the ground-truth mass output by the haptic device. In contrast to the classical material-weight illusion (MWI), however, participants consistently rate heavy-looking objects (e.g., steel) as heavier than light-looking ones (e.g., fabric).

Supplementary

2. Estimating mechanical properties of cloth from videos using dense motion trajectories: Human psychophysics and machine learning. 

Wenyan Bi  Peiran Jin  Hendrikje Nienborg  Bei Xiao

Journal of Vision (2018), 18(5): 1–20

Download: [ Paper PDF (2.1MB) ]  

Abstract

Humans can visually estimate the mechanical properties of deformable objects (e.g., cloth stiffness). While much of the recent work on material perception has focused on static image cues (e.g., textures and shape), little is known about whether humans can integrate information over time to make a judgment. Here, we investigate the effect of spatiotemporal information across multiple frames (multi-frame motion) on estimating the bending stiffness of cloth. Using high-fidelity cloth animations, we first examined how the perceived bending stiffness changed as a function of the physical bending stiffness defined in the simulation model. Using maximum likelihood difference scaling (MLDS), we found that the perceived stiffness and the physical bending stiffness were highly correlated. A second experiment, in which we scrambled the frame sequences, diminished this correlation, suggesting that multi-frame motion plays an important role. To provide further evidence for this finding, we extracted dense motion trajectories from the videos across 15 consecutive frames and used the trajectory descriptors to train a machine-learning model on the measured perceptual scales. The model can predict human perceptual scales in new videos with varied winds, optical properties of cloth, and scene setups. When the correct multi-frame motion information was removed (using either scrambled videos or two-frame optical flow to train the model), the predictions significantly worsened. Our findings demonstrate that multi-frame motion information is important for both humans and machines when estimating mechanical properties of cloth. In addition, we show that dense motion trajectories are effective features for building a successful automatic cloth-estimation system.
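The trajectory-based pipeline described in the abstract can be sketched in miniature: pool per-trajectory motion descriptors into a video-level feature, then fit a regressor to perceptual stiffness scales. The toy cloth simulator, the choice of descriptor, the log-stiffness perceptual scale, and the linear least-squares model below are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N_TRAJ = 15, 200  # 15 consecutive frames, as in the paper; 200 tracked points (assumed)

def trajectory_descriptor(traj):
    """Per-frame displacement magnitudes of one tracked point; shape (T-1,)."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1)

def simulate_video_feature(stiffness):
    """Toy stand-in for a cloth video: stiffer cloth -> smaller oscillations."""
    t = np.arange(T)
    descs = []
    for _ in range(N_TRAJ):
        amp = 1.0 / stiffness  # assumed inverse relation between stiffness and motion amplitude
        phase = rng.uniform(0.0, 2.0 * np.pi)
        x = amp * np.sin(0.5 * t + phase) + 0.02 * rng.standard_normal(T)
        y = amp * np.cos(0.5 * t + phase) + 0.02 * rng.standard_normal(T)
        descs.append(trajectory_descriptor(np.stack([x, y], axis=1)))
    return np.mean(descs, axis=0)  # pool descriptors over all trajectories

stiffs = np.linspace(1.0, 10.0, 30)           # physical bending stiffness values
X = np.stack([simulate_video_feature(s) for s in stiffs])
y = np.log(stiffs)                            # assumed perceptual scale

# Linear least squares as a stand-in for the paper's learned regressor.
X1 = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred = X1 @ w
```

In this sketch the descriptor keeps displacement magnitudes rather than normalized directions, so the video-level feature remains sensitive to motion amplitude, which is what distinguishes stiff from compliant cloth in the toy model.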