Publications

A Framework for Enhancing Depth Perception in Computer Graphics


Zeynep Cipiloglu, Abdullah Bulbul, Tolga Capin. APGV, 2010

This paper introduces a solution for enhancing depth perception in a given 3D computer-generated scene. For this purpose, we propose a framework that decides on the suitable depth cues for a given scene and the rendering methods which provide these cues. First, the system calculates the importance of each depth cue using a fuzzy logic based algorithm which considers the target tasks in the application and the spatial layout of the scene. Then, a knapsack model is constructed to keep the balance between the rendering costs of the graphical methods that provide these cues and their contribution to depth perception. This cost-profit analysis step selects the proper rendering methods. In this work, we also present several objective and subjective experiments which show that our automated depth enhancement system is statistically (p < 0.05) better than the other method selection techniques tested.
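
To illustrate the cost-profit analysis step, here is a minimal sketch of selecting rendering methods with a 0/1 knapsack, where each method's profit is the importance of the depth cues it provides (as the fuzzy stage would estimate) and its weight is its rendering cost. The method names, costs, and profit values are hypothetical placeholders, not the framework's actual inputs or algorithm.

```python
def select_methods(methods, cost_budget):
    """methods: list of (name, cost, profit); returns (best profit, chosen names)."""
    # Classic dynamic-programming 0/1 knapsack over integer rendering costs.
    best = [(0.0, [])] * (cost_budget + 1)  # best[c] = (profit, chosen names) within cost c
    for name, cost, profit in methods:
        for c in range(cost_budget, cost - 1, -1):  # reverse order: each method used at most once
            cand_profit = best[c - cost][0] + profit
            if cand_profit > best[c][0]:
                best[c] = (cand_profit, best[c - cost][1] + [name])
    return best[cost_budget]

if __name__ == "__main__":
    candidates = [                      # (method, cost, cue-importance profit) -- illustrative only
        ("shadow mapping", 4, 0.9),
        ("depth of field", 3, 0.5),
        ("fog", 1, 0.3),
        ("ambient occlusion", 5, 0.7),
    ]
    profit, chosen = select_methods(candidates, cost_budget=8)
    print(profit, chosen)
```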



Saliency for Animated Meshes with Material Properties


Abdullah Bulbul, Cetin Koca, Tolga Capin, Ugur Gudukbay. APGV, 2010

We propose a technique to calculate the saliency of animated meshes with material properties. The saliency computation considers multiple features of 3D meshes including their geometry, material, and motion. Each feature contributes to the final saliency map, which is view independent and can therefore be used for both view-dependent and view-independent applications. To verify our saliency calculations, we performed an experiment in which we use an eye tracker to compare the saliencies of the regions that the viewers look at with the other regions of the models. The results confirm that our saliency computation is promising. We also present several applications in which the saliency information is used.
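
The following is a minimal sketch of the general idea of combining per-feature saliency into a single view-independent map: each feature (geometry, material, motion) yields a per-vertex saliency value, and the final map is a normalized weighted sum. The feature values and weights below are hypothetical placeholders, not the paper's actual formulation.

```python
import numpy as np

def normalize(values):
    rng = values.max() - values.min()
    return (values - values.min()) / rng if rng > 0 else np.zeros_like(values)

def combine_saliency(feature_maps, weights):
    """feature_maps: dict name -> per-vertex array; weights: dict name -> float."""
    total = np.zeros_like(next(iter(feature_maps.values())), dtype=float)
    for name, values in feature_maps.items():
        total += weights.get(name, 1.0) * normalize(values.astype(float))
    return normalize(total)  # final per-vertex, view-independent saliency map

if __name__ == "__main__":
    n_vertices = 5
    maps = {
        "geometry": np.random.rand(n_vertices),  # e.g. curvature-based term (illustrative)
        "material": np.random.rand(n_vertices),  # e.g. material-contrast term (illustrative)
        "motion":   np.random.rand(n_vertices),  # e.g. per-vertex motion term (illustrative)
    }
    print(combine_saliency(maps, {"geometry": 1.0, "material": 0.5, "motion": 1.5}))
```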


A Perceptual Approach for Stereoscopic Rendering Optimization


Abdullah Bulbul, Zeynep Cipiloglu, Tolga Capin. Computers & Graphics, 34(2), pp. 145-157, 2010

The traditional way of stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually-based approach for accelerating stereoscopic rendering. This optimization approach is based on the Binocular Suppression Theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high quality view has more intensity contrast. For this reason, we performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and thus can be used for stereoscopic rendering.
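
As a rough illustration of the acceptance criterion implied above, the sketch below keeps a simplification applied to one eye's view only if it does not increase that view's intensity contrast, measured here crudely as the standard deviation of luminance. The contrast measure, the stand-in "simplification", and the decision rule are illustrative assumptions, not the experimental procedure from the paper.

```python
import numpy as np

def luminance(rgb):
    # Rec. 709 luma weights; rgb is an HxWx3 array with values in [0, 1].
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def allow_one_view_simplification(original_rgb, simplified_rgb):
    """Return True if the simplified rendering may replace one eye's view."""
    return luminance(simplified_rgb).std() <= luminance(original_rgb).std()

if __name__ == "__main__":
    full = np.random.rand(64, 64, 3)  # stand-in for the full-quality render of one view
    flat = full.mean(axis=(0, 1), keepdims=True) * np.ones_like(full)  # crude "simplification"
    print(allow_one_view_simplification(full, flat))
```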