A Unified Deep Learning Approach for Foveated Rendering & Novel View Synthesis from Sparse
RGB-D Light Fields
Vineet Thumuluri, Mansi Sharma
In Proceedings of 2020 International Conference on 3D Immersion (IC3D 2020), Brussels, Belgium
Abstract
Near-eye light field displays address visual discomfort in head-mounted displays (HMDs) by presenting accurate depth and focal cues. However, light field HMDs require rendering the scene from a large number of viewpoints. This paper tackles the computational challenge of rendering sharp imagery of the foveal region while reproducing the retinal defocus blur that correctly drives accommodation. We designed a novel end-to-end convolutional neural network that leverages properties of human vision to perform both foveated reconstruction and view synthesis using only 1.2% of the total light field data. The proposed architecture comprises a log-polar sampling scheme followed by an interpolation stage and a convolutional neural network. To the best of our knowledge, this is the first attempt to synthesize the entire light field from sparse RGB-D inputs while simultaneously addressing foveated rendering for computational displays. Our algorithm achieves high fidelity in the fovea without any perceptible artifacts in the peripheral regions, and its foveal performance is comparable to state-of-the-art view synthesis methods despite using around 10× less light field data.
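The abstract does not specify the paper's exact sampling parameters, so the following is only a minimal illustrative sketch of the log-polar sampling idea it describes: rings spaced logarithmically around a fixation point, so sampling density is highest at the fovea and falls off toward the periphery. The function name, ring/angle counts, and nearest-neighbor lookup are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def log_polar_sample(image, center, n_rings=64, n_angles=128, r_min=1.0):
    """Sample an image on a log-polar grid centred on the fixation point.

    Rings are spaced logarithmically in radius, so samples are dense near
    the center (fovea) and sparse in the periphery, mimicking the
    retina's spatially varying acuity. Returns an (n_rings, n_angles)
    array of nearest-neighbor samples.
    """
    h, w = image.shape[:2]
    cy, cx = center
    # Largest radius needed to reach the farthest image corner.
    r_max = np.hypot(max(cy, h - 1 - cy), max(cx, w - 1 - cx))
    # Logarithmically spaced radii, uniformly spaced angles.
    radii = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rings))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # Convert polar coordinates back to (clipped) pixel indices.
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]
```

In a full pipeline along the lines the abstract describes, such a sampled grid would then be interpolated back to image space and refined by the network; here the sketch only shows the resolution reduction step.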
Citation
V. Thumuluri and M. Sharma, "A Unified Deep Learning Approach for Foveated Rendering & Novel View Synthesis from Sparse RGB-D Light Fields," 2020 International Conference on 3D Immersion (IC3D), Brussels, Belgium, 2020, pp. 1-8, doi: 10.1109/IC3D51119.2020.9376340.