More than 70 years after the invention of holography, watching an analog film hologram still feels like pure magic. Probably its most fascinating feature is the seemingly perfect reconstruction of the recorded object, which can be viewed from every possible perspective. Beyond its technical appeal, analog holography has also become a unique art form over the years, and the works of “holography artists” can be found in major museum collections today. Being “active material,” however, these works of art are subject to constant degradation, which raises the question of how the unique experience of viewing them can be “digitally preserved” for future generations, ideally without capturing terabytes of data.
In this contribution, we introduce a method to render the visual contents of analog film holograms from sparse image data that can be captured in seconds with off-the-shelf devices (e.g., mobile phones). Our approach is based on Neural Radiance Fields (NeRF), a learning-based method for generating novel views of complex volumetric scenes. We show free-viewpoint videos of captured holograms as well as a quantitative analysis of viewpoint consistency.
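To make the underlying idea concrete, the following is a minimal, illustrative sketch of NeRF-style volume rendering, not our actual pipeline: an MLP maps a 3D point and viewing direction to a density and a color, and colors are composited along each camera ray. The network size, encoding frequencies, and sample counts below are illustrative assumptions.

```python
# Minimal NeRF-style volume rendering sketch (illustrative; hyperparameters are assumptions).
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs=6):
    """Map coordinates to sines/cosines of increasing frequency."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype)
    angles = x[..., None] * freqs                      # (..., dim, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., dim * 2 * num_freqs)


class TinyNeRF(nn.Module):
    """Small MLP: encoded (position, direction) -> (density, RGB)."""

    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * 2 * num_freqs * 2                 # encoded position + direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                      # density + RGB
        )
        self.num_freqs = num_freqs

    def forward(self, points, dirs):
        feats = torch.cat(
            [positional_encoding(points, self.num_freqs),
             positional_encoding(dirs, self.num_freqs)], dim=-1)
        out = self.mlp(feats)
        sigma = torch.relu(out[..., 0])                # non-negative density
        rgb = torch.sigmoid(out[..., 1:])              # colors in [0, 1]
        return sigma, rgb


def render_rays(model, origins, directions, near=0.0, far=1.0, n_samples=64):
    """Composite colors along each ray with the standard quadrature rule."""
    t = torch.linspace(near, far, n_samples)                       # sample depths
    points = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
    dirs = directions[:, None, :].expand_as(points)
    sigma, rgb = model(points, dirs)
    delta = torch.cat([t[1:] - t[:-1], torch.tensor([1e10])])      # segment lengths
    alpha = 1.0 - torch.exp(-sigma * delta)                        # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)             # transmittance
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=-2)                  # (n_rays, 3)


# Toy usage: render a few random rays with an untrained model.
model = TinyNeRF()
origins = torch.zeros(4, 3)
directions = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
print(render_rays(model, origins, directions).shape)  # torch.Size([4, 3])
```

In our setting, the training images are the sparse photographs of the hologram; the trained field can then be queried from arbitrary viewpoints to produce the free-viewpoint videos shown below.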
Spiderman Hologram Novel View Synthesis With NeRF
Train Hologram Novel View Synthesis With NeRF
* This work has been accepted to the DGaO'23 Proceedings.
** This project was developed while affiliated with the Department of Computer Science at Northwestern University.