XRNeRF 2023

June 18, 2023

East Ballroom B

Vancouver Convention Center, Canada

Overview

A longstanding problem in computer graphics is the realistic rendering of virtual worlds. Generating highly realistic 3D worlds at scale is an important piece of the Metaverse puzzle. However, creating such worlds and the content inside them can be costly and time-consuming.


In 2020, the introduction of neural volume rendering, also known as NeRF (Neural Radiance Fields), sparked an explosion of new work with direct applicability to the future metaverse. At CVPR 2022, more than 50 accepted papers built on NeRF, improving its fidelity, efficiency, and scalability. We believe NeRF is one of the most viable solutions for addressing the growing content needs of the Metaverse.
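As background for readers less familiar with the technique, NeRF represents a scene as a neural network that maps a 3D position and viewing direction to a volume density and color, and renders images by compositing samples along camera rays. The standard discretized volume rendering equation from the original NeRF paper (Mildenhall et al., 2020) is reproduced below for reference only; the works presented at this workshop build on many variants of this formulation.

\[
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad
T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right),
\]

where \(\hat{C}(\mathbf{r})\) is the predicted color of camera ray \(\mathbf{r}\), \(\sigma_i\) and \(\mathbf{c}_i\) are the density and color predicted at the \(i\)-th sample along the ray, and \(\delta_i\) is the spacing between adjacent samples; the network is optimized so that rendered ray colors match the captured photographs.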


Many recent advances have made NeRF a strong content generation tool. These advances include, but are not limited to: representing arbitrary scenes, including unbounded scenes at city scale; running on mobile devices; higher-fidelity representation of objects and scenes; and generative capabilities. This workshop is an opportunity to showcase work on NeRF that advances the key areas driving Metaverse development.


The aim of this workshop is to bring together industry innovators and academic leaders from around the world to discuss the problems, applications, and overall state of NeRF technology. Specifically, we would like to cover recent advances in NeRF across the three areas that need significant gains for the metaverse: scale, efficiency, and fidelity.

Speakers

Christoph Lassner

Research Scientist Lead at Epic Games


Matthias Niessner

Technical University of Munich


Ben Poole

Google Brain


Lourdes Agapito

Professor of 3D Vision, University College London


Amit Jain

Co-Founder, Luma AI


Alex Yu

Co-Founder, Luma AI

Spotlight

3D-Aware Video Generation [slides]

Sherwin Bahmani (TU Darmstadt)*; Jeong Joon Park (Stanford University); Despoina Paschalidou (Stanford); Hao Tang (ETH Zurich); Gordon Wetzstein (Stanford University); Leonidas Guibas (Stanford University); Luc Van Gool (ETH Zurich); Radu Timofte (University of Wurzburg & ETH Zurich)


BundleRecon: Ray Bundle-Based 3D Neural Reconstruction [slides]

Weikun Zhang (Zhejiang University); Jianke Zhu (Zhejiang University)*  


DreamSparse: Escaping from Plato’s Cave with 2D Diffusion Model given Sparse Views [slides]

Paul Yoo (The University of Tokyo)*; Jiaxian Guo (The University of Tokyo); Xin Zhang (The University of Tokyo); Yutaka Matsuo (The University of Tokyo); Shixiang Gu (Cambridge)


FusedRF: Fusing Multiple Radiance Fields [slides]

Rahul Goel (IIIT Hyderabad)*; Sirikonda Dhawal (IIIT Hyderabad); Rajvi Shah (Meta Reality Labs); P. J.  Narayanan (IIIT-Hyderabad)


HDR-NeRF--: Learning High Dynamic Range View Synthesis With Unknown Exposure Settings [slides]

Nam Nguyen (California Polytechnic State University, San Luis Obispo); Edward Du (California Polytechnic State University, San Luis Obispo); Jonathan Ventura (California Polytechnic State University, San Luis Obispo)*


nerf2nerf: Pairwise Registration of Neural Radiance Fields [slides]

Leili Goli (University of Toronto)*; Daniel Rebain (Google Inc.); Sara Sabour (Google); Animesh Garg (University of Toronto, Vector Institute, Nvidia); Andrea Tagliasacchi (Google Brain and University of Toronto) 


One-Shot Neural Fields for 3D Object Understanding [slides]

Valts Blukis (NVIDIA); Taeyeop Lee (KAIST); Jonathan Tremblay (NVIDIA)*; Bowen Wen (NVIDIA); In So Kweon (KAIST, Korea); Kuk-Jin Yoon (KAIST); Dieter Fox (NVIDIA); Stan Birchfield (NVIDIA) 


Partial-View Object View Synthesis via Filtering Inversion [slides]

Fan-Yun Sun (Stanford University); Jonathan Tremblay (NVIDIA)*; Valts Blukis (NVIDIA); Kevin Lin (Stanford University); Danfei Xu (Georgia Institute of Technology); Boris Ivanovic (NVIDIA Research); Peter Karkus (NVIDIA Research); Stan Birchfield (NVIDIA); Dieter Fox (NVIDIA); Ruohan Zhang (Stanford University); Yunzhu Li (Stanford University & University of Illinois at Urbana-Champaign); Jiajun Wu (Stanford University); Marco Pavone (Stanford University); Nick Haber (Stanford University)


SeaThru-NeRF: Neural Radiance Fields in Scattering Media [slides]

Deborah B.H. Levy (University of Haifa)*; Amit Peleg (Technion - Israel Institute of Technology); Naama Pearl (University of Haifa); Dan Rosenbaum (DeepMind); Derya Akkaynak (University of Haifa); Simon Korman (University of Haifa); Tali Treibitz (University of Haifa)


SPIDR: SDF-based Neural Point Fields for Illumination and Deformation [slides]

Ruofan Liang (University of Toronto)*; Jiahao Zhang (University of Toronto); Haoda Li (University of California, Berkeley); Chen Yang (Shanghai Jiao Tong University); Yushi Guan (University of Toronto); Nandita Vijaykumar (University of Toronto) 

Organizers

Jon Barron (Google Research)

Jon Barron is a senior staff research scientist at Google, where he works on computer vision and machine learning. He received a PhD in Computer Science from the University of California, Berkeley in 2013, where he was advised by Jitendra Malik, and he received an Honours BSc in Computer Science from the University of Toronto in 2007. He received a National Science Foundation Graduate Research Fellowship in 2009, the C.V. Ramamoorthy Distinguished Research Award in 2013, and the PAMI Young Researcher Award in 2020. His work has received awards at ECCV 2016, TPAMI 2016, ECCV 2020, ICCV 2021, CVPR 2022, and from the Communications of the ACM (2022).

Angjoo Kanazawa (UC Berkeley)

Angjoo Kanazawa is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. She leads the Kanazawa AI Research (KAIR) lab under BAIR. She received a PhD in Computer Science from the University of Maryland, College Park, where she was advised by David Jacobs. While in graduate school, she visited the Max Planck Institute in Tübingen, Germany, under the guidance of Michael Black. Prior to her faculty position, she worked as a Research Scientist at Google Research and as a BAIR postdoc at UC Berkeley, advised by Jitendra Malik, Alexei A. Efros, and Trevor Darrell. Kanazawa's research lies at the intersection of computer vision, computer graphics, and machine learning. She is focused on building systems that can capture, perceive, and understand the complex ways that people and animals interact dynamically with the 3D world, and that can use this information to correctly identify the content of 2D photos and videos portraying scenes from everyday life. She co-organized the first and second CV4Animals: Animal Behavior Tracking and Modeling workshops, as well as the AI for Content Creation and 3D Scene Understanding for Vision, Graphics, and Robotics workshops at CVPR’21, and many others, including the Women in Computer Vision workshop at ECCV’18.

Fernando De la Torre  (CMU) 

Fernando De la Torre is a Research Assistant Professor at Carnegie Mellon University. He is the author of more than 200 peer-reviewed publications in top conferences and journals on computer vision and machine learning. He has served as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence and regularly acts as an area chair for ECCV, CVPR, and ICCV. He founded FacioMetrics, which was acquired by Facebook in 2016. At Facebook, he led the efforts to develop technology for facial feature tracking, person segmentation, and other real-time on-device capabilities for people augmentation in Messenger, Instagram, Facebook, and Portal.

Peter Vajda (Meta Reality Labs)

Peter is a Research Manager in computer vision at Meta. Before joining Meta in 2014, he was a Visiting Assistant Professor in Professor Bernd Girod’s group at Stanford University, Stanford, USA, where he worked on personalized multimedia systems and mobile visual search. He received an M.Sc. in Computer Science from the Vrije Universiteit, Amsterdam, Netherlands, and an M.Sc. as a Program Designer Mathematician from Eötvös Loránd University, Budapest, Hungary. He completed his Ph.D. with Prof. Touradj Ebrahimi at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2012.

Daeil Kim (Meta Reality Labs)

Daeil is an engineering manager leading synthetic data efforts at Meta. Before joining Meta in 2021, he was the CEO and founder of AI.Reverie, a startup focused on developing a platform for synthetic data generation for a variety of real-world computer vision problems; before that, he was an ML scientist at The New York Times. He received his Ph.D. in Computer Science from Brown University in 2014, where he focused on scalable machine learning algorithms for Bayesian nonparametric models with Erik Sudderth. He has published several papers at NeurIPS, ICML, and AISTATS, and his academic work also spanned statistical neuroimaging techniques with a focus on neuropsychiatry, with publications in NeuroImage, Human Brain Mapping, and several other venues.

Aayush Prakash (Meta Reality Labs)

Aayush is an engineering manager who leads the machine learning team within the synthetic data organization at Reality Labs, Meta. His group works on problems at the intersection of machine learning, computer vision, and computer graphics, tackling challenges in domain adaptation, neural rendering, and other sim2real problems for mixed reality. Before joining Meta, he was the head of machine learning at the synthetic data startup AI.Reverie. Prior to that, he spent six years at Nvidia working on synthetic data research for computer vision, where his group delivered some of the prominent works in synthetic data creation. He graduated with a B.Tech in E&ECE from the Indian Institute of Technology (IIT) Kharagpur, India, in 2010, and an MASc in Computer Engineering from the University of Waterloo, Canada, in 2013.

Schedule

June 18, 2023

Venue: East Ballroom B, Vancouver Convention Center

Sessions

08:30-09:00 am Christoph Lassner

09:00-09:30 am Lourdes Agapito

09:30-10:00 am Ben Poole

10:00-10:45 am Poster Spotlight

10:45-11:00 am Break

11:00-11:30 am Matthias Niessner

11:30 am-12:00 pm Amit Jain & Alex Yu

12:00-12:30 pm Panel Discussion

Poster Spotlight

2-3 minute presentations by the authors


Program Committee

Bichen Wu (Meta)

Peizhao Zhang (Meta)

Mihir Jain (Meta)

Shingo Takagi (Meta)

Zijian He (Meta)

Justin Theiss (Meta)

Ido Gattegno (Meta)

Ayush Saraf (Meta)

Sarah Watson (Meta)

Cheng Zhang (CMU) 

Yehonathan Litman (CMU)

Lior Yariv (Weizmann Institute of Science)