Neural Fields across Fields: 

Methods and Applications of Implicit Neural Representations

ICLR 2023 Workshop

(Thursday 4th May 2023, Kigali, Rwanda)

Invited Speakers

Eric R. Chan


is a PhD student at the Stanford Computational Imaging Lab, advised by Gordon Wetzstein and Jiajun Wu. His research interests include 3D vision and graphics, and he has led several impactful research projects on generative modelling of neural radiance fields.

Angjoo Kanazawa


is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Her research lies at the intersection of computer vision, computer graphics, and machine learning. She is focused on building a system that can capture, perceive, and understand the complex ways that people and animals interact dynamically with the 3D world, and can use that information to correctly identify the content of 2D photos and videos portraying scenes from everyday life. She has been involved in various important contributions to the neural fields literature.

Khemraj Shukla


received his Ph.D. in computational geophysics, during which he studied high-order numerical methods for hyperbolic systems and completed his research with the GMIG Group at Rice University. He is an Assistant Professor in the Division of Applied Mathematics at Brown University, Providence, Rhode Island, USA. His research focuses on the development of scalable codes for heterogeneous computing architectures.

Matthew Tancik


is a PhD student in computer science and electrical engineering at UC Berkeley. His research interests include computer vision, computational imaging, and graphics, with a particular focus on 3D reconstruction. He is a first author on the seminal NeRF paper and has authored several other important papers on neural fields and 3D scene representations.

Jakob Uszkoreit


is a co-founder of Inceptive, a startup for designing RNA molecules via highly scalable experiments and deep learning. Before Inceptive, he led the Berlin branch of the Brain team in Google Research, built the language understanding team of the Google Assistant and worked on Google Translate during its early days. He is also the co-author of the seminal paper that introduced the Transformer architecture, and has been involved in research that builds scene representations using Transformers via neural radiance fields.

Xiaolong Wang


is an Assistant Professor in the ECE department at the University of California, San Diego. He is affiliated with the CSE department, the Center for Visual Computing, the Contextual Robotics Institute, the Artificial Intelligence Group, and the TILOS NSF AI Institute. His research focuses on the intersection of computer vision and robotics. He is particularly interested in learning visual representations from videos in a self-supervised manner and using these representations to guide robot learning.

Ellen Zhong


is an Assistant Professor of computer science at Princeton University. Her research interests lie at the intersection of AI and biology, with a particular focus on protein structure. She completed her PhD at MIT, working with Bonnie Berger and Joey Davis on neural methods for 3D reconstruction of dynamic protein structures from cryo-EM images. She has done pioneering work on applying neural fields to problems in biology.