Neural Fields across Fields:
Methods and Applications of Implicit Neural Representations
ICLR 2023 Workshop
(Thursday 4th May 2023, Kigali, Rwanda)
News
(26/04/2023) Best papers announced
(26/04/2023) Talk titles announced
(26/04/2023) Accepted papers uploaded!
(03/03/2023) Decisions announced
(20/12/2022) Call for papers announced
(28/11/2022) Workshop formally accepted!
Links
Q&A links for each speaker in the schedule
Addressing problems across science and engineering disciplines often requires solving optimization problems, including machine learning from large training datasets. One class of methods has recently gained significant attention for problems in computer vision and visual computing: coordinate-based neural networks that parameterize a field, such as a neural network that maps a 3D spatial coordinate to a flow field in fluid dynamics, or to a colour and density field in 3D scene representation. Such networks are often referred to as neural fields. The application of neural fields in visual computing has driven remarkable progress on computer vision problems such as 3D scene reconstruction and generative modelling, yielding more accurate, higher-fidelity, more expressive, and computationally cheaper solutions. This exciting progress has also led to the creation of a vibrant research community.
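To make the idea concrete, below is a minimal sketch of such a coordinate-based network in PyTorch. The architecture (a plain ReLU MLP), the layer widths, and the four-dimensional output (an RGB colour plus a density, as in radiance-field-style scene representations) are illustrative assumptions, not a reference implementation from any particular paper.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """A coordinate-based MLP: maps a 3D spatial coordinate to field values.

    Here the output is assumed to be (r, g, b, density); swapping the
    input/output dimensions adapts the same idea to other modalities.
    """
    def __init__(self, in_dim=3, hidden_dim=256, out_dim=4, num_layers=4):
        super().__init__()
        layers = []
        dim = in_dim
        for _ in range(num_layers):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
            dim = hidden_dim
        layers.append(nn.Linear(dim, out_dim))  # e.g. (r, g, b, density)
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        # coords: (..., 3) spatial locations -> (..., 4) field values
        return self.net(coords)

# Querying the field at a batch of 3D points:
field = NeuralField()
points = torch.rand(1024, 3)   # 1024 random coordinates in [0, 1)^3
values = field(points)         # continuous field values at those points
```

Because the field is queried pointwise, the same network represents the signal at arbitrary resolution; for example, a 2D-coordinate-to-colour variant of this sketch would represent an image.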
Given that neural fields can represent spatio-temporal signals in arbitrary input/output dimensions, they are a highly general tool for reasoning about real-world observations, be it common modalities in machine learning and vision such as images, 3D shapes, 3D scenes, video, and speech/audio, or more specialized modalities such as flow fields in physics, scenes in robotics, medical images in computational biology, and weather data in climate science. However, although some adjacent fields such as robotics have recently seen increased interest in this area, most current research is still confined to visual computing, and the application of neural fields in other disciplines is in its early stages.
We thus propose a workshop with the following key goals:
• Bring together researchers from a diverse set of backgrounds, including machine learning, computer vision, robotics, applied mathematics, physics, chemistry, biology and climate science, to exchange ideas and expand the domains of application of neural fields, including but not limited to: vision: image/video/scene/3D geometry reconstruction; robotics: face/body/hand modelling, localization, planning, control; audio: audio/speech processing and generation; physics: solving PDEs; biology: protein structure reconstruction, medical imaging; climate science: weather/climate prediction; general: compression.
• Highlight and discuss recent trends, advances and limitations of neural fields, in both theory and methodology, including but not limited to: conditioning, optimization, meta-learning, representation of the input space, architecture, generative modelling, spatial/temporal transformations, neural fields as data, and sparsification.
• Provide a forum for the ICLR community to be introduced to and discuss the exciting and growing area of neural fields, and to socialize with a diverse group of peers who share their research interests. As prospective participants, we primarily target machine learning researchers interested in the questions and foci outlined above. Specific target communities within machine learning include, but are not limited to: robotics, visual computing, computational biology, computational cognitive science, deep learning, and optimization.
Key fundamental questions that we aim to address in this workshop are:
• How can we encourage and facilitate the exchange of ideas and collaboration across the different research fields that can benefit from applying neural fields?
• How can we improve the architectures, optimization and computation/memory efficiency of neural fields?
• Which metrics and methods should we use to evaluate improvements to neural fields? For example, is reconstruction accuracy measured by PSNR sufficient, and if not, in which cases is it insufficient?
• When should we avoid using neural fields? For example, does it make sense to use neural fields for discrete data such as text and graphs?
• Which as-yet-unexplored tasks could we tackle with neural fields?
• What representations can we use for neural fields in order to extract high-level information from them and solve downstream tasks? What novel architectures do we need to extract such information from these representations?
Organizers
Emilien Dupont
(University of Oxford)
Hyunjik Kim
(DeepMind)
Thu Nguyen-Phuoc
(Meta)
Jonathan Richard Schwarz
(DeepMind; UCL)