Machine Learning in Infinite Dimensions
Bath, 5-9 August 2024
Description
Lifting high-dimensional problems to an infinite-dimensional space and designing algorithms in that setting has been a fruitful idea in many areas of applied mathematics, including inverse problems, optimisation, and partial differential equations. This approach, sometimes referred to as "optimise-then-discretise", allows the development of algorithms that are inherently dimension- and discretisation-independent and can perform better in high dimensions. In the context of machine learning, this paradigm can be rephrased as "learn-then-discretise".
The Machine Learning in Infinite Dimensions workshop aims to bring together researchers who work on different aspects of infinite-dimensionality in machine learning. Topics include, but are not restricted to, Gaussian process regression, operator learning, function spaces of neural networks, and measure transport.
Confirmed speakers
Ricardo Baptista (California Institute of Technology)
Nicolas Boullé (University of Cambridge/Imperial College London)
Christoph Brune (University of Twente)
Andrew B Duncan (Imperial College London)
Jean Feydy (INRIA)
Manuela Girotti (Emory University)
Sophie Langer (University of Twente)
Caroline Moosmueller (University of North Carolina at Chapel Hill)
Nicole Mücke (Technical University of Braunschweig)
Nicholas Nelsen (California Institute of Technology/Massachusetts Institute of Technology)
Sebastian Neumayer (Chemnitz University of Technology)
Domènec Ruiz-Balet (Imperial College London)
Cong Shi (University of Vienna)
Christoph Schwab (ETH Zürich)
Gabriele Steidl (Technical University of Berlin)
Marius Zeinhofer (Freiburg University Hospital)
Organisers
Tatiana Bubba (University of Bath)
Bamdad Hosseini (University of Washington)
Yury Korolev (University of Bath)
Matthew Thorpe (University of Warwick)