Photo by Adrian Michael, CC BY 2.5
Description
Lifting high-dimensional problems to an infinite-dimensional space and designing algorithms in that setting has been a fruitful idea in many areas of applied mathematics, including inverse problems, optimisation, and partial differential equations. This approach, sometimes referred to as "optimise-then-discretise", allows the development of algorithms that are inherently dimension- and discretisation-independent and can perform better in high dimensions. In the context of machine learning, this paradigm can be rephrased as "learn-then-discretise".
This is the second Machine Learning in Infinite Dimensions workshop; the first was held at the University of Bath in 2024. We aim to bring together researchers who work on different aspects of infinite-dimensionality in machine learning. Topics include, but are not restricted to, Gaussian process regression, operator learning, function spaces of neural networks, and measure transport.
Confirmed speakers
Nicolas Boullé (Imperial College London)
Takashi Furuya (Doshisha University, Kyoto)
Lukas Gonon (University of St. Gallen)
Tim Jahn (TU Berlin)
Yury Korolev (University of Bath)
Samuel Lanthaler (University of Vienna)
Jonas Latz (University of Manchester)
Siddhartha Mishra (ETH Zürich)
Nicole Mücke (TU Braunschweig)
Nicholas Nelsen (Cornell University)
Ariel Neufeld (Nanyang Technological University)
Houman Owhadi (California Institute of Technology)
Michael Puthawala (South Dakota State University)
Marius Zeinhofer (ETH Zürich)
Zhengang Zhong (University of Warwick)
PhD talks
Simone Brivio (Politecnico di Milano)
Hefin Lambley (University of Warwick)
Anna Shalova (TU Eindhoven)
Tobias Weber (University of Tübingen)
Josephine Westermann (University of Heidelberg)
Organisers
Yury Korolev (University of Bath)
Siddhartha Mishra (ETH Zürich)
Christoph Schwab (ETH Zürich)
Matthew Thorpe (University of Warwick)