Program

The scientific activities of the conference will feature plenary talks and special sessions.

The schedule can be found here as a PDF or below. A list of abstracts can be found here.

All events will be held in the Blocker Building, Room 166, unless noted otherwise. Registration and breakfast will be in Blocker 140 and 141.

Please register here for the conference dinner.

Plenary Speakers

Yuejie Chi (Carnegie Mellon University)

Roman Vershynin (University of California, Irvine)

Martin Wainwright (Massachusetts Institute of Technology)

Stephen Wright (University of Wisconsin-Madison)

Special Sessions

Five special sessions are planned based on CAMDA's pilot projects:

Contemporary Optimal Recovery

This session focuses on the removal of statistical assumptions and on the analysis of worst-case performance, while ensuring the computability and robustness of the learning/recovery schemes. The speakers are Andrea Bonito, Sergiy Borodachov, Houman Owhadi, and Grigoris Paouris.

Neural Network Approximation

This session focuses on the types of functions that can be well approximated by neural networks of a given architecture and on the algorithms through which such neural networks can be found efficiently in practice. The speakers are Bruno Després, Hrushikesh Mhaskar, Guergana Petrova, and David Stewart.

Learning in Data-Poor Conditions

This session focuses on the exploitation of a priori structures to learn/recover from few data and on active/directed learning to address the choice of the few data sites, ideally chosen deterministically. The speakers are Santhosh Karnik, Anna Ma, Eric Price, and Ming Zhong.

Deep Learning Methods for Partial Differential Equations

This session focuses on model-reduction methods to efficiently estimate parameters from data and on creating mathematics and numerics to understand how best to use deep-learning-inspired methods. The speakers are Ulisses Braga-Neto, Jiequn Han, and Yulong Lu.

Radial Basis Function and Gaussian Process Approximation

This session focuses on sampling and recovery in reproducing kernel Hilbert spaces and on Gaussian processes with non-Gaussian observations. The speakers are Joe Guinness, Francis Narcowich, Ozge Surer, and Rui Tuo.

Tentative Schedule

(subject to change closer to the event)

NNA = Neural Network Approximation
LDPC = Learning in Data-Poor Conditions
RBF/GPA = Radial Basis Function and Gaussian Process Approximation
PDE = Deep Learning Methods for Partial Differential Equations
COR = Contemporary Optimal Recovery
