10:10am - 10:50am
Dr. Jonathan Siegel
Mathematics, Texas A&M University
Title: Approximation Theory of ReLU^k Neural Networks
Abstract: A natural space of functions that can be efficiently approximated by shallow neural networks is the variation space corresponding to the dictionary of single-neuron outputs. We will precisely define this space and study its approximation properties. Specifically, we develop techniques for bounding the metric entropy and n-widths of the unit ball in this variation space. These are fundamental quantities in approximation theory that control the limits of linear and nonlinear approximation. Consequences of these results include the optimal approximation rates attainable by shallow neural networks, the fact that shallow neural networks dramatically outperform linear methods of approximation, and indeed that they outperform all stable methods of approximation on the associated variation space. Finally, we introduce a class of greedy algorithms and show that they construct asymptotically optimal shallow neural network approximations.
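(For readers unfamiliar with the terminology, one standard way to make the variation space precise is as the gauge of the closed, symmetric convex hull of the dictionary; the sketch below uses illustrative notation for the activation and the normalization of the dictionary, which need not match the speaker's exact setup. With activation $\sigma_k(t) = \max(0,t)^k$ and the dictionary of single-neuron outputs

    \mathbb{D}_k = \{\, \sigma_k(\omega \cdot x + b) \;:\; \|\omega\| = 1,\ |b| \le c \,\},

the variation norm and variation space are

    \|f\|_{\mathcal{K}_1(\mathbb{D}_k)} = \inf\{\, t > 0 : f \in t\,\overline{\mathrm{conv}}(\pm\mathbb{D}_k) \,\},
    \qquad
    \mathcal{K}_1(\mathbb{D}_k) = \{\, f : \|f\|_{\mathcal{K}_1(\mathbb{D}_k)} < \infty \,\},

so that, under this definition, functions in the unit ball are exactly the limits of convex combinations of signed single-neuron outputs.)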