Joint TILOS and OPTML++ Seminar

This seminar is dedicated to work at the intersection of optimization and machine learning, while keeping an eye out for wider connections to related areas (e.g., statistics, signal processing, robotics, information theory, functional analysis, geometry, etc.; this wider net is where the "++" in the name comes from). OPTML++ seminars cover both novel developments and fundamental concepts.

This semester the seminar takes place every other Wednesday at 4pm ET (with a few exceptions). You can also find recordings of most past talks in the "Past Talks" section. 

Announcements (and the Zoom link) for this group are distributed on our mailing list. If possible, please use your institutional email address when joining.

Next Talk

Wednesday, November 8, 2023 at 11am ET

Title: Optimization, Robustness and Privacy in Deep Neural Networks: Insights from the Neural Tangent Kernel

Speaker: Marco Mondelli

Institute of Science and Technology Austria

Abstract. A recent line of work has analyzed the properties of deep over-parameterized neural networks through the lens of the Neural Tangent Kernel (NTK). In this talk, I will show how concentration bounds on the NTK (and, specifically, on its smallest eigenvalue) provide insight into (i) the optimization of the network via gradient descent, (ii) its adversarial robustness, and (iii) its privacy guarantees.
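As a toy illustration of the central object here (not code from the talk), the sketch below forms the empirical NTK Gram matrix K = J Jᵀ of a small two-layer ReLU network in JAX and computes its smallest eigenvalue; the architecture and the helper names (init_params, flat_grad) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the talk): empirical NTK Gram matrix
# of a tiny two-layer ReLU network and its smallest eigenvalue.
import jax
import jax.numpy as jnp

def init_params(key, d=10, m=256):
    k1, k2 = jax.random.split(key)
    W = jax.random.normal(k1, (m, d)) / jnp.sqrt(d)  # hidden-layer weights
    a = jax.random.normal(k2, (m,)) / jnp.sqrt(m)    # output weights
    return (W, a)

def f(params, x):
    W, a = params
    return jnp.dot(a, jax.nn.relu(W @ x))  # scalar network output

def flat_grad(params, x):
    # Gradient of the output w.r.t. all parameters, flattened into one vector.
    g = jax.grad(f)(params, x)
    return jnp.concatenate([leaf.ravel() for leaf in jax.tree_util.tree_leaves(g)])

params = init_params(jax.random.PRNGKey(0))
X = jax.random.normal(jax.random.PRNGKey(1), (32, 10))  # n = 32 inputs, d = 10

J = jax.vmap(lambda x: flat_grad(params, x))(X)  # Jacobian, shape (n, p)
K = J @ J.T                                      # empirical NTK Gram matrix
print(jnp.linalg.eigvalsh(K)[0])                 # smallest eigenvalue (eigvalsh
                                                 # returns them in ascending order)
```

A strictly positive smallest eigenvalue of this Gram matrix is exactly the condition under which, in the NTK regime, gradient descent drives the training loss to zero.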

I will start by proving tight bounds on the smallest eigenvalue of the NTK for deep neural networks with minimum over-parameterization. This implies that the network optimized by gradient descent interpolates the training dataset (i.e., reaches 0 training loss) as soon as the number of parameters is information-theoretically optimal. Next, I will focus on two properties of the interpolating solution: robustness and privacy. A thought-provoking paper by Bubeck and Sellke has proposed a "universal law of robustness": smoothly interpolating the data necessarily requires many more parameters than simple memorization. By providing sharp bounds on random features (RF) and NTK models, I will show that, while the RF model is never robust (regardless of the over-parameterization), the NTK model saturates the universal law of robustness, addressing a conjecture by Bubeck, Li and Nagaraj. Finally, I will study the safety of RF and NTK models against a family of powerful black-box information retrieval attacks: the proposed analysis shows that safety provably strengthens as the generalization capability increases, unveiling the roles of the model and of its activation function.
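For context, the Bubeck–Sellke law of robustness says, informally, that any model with p parameters fitting n generic d-dimensional data points below the noise level must satisfy

    \mathrm{Lip}(f) \;\gtrsim\; \sqrt{nd / p},

so smooth interpolation requires p on the order of nd, far beyond the p ≈ n sufficient for plain memorization. "Saturating" the law means matching this bound up to lower-order factors.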

Bio. Marco Mondelli received the B.S. and M.S. degrees in Telecommunications Engineering from the University of Pisa, Italy, in 2010 and 2012, respectively. In 2016, he obtained his Ph.D. degree in Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He is currently an Assistant Professor at the Institute of Science and Technology Austria (ISTA). Prior to that, he was a Postdoctoral Scholar in the Department of Electrical Engineering at Stanford University, USA, from February 2017 to August 2019. He was also a Research Fellow with the Simons Institute for the Theory of Computing, UC Berkeley, USA, for the program on Foundations of Data Science from August to December 2018. His research interests include data science, machine learning, information theory, and modern coding theory. He is the recipient of a number of fellowships and awards, including the Jack K. Wolf ISIT Student Paper Award in 2015, the STOC Best Paper Award in 2016, the EPFL Doctorate Award in 2018, the Simons-Berkeley Research Fellowship in 2018, the Lopez-Loreta Prize in 2019, and the Information Theory Society Best Paper Award in 2021.