Assistant Professor at the Courant Institute of Mathematical Sciences, NYU. I study deep neural networks and related statistical models to develop a theory of deep learning.
I was previously a PhD student at EPFL under the supervision of Clément Hongler.
Publications (Google Scholar):
Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions, Arthur Jacot, ICLR 2023. [conference paper] [arXiv link]
Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity, Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel, NeurIPS 2022. [arXiv link]
Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity, Arthur Jacot, François Ged, Franck Gabriel, Berfin Simsek, Clément Hongler, 2022. [arXiv link]
DNN-Based Topology Optimization: Spatial Invariance and Neural Tangent Kernel, Benjamin Dupuis, Arthur Jacot, NeurIPS 2021. [arXiv link]
Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances, Berfin Simsek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, Wulfram Gerstner, Johanni Brea, ICML 2021. [arXiv link]
Kernel Alignment Risk Estimator: Risk Prediction from Training Data, Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel, NeurIPS 2020. [conference paper] [arXiv link]
Implicit regularization of Random Feature Models, Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel, ICML 2020. [conference paper] [arXiv link]
The asymptotic spectrum of the Hessian of DNN throughout training, Arthur Jacot, Franck Gabriel, Clément Hongler, ICLR 2020. [conference paper] [arXiv link]
Order and Chaos: NTK views on DNN Normalization, Checkerboard and Boundary Artifacts, Arthur Jacot, Franck Gabriel, François Ged, Clément Hongler, MSML 2022. [arXiv link]
Disentangling feature and lazy learning in deep neural networks: an empirical study, Mario Geiger, Stefano Spigler, Arthur Jacot, Matthieu Wyart, Journal of Statistical Mechanics: Theory and Experiment, 2020. [Journal link] [arXiv link]
Scaling description of generalization with number of parameters in deep learning, Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, Matthieu Wyart, Journal of Statistical Mechanics: Theory and Experiment, Volume 2020, February 2020. [Journal link] [arXiv link]
Neural Tangent Kernel: Convergence and Generalization in Neural Networks, Arthur Jacot, Franck Gabriel, Clément Hongler, NeurIPS 2018. [conference paper (8-page version)] [3-minute video] [spotlight slides] [spotlight video] [arXiv link (full version)]
Prizes:
2021 SwissMAP Innovator Prize.
Talks:
ICLR, Kigali Convention Center (May 2023).
Phys4ML, Aspen Center for Physics (Feb 2023).
Math and Data seminar, NYU (Nov 2022).
MSML 2022 (online).
Workshop 'New Interactions Between Statistics and Optimization', BIRS (May 2022).
FLAIR seminar, EPFL (March 2022).
Symposium on the Loss Landscape of Neural Networks, EPFL (online - February 2022).
SwissMAP General Meeting, Les Diablerets (September 2021).
RWTH Aachen University (online - February 2021).
4th Mini-workshop on Deep Learning Theory, Huawei Beijing (online - February 2021).
Mathematics of Machine Learning Seminar, University of California, Los Angeles (online - August 2020).
Online Summer School of Deep Learning Theory, Shanghai Jiao Tong University (online - July 2020).
Data Science Seminar, Shanghai Jiao Tong University (online - May 2020).
DeepMind London (March 2020).
Statistics Seminar, University of Oxford (February 2020).
Neural Net Theory Group, École Polytechnique Fédérale de Lausanne (February 2020).
Google Brain, Mountain View (October 2019).
Analyses of Deep Learning, Stanford University (October 2019).
Theoretical Advances in Deep Learning Workshop, Istanbul (August 2019).
Seminar in Probability: Theory of Deep Learning, Universität Basel (April 2019).
CRiSM day on Bayesian Intelligence, Warwick University (March 2019).
Spotlight Presentation, NeurIPS 2018, Montreal (December 2018).
Press:
A New Link to an Old Model Could Crack the Mystery of Deep Learning, Quanta Magazine.
A Deeper Understanding of Deep Learning, Communications of the ACM.