Assistant Professor at the Courant Institute of Mathematical Sciences, NYU. I study deep neural networks and related statistical models to develop a theory of deep learning.
I was previously a PhD student at EPFL under the supervision of Clément Hongler.
Contact: arthur.jacot@nyu.edu
Publications (Google Scholar):
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning, Arthur Jacot, Seok Hoan Choi, Yuxiao Wen, 2024. [arXiv link]
Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets, Arthur Jacot, Alexandre Kaiser, 2024. [arXiv link]
Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes, Zhenfeng Tu, Santiago Aranguri, Arthur Jacot, 2024. [arXiv link]
Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning, Yuxiao Wen, Arthur Jacot, ICML 2024. [conference paper] [arXiv link]
Implicit bias of SGD in L2-regularized linear DNNs: One-way jumps from high to low rank, Zihan Wang, Arthur Jacot, ICLR 2024. [conference paper] [arXiv link]
Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff, Arthur Jacot, NeurIPS 2023. [conference paper] [arXiv link]
Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions, Arthur Jacot, ICLR 2023. [conference paper] [arXiv link]
Feature Learning in L2-regularized DNNs: Attraction/Repulsion and Sparsity, Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel, NeurIPS 2022. [arXiv link]
Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity, Arthur Jacot, François Ged, Franck Gabriel, Berfin Simsek, Clément Hongler, 2022. [arXiv link]
DNN-Based Topology Optimization: Spatial Invariance and Neural Tangent Kernel, Benjamin Dupuis, Arthur Jacot, NeurIPS 2021. [arXiv link]
Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances, Berfin Simsek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, Wulfram Gerstner, Johanni Brea, ICML 2021. [arXiv link]
Kernel Alignment Risk Estimator: Risk Prediction from Training Data, Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel, NeurIPS 2020. [conference paper] [arXiv link]
Implicit regularization of Random Feature Models, Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel, ICML 2020. [conference paper] [arXiv link]
Order and Chaos: NTK views on DNN Normalization, Checkerboard and Boundary Artifacts, Arthur Jacot, Franck Gabriel, François Ged, Clément Hongler, MSML 2022. [arXiv link]
The asymptotic spectrum of the Hessian of DNN throughout training, Arthur Jacot, Franck Gabriel, Clément Hongler, ICLR 2020. [conference paper] [arXiv link]
Disentangling feature and lazy learning in deep neural networks: an empirical study, Mario Geiger, Stefano Spigler, Arthur Jacot, Matthieu Wyart, Journal of Statistical Mechanics: Theory and Experiment, 2020. [Journal link] [arXiv link]
Scaling description of generalization with number of parameters in deep learning, Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, Matthieu Wyart, Journal of Statistical Mechanics: Theory and Experiment, Volume 2020, February 2020. [Journal link] [arXiv link]
Neural Tangent Kernel: Convergence and Generalization in Neural Networks, Arthur Jacot, Franck Gabriel, Clément Hongler, NeurIPS 2018. [conference paper (8-page version)] [3-minute video] [spotlight slides] [spotlight video] [arXiv link (full version)]
Prizes:
2023 Prix EPFL de Doctorat (EPFL PhD thesis prize).
2021 SwissMAP Innovator Prize.
Recent Talks:
Institute of Science and Technology Austria (July 2024).
Statistics and Learning Theory in the Era of Artificial Intelligence, Oberwolfach (June 2024).
Two Sigma, New York (June 2024).
DIMACS workshop, Rutgers University (June 2024).
Optimization Seminar, UPenn (Nov. 2023).
Princeton ML Theory summer school (June 2023).
ICLR spotlight presentation, Kigali Convention Center (May 2023).
Phys4ML, Aspen Center for Physics (Feb 2023).
Math and Data seminar, NYU (Nov 2022).
News Articles:
A New Link to an Old Model Could Crack the Mystery of Deep Learning, Quanta Magazine.
A Deeper Understanding of Deep Learning, Communications of the ACM.