Franck Gabriel


Research assistant

Ecole Polytechnique Fédérale de Lausanne

E-mail: franck.gabriel@normalesup.org

Research Interests:

Mathematical Physics,

Neural networks,

CFT and Ising model

Latest news:

More about the new preprints:

Geometric stochastic heat equations (Y. Bruned, F. Gabriel, M. Hairer, L. Zambotti), February 2019

We consider a natural class of R^d-valued one-dimensional stochastic PDEs driven by space-time white noise that is formally invariant under the action of the diffeomorphism group; a schematic form of the equations is displayed at the end of this abstract. This class contains in particular the KPZ equation, the multiplicative and additive stochastic heat equations, and rough Burgers-type equations. We exhibit a one-parameter family of solution theories with the following properties:

  1. For all SPDEs in our class for which a solution was previously available, every solution in our family coincides with the previously constructed solution, whether that was obtained using Itô calculus (additive and multiplicative stochastic heat equation), rough path theory (rough Burgers-type equations), or the Hopf-Cole transform (KPZ equation).
  2. Every solution theory is equivariant under the action of the diffeomorphism group, i.e. identities obtained by formal calculations treating the noise as a smooth function are valid.
  3. Every solution theory satisfies an analogue of Itô's isometry.
  4. The counterterms leading to our solution theories vanish at points where the equation agrees to leading order with the additive stochastic heat equation.

In particular, points 2 and 3 show that, surprisingly, our solution theories enjoy properties analogous to those holding for both the Stratonovich and Itô interpretations of SDEs simultaneously. For the natural noisy perturbation of the harmonic map flow with values in an arbitrary Riemannian manifold, we show that all these solution theories coincide. In particular, this allows us to conjecturally identify the process associated to the Markov extension of the Dirichlet form corresponding to the L2-gradient flow for the Brownian loop measure.
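Schematically, and as a reading of the abstract rather than the paper's exact conventions, the equations in this class take the form

    \partial_t u^\alpha = \partial_x^2 u^\alpha
        + \Gamma^\alpha_{\beta\gamma}(u)\, \partial_x u^\beta \, \partial_x u^\gamma
        + h^\alpha(u) + \sigma^\alpha_i(u)\, \xi_i,

with summation over repeated indices, where the \xi_i are independent space-time white noises. When \Gamma is the Christoffel symbol of a Riemannian metric and the other coefficients are built from it, one recovers the noisy perturbation of the harmonic map flow mentioned above.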

Scaling description of generalization with number of parameters in deep learning (M. Geiger, A. Jacot, S. Spigler, F. Gabriel, L. Sagun, S. d'Ascoli, G. Biroli, C. Hongler, M. Wyart), January 2019

We provide a description of the evolution of the generalization performance of fixed-depth fully-connected deep neural networks as a function of their number of parameters N. As N grows at fixed depth, we observe that the fluctuations of the output function decrease, and we characterize the scaling of these fluctuations.

We explain this asymptotic behavior in terms of the fluctuations of the so-called Neural Tangent Kernel that controls the dynamics of the output function. For the task of classification, we predict these fluctuations to increase the true test error and compute the scaling of this true test error. This prediction is consistent with our empirical results on the MNIST dataset, and it explains in a concrete case the puzzling observation that the predictive power of deep networks improves as the number of fitting parameters grows. For smaller N, this asymptotic description breaks down at a so-called jamming transition, below which the training error is non-zero. In the absence of regularization, we observe an apparent divergence and provide a simple argument suggesting that the power law of this divergence is -1, consistent with empirical observations. This result leads to a plausible explanation for the cusp in test error known to occur at the jamming transition.

Overall, our analysis suggests that once models are averaged, the optimal model complexity is reached just beyond the point where the data can be perfectly fitted, a result of practical importance that needs to be tested on a wide range of architectures and datasets.
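As a toy illustration of the concentration of the Neural Tangent Kernel as the width grows (this numpy sketch is mine and is not the paper's experimental protocol), one can compute the empirical NTK of a one-hidden-layer ReLU network in the NTK parameterization and watch its sample-to-sample fluctuations shrink roughly like N^(-1/2):

    import numpy as np

    def empirical_ntk(W, b, x, xp):
        """Empirical NTK entry Theta(x, xp) for f(x) = b . relu(W x) / sqrt(N),
        computed from the exact parameter gradients of f."""
        N = len(b)
        pre, prep = W @ x, W @ xp
        grad_b = np.maximum(pre, 0.0) @ np.maximum(prep, 0.0)      # contributions of d f / d b
        grad_W = np.sum(b**2 * (pre > 0) * (prep > 0)) * (x @ xp)  # contributions of d f / d W
        return (grad_b + grad_W) / N

    rng = np.random.default_rng(0)
    x, xp = np.array([1.0, 0.5]), np.array([0.3, -1.0])
    for N in [100, 1000, 10000]:
        samples = []
        for _ in range(500):
            W = rng.standard_normal((N, 2))
            b = rng.standard_normal(N)
            samples.append(empirical_ntk(W, b, x, xp))
        print(N, np.std(samples))  # standard deviation decays roughly like N**(-1/2)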

Insider trading with penalties (S. Carré, P. Collin-Dufresne, F. Gabriel), September 2018

We establish existence and uniqueness of equilibrium in a generalised one-period Kyle (1985) model where insider trades can be subject to a size-dependent penalty. The result is obtained by considering uniform noise and holds for virtually any penalty function. Uniqueness holds among all non-decreasing strategies. The insider's demand and the price function are in general non-linear, yet tractable.

We apply this result to regulation issues. We show analytically that the penalty functions maximising price informativeness for given noise traders' losses eliminate small rather than large trades. We generalise this result to cases where a budget constraint distorts the set of penalties available to the regulator.
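For orientation, here is a schematic version of the one-period Kyle setting with a penalty (the notation is mine and not necessarily the paper's): the insider observes the asset value v and chooses a demand x, noise traders submit a demand u (uniform in this paper), and a competitive market maker prices the total order flow. An equilibrium is a pair (x^*, p) such that

    x^*(v) \in \arg\max_x \; \mathbb{E}\big[(v - p(x + u))\, x\big] - \pi(x),
    \qquad
    p(y) = \mathbb{E}\big[\, v \mid x^*(v) + u = y \,\big],

where \pi is the size-dependent penalty on insider trades.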

Neural Tangent Kernel: Convergence and Generalization in Neural Networks (A. Jacot, F. Gabriel, C. Hongler), to appear in Proceedings of Neural Information Processing Systems 2018 (8-page version)

We study the asymptotic dynamics of neural networks as the widths of the layers tend to infinity, using the framework of kernel methods. This provides the first asymptotic guarantee for the training of neural networks.
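Concretely, for a network function f_\theta the Neural Tangent Kernel is

    \Theta(x, x') = \sum_p \partial_{\theta_p} f_\theta(x)\, \partial_{\theta_p} f_\theta(x'),

and under gradient descent on a least-squares cost over training pairs (x_i, y_i) the output follows, up to normalization conventions, the kernel dynamics

    \partial_t f_{\theta(t)}(x) = - \sum_{i=1}^n \Theta(x, x_i)\, \big( f_{\theta(t)}(x_i) - y_i \big).

In the infinite-width limit, \Theta converges to a deterministic kernel that remains constant during training, which is what makes these dynamics tractable.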



Large permutation invariant random matrices are asymptotically free over the diagonal (B. Au, G. Cébron, A. Dahlqvist, F. Gabriel, C. Male), May 2018

We prove that independent families of reasonable permutation invariant random matrices are asymptotically free with amalgamation over the diagonal, both in expectation and in probability. This generalises Voiculescu's celebrated result on the asymptotic freeness of unitary matrices. The result still holds even if the matrices are multiplied entrywise by random variables satisfying a certain growth condition (for example, as in the case of matrices with a variance profile and percolation models). Our analysis relies on a modified method of moments based on graph observables.
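For context, recall the standard notion being generalised here: subalgebras A_1, ..., A_k of an operator-valued probability space (A, E_D), with E_D the conditional expectation onto the diagonal subalgebra D, are free with amalgamation over D when

    E_D[a_1 a_2 \cdots a_n] = 0
    \quad \text{whenever } a_j \in A_{i_j}, \; E_D[a_j] = 0, \; i_1 \neq i_2 \neq \cdots \neq i_n.

The paper shows that this holds asymptotically, over the diagonal, for independent permutation invariant matrix ensembles.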

Conference: I organized the conference "Distributional symmetries and independences", Bordeaux, November 16-17, 2017. Some information can be found on the conference website. Funded by a "Projet Exploratoire Premier Soutien Jeunes chercheur-e-s" obtained by the organizers, as well as by the GDR MEGA (Matrices and Random Graphs).

For an introduction to my early research work, you can refer to the thesis introduction available on this website. My thesis, "Holonomy fields and random matrices: symmetries under braiding and permutation", written under the supervision of Prof. Thierry Lévy at Université Paris 6 (LPMA laboratory), deals with two-dimensional Markovian holonomy fields and with the study of random matrices that are invariant in law under conjugation by the symmetric group, via the combinatorics of partitions.

Preprints

[1] - F. Gabriel: Planar Markovian Holonomy Fields. arXiv:1501.05077, 2015.

[2] - F. Gabriel: Geodesic order on partitions: structures and convergence. arXiv:1503.02792, 2015.

[3] - F. Gabriel: Random matrices in the light of A-tracial algebras and Schur-Weyl-Jones dualities. arXiv:1507.02465, 2015.

[4] - F. Gabriel: Two-dimensional S(N)-Yang-Mills theory and random ramified N-coverings of the disk in the large N-limit (the large N-limit of random walks on S(N)). arXiv:1510.01046, 2015.

[5] - B. Au, G. Cébron, A. Dahlqvist, F. Gabriel, C. Male: Large permutation invariant random matrices are asymptotically free over the diagonal. arXiv:1805.07045, 2018.

[6] - S. Carré, P. Collin-Dufresne, F. Gabriel: Insider Trading with Penalties. hal:1874923, 2018. Presented at the Paris December 2019 Finance Meeting EUROFIDAI.

[7] - Y. Bruned, F. Gabriel, M. Hairer, L. Zambotti: Geometric stochastic heat equations. arXiv:1902.02884, 2019.

[8] - A. Jacot, F. Gabriel, C. Hongler: Freeze and Chaos for DNNs: an NTK view of Batch Normalization, Checkerboard and Boundary Effects. arXiv:1907.05715, 2019.

Publications

[9] - G. Cébron, A. Dahlqvist, F. Gabriel: The generalized master fields. arXiv:1601.00214, Journal of Geometry and Physics, Volume 119, September 2017, pp. 34-53.

[10] - B. K. Driver, F. Gabriel, B. C. Hall, T. Kemp: The Makeenko-Migdal equation for Yang-Mills theory on compact surfaces. Communications in Mathematical Physics, June 2017, Volume 352, Issue 3, pp. 967-978.

[11] - A. Jacot, F. Gabriel, C. Hongler: Neural Tangent Kernel: Convergence and Generalization in Neural Networks. Advances in Neural Information Processing Systems 31, arXiv:1806.07572, 2018.

[12] - M. Geiger, A. Jacot, S. Spigler, F. Gabriel, L. Sagun, S. d'Ascoli, G. Biroli, C. Hongler, M. Wyart: Scaling description of generalization with number of parameters in deep learning. To appear in JSTAT (Journal of Statistical Mechanics: Theory and Experiment), arXiv:1901.01608, 2019.

[13] - A. Jacot, F. Gabriel, C. Hongler: The asymptotic spectrum of the Hessian of DNN throughout training. ICLR 2020.

Educational articles

[a] - F. Gabriel: Generalized characteristic polynomials, graphical computations and the Cayley-Hamilton theorem.
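The classical statement in the background of this article is that every square matrix annihilates its own characteristic polynomial. A minimal numerical check (this numpy sketch is mine, not taken from the article):

    import numpy as np

    rng = np.random.default_rng(42)
    A = rng.standard_normal((4, 4))

    # Coefficients of the characteristic polynomial det(tI - A),
    # highest degree first (np.poly accepts a square matrix).
    coeffs = np.poly(A)

    # Evaluate p(A) with Horner's scheme, using matrix products.
    p_of_A = np.zeros_like(A)
    for c in coeffs:
        p_of_A = p_of_A @ A + c * np.eye(4)

    print(np.max(np.abs(p_of_A)))  # ~1e-13: p(A) = 0 up to round-off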

Applied mathematics

[b] - P. Bochard, S. Carré, R. Catellier, F. Gabriel, V. Letizia, T. Tran: Optimal planning of energy production under technological constraints (Planification optimale de production d'énergie sous contraintes technologiques), Semaine d'étude mathématiques et entreprises 4.

Presentations

[c] - Presentations: Cambridge's probability seminar, Warwick's probability seminar, and the Montréal PIMS probability summer school.

[d] - Working groups: Basic quantum field theory.

Collaborators:

G. Cébron, A. Dahlqvist, B. K. Driver, B. C. Hall, T. Kemp, C. Male, B. Au, M. Hairer, L. Zambotti, Y. Bruned, C. Hongler, A. Jacot, J. Fageot, S. Carré, P. Collin-Dufresne, M. Geiger, S. Spigler, L. Sagun, S. d'Ascoli, G. Biroli, M. Wyart