Elisabetta Cornacchia
I'm a Postdoctoral Associate at MIT, working with Prof. Elchanan Mossel. I received my PhD in Mathematics from EPFL (Lausanne, Switzerland), where I was advised by Prof. Emmanuel Abbé.
Research Interests: Theory of neural networks, inference on random graphs.
Here is a short CV.
Address: Office 2-241, Simons Building, 77 Massachusetts Avenue, Cambridge MA, 02139.
Email: ecornacc [at] mit [dot] edu
Publications:
E. Abbe, E. Cornacchia, A. Lotfi. Provable Advantage of Curriculum Learning on Parity Targets with Mixed Inputs. NeurIPS 2023. [arXiv]
E. Cornacchia, E. Mossel. A Mathematical Model for Curriculum Learning for Parities. ICML 2023. [arXiv]
E. Abbe, S. Bengio, E. Cornacchia, J. Kleinberg, A. Lotfi, M. Raghu, C. Zhang. Learning to reason with neural networks: Generalization, unseen data and Boolean measures. NeurIPS 2022. [arXiv]
E. Abbe, E. Cornacchia, J. Hązła, C. Marquis. An initial alignment between neural network and target is needed for gradient descent to learn. ICML 2022. [arXiv]
E. Cornacchia*, F. Mignacco*, R. Veiga*, C. Gerbelot, B. Loureiro, L. Zdeborova. Learning curves for the multi-class teacher-student perceptron. Machine Learning: Science and Technology, 2022. [arXiv]
E. Abbe, E. Cornacchia, Y. Gu, Y. Polyanskiy. Stochastic block model entropy and broadcasting on trees with survey. COLT 2021 (Best Student Paper Award). [arXiv]
E. Cornacchia, J. Hązła. Intransitive dice tournament is not quasirandom. Submitted, 2020. [arXiv]
E. Cornacchia*, N. Singer, E. Abbe. Polarization in attraction-repulsion models. ISIT 2020. [arXiv]
*: denotes equally contributing first authors. In other papers, authors are listed in alphabetical order.
Research talks:
Learning with neural networks: Generalization, unseen data and Boolean measures, 2022 Mathematical and Scientific Foundations of Deep Learning Annual Meeting, Sept. 2022, Simons Foundation, New York City, US.
An initial alignment between neural network and target is needed for gradient descent to learn, Youth in High Dimensions, June 2022, ICTP, Trieste, Italy.
An initial alignment between neural network and target is needed for gradient descent to learn, DISMA-Eccellenza, June 2022, Polytechnic of Turin, Turin, Italy.
An initial alignment between neural network and target is needed for gradient descent to learn, MoDL monthly meeting, May 2022, online.
Stochastic block model entropy and broadcasting on trees with survey, Workshop in Rigorous Evidence for Information-Computation Trade-offs, Sept. 2021, EPFL, Lausanne, Switzerland.
Stochastic block model entropy and broadcasting on trees with survey, COLT 2021 (with Y. Gu), August 2021, online and Boulder (Colorado), US.
Polarization in attraction-repulsion models, Swiss Winter School on Theoretical Computer Science, Feb. 2020, Zinal, Switzerland.
News article:
Quanta Magazine: Mathematicians Roll Dice and Get Rock-Paper-Scissors