Publications
When pre-training hurts LoRA fine-tuning: a dynamical perspective via single-index models, Gibbs Nwemadji, Bruno Loureiro, Jean Barbier (2026), https://doi.org/10.48550/arXiv.2602.02855
Statistical physics of deep learning: Optimal learning of a multi-layer perceptron near interpolation, Jean Barbier, Francesco Camilli, Minh-Toan Nguyen, Mauro Pastore, Rudy Skerk (2025), https://doi.org/10.48550/arXiv.2510.24616
See a preview here.
The effect of label noise on the information content of neural representations, Ali Hussaini Umar, Franky Kevin Nando Tezoh, Jean Barbier, Santiago Acevedo, Alessandro Laio (2025), https://doi.org/10.48550/arXiv.2510.06401
Generalization performance of narrow one-hidden layer neural networks in the teacher-student setting, Rodrigo Perez Ortiz, Gibbs Nwemadji, Jean Barbier, Federica Gerace, Alessandro Ingrosso, Clarissa Lauditi, Enrico Malatesta (2025), https://doi.org/10.48550/arXiv.2507.00629
Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime, Francesco Camilli, Daria Tieplova, Eleonora Bergamin, Jean Barbier, 38th Conference on Learning Theory (COLT 2025)
Information-theoretic limits and approximate message-passing for high-dimensional time series, Daria Tieplova, Samriddha Lahiry, Jean Barbier, International Symposium on Information Theory (ISIT 2025) and IEEE Transactions on Information Theory (2025), https://doi.org/10.48550/arXiv.2501.13625
Phase diagram of extensive-rank symmetric matrix denoising beyond rotational invariance, Jean Barbier, Francesco Camilli, Justin Ko, Koki Okajima, Physical Review X (2024), https://doi.org/10.48550/arXiv.2411.01974
See a preview here.
Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise, Jean Barbier, Francesco Camilli, Marco Mondelli, Yizhou Xu, Physical Review Research 7 (2025), https://doi.org/10.1103/PhysRevResearch.7.013081
A multiscale cavity method for sublinear-rank matrix factorization, Jean Barbier, Justin Ko, Anas Rahman (2024), https://doi.org/10.48550/arXiv.2403.07189
Matrix inference in growing rank regimes, Farzad Pourkamali, Jean Barbier, Nicolas Macris, IEEE Transactions on Information Theory (2024), https://doi.org/10.48550/arXiv.2306.01412
Fundamental limits of overparametrized shallow neural networks for supervised learning, Francesco Camilli, Daria Tieplova, Jean Barbier, Bollettino dell'Unione Matematica Italiana (2025), https://doi.org/10.1007/s40574-025-00506-2
See a talk by Francesco about this work here.
Fundamental limits in structured principal component analysis and how to reach them, Jean Barbier, Francesco Camilli, Marco Mondelli, Manuel Sáenz, Proceedings of the National Academy of Sciences (PNAS) 120 (30) e2302028120 (2023), https://doi.org/10.1073/pnas.2302028120
See a preview here.