Do DNNs Dream of Electric Sheep?
I am a doctoral candidate at EPFL, where I study predictive bias and learning dynamics in neural networks. My work aims to improve the reliability and efficiency of deep learning systems through theoretically grounded approaches.
In my recent research, I have investigated how biases emerge in neural networks, how they affect learning, and how principled design choices can help control them. I am also exploring theoretically grounded sampling methods to enhance the efficiency of generative models. These projects are carried out in collaboration with Marco Baity-Jesi, Florent Krzakala, Aurelien Lucchi, Giulio Biroli, Marc Mézard, and Jean-Philippe Bouchaud.
Before joining EPFL, I studied theoretical physics at Sapienza University of Rome, where I worked with Giorgio Parisi and Federico Ricci-Tersenghi on spin glasses and critical phenomena on graphs.
I am actively seeking an internship where I can apply and broaden my skills in an innovative research environment.
Here you can find my CV:
Contact: emanuele.francazi@epfl.ch
Main (recorded) Talks and presentations:
EurIPS 2025 — Invited talk at Unifying Perspectives on Learning Biases (December 2025, Copenhagen)
Rockin’ AI 2025 — Talk (September 2025, Roccella Ionica)
StatPhys 2025 — Talk (July 2025, Florence)
Main Publications:
Initial Guessing Bias: How Untrained Networks Favor Some Classes, Emanuele Francazi, Aurelien Lucchi, Marco Baity-Jesi, ICML 2024 [Conference paper] [arXiv link] [talk] [GitHub project page]
We prove that untrained neural networks can distribute their guesses unevenly across classes. This stems from a breaking of node-permutation symmetry, driven by architectural elements such as activation functions, depth, and max-pooling.
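For a flavor of the phenomenon, here is a minimal sketch (a generic PyTorch setup; the architecture, sizes, and Gaussian inputs are illustrative, not the paper's code) that estimates the class-prediction frequencies of an untrained MLP on random inputs; a markedly non-uniform histogram is the signature of initial guessing bias.

```python
# Minimal sketch: measure how an untrained network distributes its guesses.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_classes = 10
model = nn.Sequential(           # untrained deep MLP with ReLU activations
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_classes),
)

with torch.no_grad():
    x = torch.randn(10_000, 784)          # random "dataset"
    preds = model(x).argmax(dim=1)        # predicted class per input
    freqs = torch.bincount(preds, minlength=n_classes).float() / len(x)

# With depth and ReLU, these frequencies typically deviate strongly
# from the uniform 1/n_classes baseline.
print(freqs)
```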
Where You Place the Norm Matters: From Prejudiced to Neutral Initializations, Emanuele Francazi, Francesco Pinto, Aurelien Lucchi, Marco Baity-Jesi, [arXiv link] [GitHub project page]
The type and placement of normalization layers shape a network's predictions at initialization; our theory shows that BatchNorm and LayerNorm differ fundamentally in this respect, and highlights the critical role of where normalization is placed within a layer.
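As a rough illustration of what "placement" means here, the following sketch (illustrative PyTorch, not the paper's architecture) contrasts normalization applied before versus after the activation.

```python
import torch.nn as nn

width = 256

# Variant A: normalization applied to pre-activations (before the nonlinearity).
pre_act = nn.Sequential(
    nn.Linear(width, width),
    nn.BatchNorm1d(width),   # normalize before the activation
    nn.ReLU(),
)

# Variant B: normalization applied to post-activations (after the nonlinearity).
post_act = nn.Sequential(
    nn.Linear(width, width),
    nn.ReLU(),
    nn.BatchNorm1d(width),   # normalize after the activation
)
```

Replacing BatchNorm1d with LayerNorm, or stacking several such blocks inside the harness above, lets one compare prediction distributions at initialization across the combinations of normalization type and placement.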
When the Left Foot Leads to the Right Path: Bridging Initial Prejudice and Trainability, Alberto Bassi, Carlo Albert, Aurelien Lucchi, Marco Baity-Jesi, Emanuele Francazi, [arXiv link]
We prove a theoretical correspondence between the order/chaos phase transition and initial guessing bias in the mean-field regime.
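For context, the order/chaos transition invoked here is conventionally characterized by the mean-field signal-propagation recursion below (standard notation from that literature, e.g. Poole et al. 2016; not necessarily this paper's exact parametrization).

```latex
% Mean-field recursion for the pre-activation variance q^\ell of a deep MLP
% with activation \phi, weight variance \sigma_w^2 / N_{\ell-1}, and bias
% variance \sigma_b^2:
q^{\ell} = \sigma_w^2 \int \mathcal{D}z \, \phi\!\left(\sqrt{q^{\ell-1}}\, z\right)^{2} + \sigma_b^2,
\qquad \mathcal{D}z \equiv \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}.

% The order/chaos boundary sits where the perturbation gain at the
% fixed point q^{*} equals one:
\chi_1 = \sigma_w^2 \int \mathcal{D}z \left[\phi'\!\left(\sqrt{q^{*}}\, z\right)\right]^{2},
\qquad \chi_1 < 1 \;(\text{ordered}), \quad \chi_1 > 1 \;(\text{chaotic}).
```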
A Theoretical Analysis of the Learning Dynamics under Class Imbalance, Emanuele Francazi, Marco Baity-Jesi, Aurelien Lucchi, ICML 2023 [Conference paper] [arXiv link] [5-minute talk] [GitHub project page]
We present a theoretical analysis of how class imbalance affects (S)GD and its variants, proving convergence and identifying conditions for improved per-class performance; notably, imbalance affects GD and SGD in different ways.
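To make the setting concrete, here is a minimal, self-contained sketch (illustrative only; not the paper's experiments or theory) that trains a linear classifier with minibatch SGD on a synthetic 90/10 imbalanced dataset and tracks per-class accuracy, which typically lags for the minority class.

```python
# Minimal sketch: per-class accuracy under class imbalance with minibatch SGD.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_major, n_minor = 9000, 1000           # 90/10 class imbalance
x0 = torch.randn(n_major, 20) - 1.0     # class 0 cluster
x1 = torch.randn(n_minor, 20) + 1.0     # class 1 cluster
x = torch.cat([x0, x1])
y = torch.cat([torch.zeros(n_major, dtype=torch.long),
               torch.ones(n_minor, dtype=torch.long)])

model = nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    idx = torch.randint(0, len(x), (64,))       # random minibatch
    loss = loss_fn(model(x[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        with torch.no_grad():
            pred = model(x).argmax(dim=1)
            acc0 = (pred[y == 0] == 0).float().mean().item()
            acc1 = (pred[y == 1] == 1).float().mean().item()
        print(f"step {step}: majority acc {acc0:.2f}, minority acc {acc1:.2f}")
```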
(Open) Projects:
If any of the following open projects intrigue you and you would like to collaborate with us, feel free to contact me.
How architecture design induces predictive bias in untrained neural networks.
How class imbalance affects the learning process.