Emanuele Francazi

Do DNNs Dream of Electric Sheep?

I am a doctoral candidate at EPFL, where my research centers on biases in deep neural networks: their origins, their impact on the learning process, and the development of strategies to control them. Supported by Marco Baity-Jesi, Florent Krzakala, and Aurelien Lucchi, my work aims at both theoretical advances and practical applications.

My academic background is in theoretical physics, which I studied at Sapienza University of Rome. For my thesis projects, I had the opportunity to collaborate with Professors Giorgio Parisi and Federico Ricci-Tersenghi, exploring spin glasses and the effect of the degree distribution on critical phenomena in graphs.



I am actively seeking an internship to leverage and expand my skills in innovative research environments.

Here you can find my full CV.

Contact: emanuele.francazi@epfl.ch

Publications:

We prove that untrained neural networks can distribute their guesses unevenly among the different classes. This is due to the breaking of node-permutation symmetry, caused by architectural elements such as activation functions, depth, and max-pooling.
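As an illustration (not taken from the paper), here is a minimal sketch in PyTorch: an untrained convolutional classifier with ReLU activations and max-pooling is fed random inputs, and its predicted classes are tallied. The architecture, input size, and number of classes are arbitrary choices made for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative untrained classifier using the architectural elements mentioned
# above: ReLU activations, depth, and max-pooling (hypothetical example model).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),  # 10 output classes
)
model.eval()

# Feed random inputs to the untrained network and tally its predicted classes.
with torch.no_grad():
    x = torch.randn(4096, 1, 28, 28)            # random 28x28 "images"
    preds = model(x).argmax(dim=1)
    counts = torch.bincount(preds, minlength=10)

# Naively one might expect roughly 10% of guesses per class; in practice the
# distribution of an untrained network's guesses is often far from uniform.
print(counts.float() / counts.sum())
```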


We present a theoretical analysis of how class imbalance affects (S)GD and its variants, proving convergence, identifying conditions under which per-class performance improves, and highlighting how imbalance affects GD and SGD differently.
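For intuition, here is a minimal sketch of the phenomenon (not the paper's setup): a linear classifier is trained with mini-batch SGD on a synthetic, heavily imbalanced binary dataset, and per-class accuracy is reported. The data, model, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative imbalanced binary problem: 95% of samples in class 0, 5% in class 1.
n0, n1, d = 950, 50, 20
x = torch.cat([torch.randn(n0, d) + 0.5, torch.randn(n1, d) - 0.5])
y = torch.cat([torch.zeros(n0, dtype=torch.long), torch.ones(n1, dtype=torch.long)])

model = nn.Linear(d, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    # Mini-batch SGD; using the full dataset in each step instead gives (full-batch) GD.
    idx = torch.randint(0, n0 + n1, (32,))
    loss = loss_fn(model(x[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Per-class accuracy: the minority class typically lags behind the majority class.
with torch.no_grad():
    preds = model(x).argmax(dim=1)
    for c in (0, 1):
        mask = y == c
        print(f"class {c} accuracy: {(preds[mask] == c).float().mean():.2f}")
```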

(Open) Projects: 

If any of the following open projects intrigues you and you would like to collaborate with us, feel free to contact me.


How architecture design induces predictive bias in untrained neural networks.

How class imbalance affects the learning process.