Romain Chor
Ph.D. student in Statistical Learning
I started my Ph.D. journey in January 2022 (now in my third year!) at Huawei Paris Research Center and Laboratoire d'Informatique Gaspard Monge (LIGM, Université Gustave Eiffel).
My research focuses on theoretically characterizing the generalization performance of distributed learning algorithms (in particular Federated Learning) via information-theoretic tools, and on developing generalization-aware algorithms.
I am supervised by Abdellatif Zaidi (Huawei Paris Research Center & LIGM) and Abderrezak Rachedi (LIGM).
Feel free to visit my GitHub repository dedicated to Data Science, which contains several recent and undergraduate projects: Link.
Papers & Preprints
Heterogeneity Matters Even More in Distributed Learning: Study from Generalization Perspective.
M. Kavian, R. Chor, M. Sefidgaran and A. Zaidi. Under review at Neural Information Processing Systems 39 (NeurIPS), May 2025. Link.
Generalization Error of Federated Learning & Clients-Server Communication: Bounds, Algorithms for their Computation and Implications.
R. Chor, A. Zaidi and M. Sefidgaran. Under review at IEEE Transactions on Information Theory, March 2025.
Lessons from Generalization Error Analysis of Federated Learning: You May Communicate Less Often!
M. Sefidgaran, R. Chor, A. Zaidi and Y. Wan. Accepted at the Forty-first International Conference on Machine Learning (ICML), 2024. Link.
More Communication Does Not Result in Smaller Generalization Error in Federated Learning.
R. Chor, M. Sefidgaran and A. Zaidi. International Symposium on Information Theory (ISIT), 2023. Link.
Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning.
M. Sefidgaran, R. Chor and A. Zaidi. Neural Information Processing Systems 35 (NeurIPS), 2022. Link.
Other research activities
I am currently preparing, with several co-authors, a monograph on distributed learning algorithms and their generalization analysis. It should be available by the end of 2025.
I have had the opportunity to serve as a reviewer for the following conferences and journals:
NeurIPS 2024, AISTATS 2025, ICML 2025, ECAI 2025, IEEE Transactions on Information Theory.
Teaching
Probability theory (~26h)
Fall 2024. École supérieure d'ingénieurs en électrotechnique et électronique (ESIEE Paris).
Statistics (~16h)
Fall 2023 and Fall 2024. École supérieure d’ingénieurs en électrotechnique et électronique (ESIEE Paris).
Coding theory (~26h)
Fall 2023. École supérieure d’ingénieurs en électrotechnique et électronique (ESIEE Paris).
Experience
Ph.D. student in Statistical Learning
2022 - Present. Huawei Paris Research Center (Huawei Technologies France) and Laboratoire d'Informatique Gaspard Monge (Université Gustave Eiffel).
Data Scientist Intern
2020. Safety Line, Paris.
Research engineer in Statistical Learning
Nov.-Dec. 2019. Laboratoire d'Informatique Gaspard Monge (Université Gustave Eiffel).
Education
M.Sc. in Statistics
2019 - 2020. Sorbonne Université, Paris.
Talks & Presentations
France Information Theory PhD Workshop, June 2024. Palaiseau, France.
Oral presentation of my Ph.D. work and recent results at Télécom Paris. Link
International Symposium on Information Theory (ISIT), July 2023. Taipei, Taiwan.
Oral presentation of the paper "More Communication Does Not Result in Smaller Generalization Error in Federated Learning".
London Symposium on Information Theory, May 2023. London, United Kingdom.
Poster presentation.
"NeurIPS in Paris". December 2022. Paris, France.
Poster presentation of published paper at NeurIPS 2022 conference "Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning".