About me
I am a Research Scientist at DeepMind Paris, where I work on scalable machine learning.
Previously I was a postdoc in Lorenzo Rosasco's Laboratory for Computational and Statistical Learning. Before that I received my PhD in 2017 from INRIA Lille under the supervision of Michal Valko and Alessandro Lazaric, working on scalable sequential learning, and earlier still I completed my Master's in Marcello Restelli's group at Politecnico di Milano, working on safety and efficiency in reinforcement learning.
My research focuses on adaptive dimensionality reduction techniques based on randomized subsampling and sketching. These techniques have been successfully applied (2014-2018) to optimization of noisy functions, learning on graphs, clustering, and supervised regression. My recent interest (2018-present) is in transferring these adaptive randomization techniques to bandit/Bayesian optimization, experimental design, and reinforcement learning.
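As a toy illustration of the randomized subsampling idea (not code from the papers below; a minimal sketch assuming only NumPy), here is a uniform Nyström approximation of a kernel matrix. The adaptive methods studied in this line of work replace the uniform sampling step with data-dependent (e.g. ridge leverage score) sampling:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise RBF kernel via ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
K = rbf_kernel(X, X)

# Uniformly subsample m landmark columns (adaptive variants
# would sample these proportionally to leverage scores).
m = 50
idx = rng.choice(len(X), size=m, replace=False)
C = K[:, idx]              # n x m column subsample
W = K[np.ix_(idx, idx)]    # m x m core block
K_approx = C @ np.linalg.pinv(W) @ C.T  # rank-m Nystrom approximation

err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

The approximation costs O(nm) memory instead of O(n^2), which is what makes kernel methods scale once m can be kept small.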
Selected work
GP/Bandit/Bayesian optimization
Near-linear time Gaussian process optimization with adaptive batching and resparsification [paper]
D Calandriello, L Carratino, A Lazaric, M Valko, L Rosasco. International Conference on Machine Learning (ICML), 2020
Gaussian process optimization with adaptive sketching: Scalable and no regret [paper]
D Calandriello, L Carratino, A Lazaric, M Valko, L Rosasco. 32nd Annual Conference on Learning Theory (COLT), 2019
Sampling from a k-DPP without looking at all items [paper]
D Calandriello, M Dereziński, M Valko. Advances in Neural Information Processing Systems (NeurIPS), 2020
Exact sampling of determinantal point processes with sublinear time preprocessing
M Dereziński, D Calandriello, M Valko. Advances in Neural Information Processing Systems (NeurIPS), 2019
Sketching for optimization and learning
Distributed adaptive sampling for kernel matrix approximation [paper]
D Calandriello, A Lazaric, M Valko. International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
Efficient second-order online kernel learning with adaptive embedding [paper]
D Calandriello, A Lazaric, M Valko. Advances in Neural Information Processing Systems (NeurIPS), 2017
On fast leverage score sampling and optimal learning
A Rudi, D Calandriello, L Carratino, L Rosasco. Advances in Neural Information Processing Systems (NeurIPS), 2018
Improved large-scale graph learning through ridge spectral sparsification [paper]
D Calandriello, I Koutis, A Lazaric, M Valko. International Conference on Machine Learning (ICML), 2018
Efficient reinforcement learning
Sparse multi-task reinforcement learning [paper]
D Calandriello, A Lazaric, M Restelli. Advances in Neural Information Processing Systems (NeurIPS), 2014
Safe policy iteration [paper]
M Pirotta, M Restelli, A Pecorino, D Calandriello. International Conference on Machine Learning (ICML), 2013