Research

We focus on the design and analysis of machine learning and optimization algorithms. The two topics are closely related: machine learning is, at its core, a form of stochastic optimization. In particular, we focus on online learning, empirical processes, and stochastic algorithms for convex and non-convex problems.

Job Openings

  • PhD: We do not currently plan to hire more PhD students in the lab. Hence, generic inquiries about PhD positions will not be answered.

News

01/09/22 Three ALT'22 papers accepted!
12/01/21 One AAAI'22 paper accepted!

09/28/21 One NeurIPS'21 paper accepted!
05/08/21 Two ICML'21 papers accepted!
03/01/21 One paper published in the Annual Review of Statistics and Its Application
02/02/21 Francesco Orabona has been awarded the NSF CAREER Award

09/25/20 One NeurIPS'20 paper accepted!
06/27/20 One ICML'20 Workshop on Beyond First Order Methods in ML Systems paper accepted!
05/19/20 Francesco Orabona and Ashok Cutkosky will give a tutorial at ICML'20 on Parameter-free Online Optimization
01/24/20 One ICASSP'20 paper accepted!
10/02/19 One NeurIPS'19 Workshop on Meta-Learning paper accepted!
10/01/19 One NeurIPS'19 Workshop on Privacy in Machine Learning paper accepted!
09/03/19 Two NeurIPS'19 papers accepted!
07/27/19 NSF funded our project in collaboration with Omkant Pandey
04/21/19 One ICML'19 paper accepted!
04/18/19 One COLT'19 paper accepted!
12/22/18 One AISTATS'19 paper accepted!
09/01/18 We moved to Boston University
12/19/17 The website is up!

People

Francesco Orabona

Director of the lab, Associate Professor

Zhenxun Zhuang

4th year CS PhD Student

Xiaoyu Li

4th year SE PhD Student

Keyi Chen

4th year CS PhD Student

Vibhu Bhatia

CS MS Student

Alumni

Kwang-Sung Jun

Post-doc 2018-2019, now Assistant Professor at the University of Arizona

Mingrui Liu

Post-doc 2020-2021, now Assistant Professor at George Mason University

Nicolò Campolongo

Visiting PhD Student 2020

Preprints

  • F. Orabona. A Modern Introduction to Online Learning. arXiv [PDF]

Papers

(Only papers from the OPTIMAL lab are listed here; Prof. Orabona's previous papers can be found on his website.)

  • K. Chen, J. Langford and F. Orabona. Better Parameter-free Stochastic Optimization with ODE Updates for Coin-Betting. AAAI 2022 [PDF]

  • J. Negrea, B. Bilodeau, N. Campolongo, F. Orabona, and D. M. Roy. Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers. NeurIPS 2021 [PDF]

  • X. Li, Z. Zhuang, and F. Orabona. A Second look at Exponential and Cosine Step Sizes: Simplicity, Adaptivity, and Performance. ICML 2021 [PDF] [CODE]

  • G. Flaspohler, F. Orabona, J. Cohen, S. Mouatadid, M. Oprescu, P. Orenstein, and L. Mackey. Online Learning with Optimism and Delay. ICML 2021

  • N. Cesa-Bianchi and F. Orabona. Online Learning Algorithms. Annual Review of Statistics and Its Application. 2021 [PDF]

  • N. Campolongo and F. Orabona. Temporal Variability in Implicit Online Learning. NeurIPS 2020 [PDF]

  • X. Li and F. Orabona. A High Probability Analysis of Adaptive SGD with Momentum. ICML 2020 Workshop on Beyond First Order Methods in ML Systems. [PDF]

  • Z. Zhuang, Y. Wang, K. Yu, and S. Lu. No-regret Non-convex Online Meta-Learning. ICASSP 2020 [PDF]

  • K.-S. Jun and F. Orabona. Parameter-Free Locally Differentially Private Stochastic Subgradient Descent. NeurIPS 2019 Workshop on Privacy in Machine Learning [PDF]

  • Z. Zhuang, K. Yu, S. Lu, L. Glass, and Y. Wang. Online Meta-Learning on Non-convex Setting. NeurIPS 2019 Workshop on Meta-Learning [PDF]

  • A. Cutkosky and F. Orabona. Momentum-Based Variance Reduction in Non-Convex SGD. NeurIPS 2019 [PDF]

  • K.-S. Jun, A. Cutkosky, and F. Orabona. Kernel Truncated Randomized Ridge Regression: Optimal Rates and Low Noise Acceleration. NeurIPS 2019 [PDF]

  • K.-S. Jun and F. Orabona. Parameter-free Online Convex Optimization with Sub-Exponential Noise. COLT 2019 [PDF]

  • Z. Zhuang, A. Cutkosky, and F. Orabona. Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization. ICML 2019 [PDF] [CODE]

  • X. Li and F. Orabona. On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. AISTATS 2019 [PDF]

  • A. Cutkosky and F. Orabona. Black-Box Reductions for Parameter-free Online Learning in Banach Spaces. COLT 2018 [PDF]

  • F. Orabona and D. Pal. Scale-free Online Learning. Theoretical Computer Science, 716. 2018 [PDF]

  • F. Orabona and T. Tommasi. Training Deep Networks without Learning Rates Through Coin Betting. NeurIPS 2017 [PDF] [CODE]

Sponsors

Our research is funded by