Research

We focus on the design and analysis of machine learning and optimization algorithms. The two topics are closely related: machine learning can be viewed as a form of stochastic optimization. In particular, we focus on online learning, empirical processes, and stochastic algorithms for convex and non-convex problems.
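As a minimal illustrative sketch (not lab code, all names and constants are made up for the example), the snippet below trains a linear classifier with plain SGD: each fresh sample gives a stochastic gradient of the expected logistic loss, so learning the model is exactly a stochastic optimization problem.

    # Illustrative sketch: learning a linear model = stochastic optimization of the expected loss.
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(5)                 # model parameters to learn
    eta = 0.1                       # step size (hypothetical choice)

    for t in range(1000):
        x = rng.normal(size=5)      # a fresh sample acts as a stochastic gradient oracle
        y = np.sign(x @ np.ones(5) + rng.normal())
        margin = y * (w @ x)
        grad = -y * x / (1.0 + np.exp(margin))   # gradient of the logistic loss on this sample
        w -= eta * grad             # SGD step on the sampled loss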

Job Openings

  • PhD: We are currently not planning to hire more PhD students in the lab. Due to time constraints, we are also unable to reply to inquiries about PhD positions.

News

09/25/20 One NeurIPS'20 paper accepted!
06/27/20 One ICML'20 Workshop on Beyond First Order Methods in ML Systems paper accepted!
05/19/20 Francesco Orabona and Ashok Cutkosky will give a tutorial at ICML'20 on Parameter-free Online Optimization
01/24/20 One ICASSP'20 paper accepted!
10/02/19 One NeurIPS'19 Workshop on Meta-Learning paper accepted!
10/01/19 One NeurIPS'19 Workshop on Privacy in Machine Learning paper accepted!
09/03/19 Two NeurIPS'19 papers accepted!
07/27/19 NSF funded our project in collaboration with Omkant Pandey
04/21/19 One ICML'19 paper accepted!
04/18/19 One COLT'19 paper accepted!
12/22/18 One AISTATS'19 paper accepted!
09/01/18 We moved to Boston University
12/19/17 The website is up!

People

Francesco Orabona

Director of the lab, Assistant Professor

Zhenxun Zhuang

2nd year CS PhD Student

Xiaoyu Li

2nd year SE PhD Student

Keyi Chen

2nd year CS PhD Student

Nicolò Campolongo

Visiting PhD Student

Vibhu Bhatia

CS MS Student

Alumni

Kwang-Sung Jun

Post-doc 2018-2019, now at the University of Arizona

Preprints

  • K. Chen, J. Langford, F. Orabona. Better Parameter-free Stochastic Optimization with ODE Updates for Coin-Betting. arXiv [PDF]

  • X. Li, Z. Zhuang, and F. Orabona. Exponential Step Sizes for Non-Convex Optimization. arXiv [PDF] [CODE]

  • F. Orabona. A Modern Introduction to Online Learning. arXiv [PDF]

Papers

(papers from the OPTIMAL lab only; Prof. Orabona's earlier papers are listed on his website)

  • N. Campolongo and F. Orabona. Temporal Variability in Implicit Online Learning. NeurIPS 2020 [PDF]

  • X. Li and F. Orabona. A High Probability Analysis of Adaptive SGD with Momentum. ICML 2020 Workshop on Beyond First Order Methods in ML Systems [PDF]

  • Z. Zhuang, Y. Wang, K. Yu, and S. Lu. No-regret Non-convex Online Meta-Learning. ICASSP 2020 [PDF]

  • K.-S. Jun and F. Orabona. Parameter-Free Locally Differentially Private Stochastic Subgradient Descent. NeurIPS 2019 Workshop on Privacy in Machine Learning [PDF]

  • Z. Zhuang, K. Yu, S. Lu, L. Glass, and Y. Wang. Online Meta-Learning on Non-convex Setting. NeurIPS 2019 Workshop on Meta-Learning [PDF]

  • A. Cutkosky and F. Orabona. Momentum-Based Variance Reduction in Non-Convex SGD. NeurIPS 2019 [PDF]

  • K.-S. Jun, A. Cutkosky, F. Orabona. Kernel Truncated Randomized Ridge Regression: Optimal Rates and Low Noise Acceleration. NeurIPS 2019 [PDF]

  • K.-S. Jun and F. Orabona. Parameter-free Online Convex Optimization with Sub-Exponential Noise. COLT 2019 [PDF]

  • Z. Zhuang, A. Cutkosky, F. Orabona. Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization. ICML 2019 [PDF] [CODE]

  • X. Li and F. Orabona. On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. AISTATS 2019 [PDF]

  • A. Cutkosky and F. Orabona. Black-Box Reductions for Parameter-free Online Learning in Banach Spaces. COLT 2018 [PDF]

  • F. Orabona and D. Pal. Scale-free Online Learning. Theoretical Computer Science, 716, 2018 [PDF]

  • F. Orabona and T. Tommasi. Training Deep Networks without Learning Rates Through Coin Betting. NeurIPS 2017 [PDF] [CODE]

Sponsors

Our research is funded by: