Matthias Poloczek, Amazon
Title: Scalable High-dimensional Bayesian Optimization
Abstract: Bayesian optimization has become a powerful method for the sample-efficient optimization of expensive black-box functions. Such functions have no closed form and are evaluated, for example, by running a complex simulation of a marketplace, by conducting a physical experiment in the lab or in a market, or by a CFD simulation. Use cases arise in machine learning, e.g., when tuning the configuration of an ML model or when optimizing a reinforcement learning policy. Many of these applications are high-dimensional, i.e., the number of tunable parameters exceeds 20, which makes them difficult for current approaches due to the curse of dimensionality and the heterogeneity of the underlying functions. Of particular interest are constrained settings, where we seek a solution that satisfies inequality constraints of the form c(x) <= 0 and is globally optimal for the objective function among all feasible solutions. These constrained problems are particularly challenging because the sets of feasible points are often small and non-convex. Due to the lack of sample-efficient methods, practitioners usually fall back on evolutionary strategies or heuristics.
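The constrained setting above requires a rule for ranking observed points when some violate c(x) <= 0. A minimal sketch of one common rule (prefer the best feasible point, otherwise the point with the smallest total constraint violation); `select_incumbent` is a hypothetical helper for illustration, not necessarily the exact rule discussed in the talk:

```python
import numpy as np

def select_incumbent(fX, cX):
    """Pick the incumbent index under black-box constraints c(x) <= 0.

    fX: (n,) objective values; cX: (n, m) constraint values.
    Prefer the feasible point with the best objective; if no point is
    feasible, fall back to the smallest total constraint violation.
    """
    fX, cX = np.asarray(fX, dtype=float), np.asarray(cX, dtype=float)
    feasible = np.all(cX <= 0, axis=1)
    if feasible.any():
        idx = np.flatnonzero(feasible)
        return int(idx[np.argmin(fX[idx])])
    violation = np.clip(cX, 0, None).sum(axis=1)
    return int(np.argmin(violation))

# Usage: point 1 has the best objective but violates its constraint,
# so the best *feasible* point (index 0) is chosen.
best = select_incumbent([1.0, 0.5, 2.0], [[-1.0], [0.5], [-0.1]])
```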
In this talk I will start with a brief introduction to Bayesian optimization and then present the trust region Bayesian optimization algorithm (TuRBO), which addresses the above challenges via a local surrogate and a suitable sampling strategy. Then we will turn our attention to optimization under expensive black-box constraints and introduce the scalable constrained Bayesian optimization algorithm (SCBO). I will show comprehensive experimental results demonstrating that TuRBO and SCBO achieve excellent performance and outperform state-of-the-art methods.
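The trust-region mechanics behind a method like TuRBO can be sketched in a few lines. This is a hedged simplification: `tr_minimize` and its shrink/expand thresholds are illustrative choices, and the GP surrogate with Thompson sampling that TuRBO actually uses is replaced here by uniform sampling inside the trust region, so only the local-region bookkeeping is shown:

```python
import numpy as np

def tr_minimize(f, x0, lb, ub, n_iters=200, seed=0):
    """Minimize f over the box [lb, ub] with a trust-region loop in the
    spirit of TuRBO (uniform sampling stands in for the GP surrogate)."""
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    length, succ, fail = 0.8, 0, 0       # relative side length of the TR
    for _ in range(n_iters):
        half = 0.5 * length * (ub - lb)  # TR box centered at incumbent
        x = rng.uniform(np.maximum(lb, x_best - half),
                        np.minimum(ub, x_best + half))
        fx = f(x)
        if fx < f_best:                  # success: move the incumbent
            x_best, f_best = x, fx
            succ, fail = succ + 1, 0
        else:
            succ, fail = 0, fail + 1
        if succ >= 3:                    # expand after repeated successes
            length, succ = min(1.6 * length, 1.0), 0
        elif fail >= 10:                 # shrink after repeated failures
            length, fail = 0.5 * length, 0
        if length < 1e-3:                # restart a collapsed region
            length = 0.8
    return x_best, f_best

# Usage: a 10-dimensional sphere function, started far from the optimum.
lb, ub = np.full(10, -5.0), np.full(10, 5.0)
x_best, f_best = tr_minimize(lambda x: float(np.sum(x * x)),
                             np.full(10, 3.0), lb, ub)
```

The expand/shrink counters are the key design choice: the region grows only after a streak of improvements and halves after a streak of failures, which keeps the search local without fixing its scale in advance.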
References:
A Tutorial on Bayesian Optimization
A Framework for Bayesian Optimization in Embedded Subspaces
Scalable Global Optimization via Local Bayesian Optimization
Scalable Constrained Bayesian Optimization