Quantile Regression with a Strictly Convex Objective
Xiaojia Guo (Robert H. Smith School of Business, UMD), Kenneth C. Lichtendahl (Google LLC)
Firms often use machine learning algorithms to produce point forecasts of a quantity of interest given some covariates. To quantify the uncertainty around such a point forecast, quantile regression is needed. Some algorithms, such as extreme gradient boosting, allow for a customized objective (or loss) function and require that function to be strictly convex. These algorithms use Newton's method to find point forecasts by minimizing a strictly convex loss function. In this paper, we introduce a strictly convex loss function for quantiles that can be used in these algorithms. The loss function we propose is a (smooth) convexification of any member of the class of proper scoring rules for a quantile, such as the pinball loss function. When used as an objective to find the quantile at a specific probability level, this loss function has a tuning parameter that is a function of that probability. This tuning parameter allows the user to estimate a quantile so that it is unbiased; in other words, it leads to prediction intervals with the right coverage. We demonstrate the use of our loss function on several publicly available datasets. The results suggest that quantiles estimated by extreme gradient boosting with our convexified loss function may be more accurate than those estimated by other leading methods.
(Work in progress)
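To illustrate how a strictly convex quantile loss plugs into extreme gradient boosting's customized-objective interface, the sketch below uses a softplus-smoothed pinball loss, rho(u) = tau*u + alpha*log(1 + exp(-u/alpha)) with u = y - prediction. This is a generic smooth convexification of the pinball loss, not the paper's proposed loss, and the fixed ALPHA here is an illustrative assumption; the paper instead chooses its tuning parameter as a function of the target probability.

```python
import numpy as np
import xgboost as xgb
from scipy.special import expit  # numerically stable logistic sigmoid

TAU = 0.9     # probability level of the target quantile
ALPHA = 0.2   # smoothing parameter; illustrative constant here, whereas the
              # paper ties its tuning parameter to the target probability TAU

def smooth_pinball_obj(predt, dtrain):
    """Softplus-smoothed pinball loss rho(u) = TAU*u + ALPHA*log(1 + exp(-u/ALPHA)),
    where u = y - prediction. Its second derivative is strictly positive,
    so XGBoost's Newton steps are well defined."""
    u = dtrain.get_label() - predt
    s = expit(u / ALPHA)
    grad = (1.0 - TAU) - s          # first derivative w.r.t. the prediction
    hess = s * (1.0 - s) / ALPHA    # strictly positive second derivative
    return grad, hess

# Usage on synthetic data: fit the conditional 0.9-quantile and check coverage.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=2000)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=300, obj=smooth_pinball_obj)
coverage = np.mean(y <= booster.predict(dtrain))
print(f"in-sample coverage: {coverage:.3f}")  # should sit near TAU = 0.9
```

As ALPHA shrinks, the smoothed objective approaches the pinball loss but its curvature concentrates near u = 0, which is the trade-off a probability-dependent tuning parameter is meant to manage.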