Greetings to everyone at SOBIE 2025
Start here if you are new to the topic. Slides here.
Expectiles are novel and may be enigmatic to unfamiliar readers. To remedy this, we give nine interpretations of expectiles, organized around different perspectives. An expectile is the minimizer of an asymmetric least squares criterion, which makes it a weighted average and also means that, in two special cases, the expectile is the true mean of the distribution. Expectiles summarize a distribution in a manner similar to quantiles; moreover, quantiles are always expectiles in location models, and expectiles are always quantiles, albeit not of the original distribution. Expectiles are also m-estimators, m-quantiles, and Lp-quantiles, families that contain the majority of simple statistics in common use.
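For the uninitiated, the formal definition in one common notation (the paper's own notation may differ): the τ-expectile of a random variable X with a finite mean is

$$
\mu_\tau(X) \;=\; \operatorname*{arg\,min}_{m \in \mathbb{R}} \; \mathbb{E}\!\left[\,\bigl|\tau - \mathbf{1}\{X \le m\}\bigr|\,(X - m)^2\right], \qquad \tau \in (0,1),
$$

so that τ = 1/2 recovers the ordinary mean. (As written, the criterion needs a finite second moment; the usual fix when only the first moment is finite is to subtract the criterion's value at m = 0, which leaves the minimizer unchanged.)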
I generalize the work of Gauss and Markov to regression lines other than the mean, such as quantile regression.
The title is deceptive: the best linear unbiased estimator is always an expectile. Under arbitrary heteroscedasticity, some form of weighted expectile regression always delivers the best linear unbiased estimator of the coefficients, if one exists at all.
This work constitutes a new unified theory of best linear unbiased estimators. After Ordinary Least Squares and Generalized Least Squares, Expectile Regression is the final frontier.
Replication code (Stata!) for the mortgage denial exercise is here.
The folder contains data and code to (1) estimate expectiles and (2) produce the expectile coefficient plots shown in the paper.
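The replication files are Stata, but for readers who just want to see the mechanics, here is a minimal Python sketch of (unweighted) expectile regression fit by iteratively reweighted least squares on made-up data. It is an illustration only, not the paper's weighted estimator and not the replication code itself.

```python
import numpy as np

def expectile_regression(X, y, tau=0.5, n_iter=100, tol=1e-10):
    """Expectile (asymmetric least squares) regression at asymmetry tau,
    fit by iteratively reweighted least squares. Include a constant
    column in X if an intercept is wanted."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting values
    for _ in range(n_iter):
        resid = y - X @ beta
        w = np.where(resid < 0, 1 - tau, tau)            # asymmetric weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted least squares step
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Toy, made-up data with heteroscedastic noise (purely illustrative)
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + 0.5 * np.abs(x))
X = np.column_stack([np.ones(n), x])
for tau in (0.1, 0.5, 0.9):
    print(tau, np.round(expectile_regression(X, y, tau), 3))
```

At τ = 0.5 the weights are symmetric and the loop reproduces OLS, which is a handy sanity check.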
Expectile functions of random samples converge almost surely, and uniformly, if and only if the underlying random variable X has a finite first moment. So Kolmogorov's strong law applies uniformly to all expectiles, with no additional conditions.
We show that Kolmogorov's classical strong law of large numbers applies to all expectiles uniformly. The expectiles of a random sample converge almost surely (uniformly) to the true expectiles if and only if the true data generating process has a finite first moment. The result holds for expectile functions of scalar and vector-valued random variables and can be reformulated to state that the mean (or any expectile) of a random sample converges almost surely to the true mean (or expectile) if and only if an arbitrary expectile exists and is finite.
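A quick toy simulation of my own (not from the paper) illustrates the point: sample expectiles of i.i.d. Exp(1) draws settle down across a whole grid of τ as n grows, with the τ = 1/2 column heading to the true mean of 1.

```python
import numpy as np

def sample_expectile(x, tau, n_iter=200, tol=1e-12):
    """Scalar sample expectile via fixed-point iteration on the first-order
    condition: the expectile is a weighted mean with asymmetric weights."""
    m = x.mean()
    for _ in range(n_iter):
        w = np.where(x < m, 1 - tau, tau)
        m_new = np.average(x, weights=w)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(1)
taus = [0.1, 0.25, 0.5, 0.75, 0.9]
for n in (10**2, 10**4, 10**6):
    x = rng.exponential(scale=1.0, size=n)   # Exp(1): finite first moment
    print(n, [round(sample_expectile(x, t), 3) for t in taus])
# Each column stabilizes as n grows; the tau = 0.5 column approaches the mean, 1.0.
```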
I have had a few inquiries about expectiles of vector-valued random variables in the context of multiple-equation models, where quantile regression methods have been tricky. I would argue that vector expectiles should have been well known since the Breckling and Chambers (1988) m-quantile paper, but this article may be the first to study their asymptotics directly. Fortunately, necessary and sufficient conditions for their convergence are easy to come by.
Well, I couldn't just do the strong one!
Generalizing the Weak Law of Large Numbers for the sample mean, we present necessary and sufficient conditions for convergence in probability of a sample expectile to a limiting sequence or to a constant. Convergence (in probability) of expectile functions to a limiting sequence is uniform whenever it occurs. And, though the mean or another expectile may converge to a constant in special cases where the distribution lacks a finite first moment, it is impossible for any two or more distinct expectiles to converge to constants unless a finite first moment exists. In that case, the Strong Law applies and convergence will be almost sure.
Khintchine's weak law is an obvious corollary. Chebyshev's weak law is not intended for random samples per se; but the proof I have given here can be adapted to that theorem trivially so that all expectiles converge, uniformly, under Chebyshev's conditions.
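For reference, the textbook forms I have in mind (standard statements, not results from the paper):

$$
\text{Khintchine:}\quad X_1, X_2, \ldots \text{ i.i.d. with } \mathbb{E}|X_1| < \infty \;\Longrightarrow\; \bar{X}_n \xrightarrow{\,p\,} \mathbb{E}[X_1];
$$

$$
\text{Chebyshev:}\quad X_1, X_2, \ldots \text{ independent with } \operatorname{Var}(X_i) \le c < \infty \;\Longrightarrow\; \bar{X}_n - \mathbb{E}[\bar{X}_n] \xrightarrow{\,p\,} 0.
$$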
Okay, a third one.
We prove a uniform ergodic theorem for expectiles that implies the pointwise ergodic theorem in one special case. The converse is also true: we show that uniform convergence of expectile functions is implied by the pointwise ergodic theorem.
To borrow Hal White's analogy, ergodic processes are weakly dependent enough that the sample mean converges in the same way that it does in i.i.d. processes. It should not be surprising that the entire expectile function of a random variable has this property.
Econometric Reviews (2021), Vol. 40, No. 2.
Why? A lot of authors were saying something that seemed wrong. I noticed it, wrote a paper, and sent it in.
This article compares two asymmetric Gaussian likelihood models and their corresponding estimators. Recently, there has been confusion in the literature regarding these models and (1) whether they are the same, or (2) whether both of them can be used to estimate expectiles. After the comparison, it becomes clear that they are not the same and only one of these models is appropriate for that purpose. The similarity between these models is purely superficial. The historical origin of expectiles has also been disputed: some degree of credit can be shared between two papers.
Followup to "The MLE of Aigner, Amemiya, and Poirier is not the Expectile MLE."
Maximum likelihood methods for estimating expectiles have not been well studied, and there is general confusion about which model could even do such a thing. So I have undertaken a brief study of the mathematical properties of the generalized Gaussian probability distribution that can be used in expectile likelihood methods. As distributions go, the new asymmetric normal distribution (AND) has quite interesting properties.
We study a novel asymmetric normal distribution. In addition to its usefulness in estimating expectiles, the new distribution has attractive properties for approximating unknown distributions. Notably, it can serve as a generalized Taylor approximation for distributions whose density is not twice differentiable at the mode. The new distribution also has the maximum entropy of any distribution supported on the real line with asymmetric conditional variance. We present several other properties.
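For concreteness, one way to parameterise such an asymmetric normal density (the paper's exact parameterisation may differ) is

$$
f(x \mid \mu, \sigma, \tau) \;=\; \frac{1}{K(\sigma,\tau)}\, \exp\!\left(-\,\frac{\bigl|\tau - \mathbf{1}\{x \le \mu\}\bigr|\,(x - \mu)^2}{\sigma^2}\right), \qquad K(\sigma,\tau) \;=\; \frac{\sigma\sqrt{\pi}}{2}\left(\frac{1}{\sqrt{\tau}} + \frac{1}{\sqrt{1-\tau}}\right),
$$

which reduces to the N(μ, σ²) density at τ = 1/2, and whose log-likelihood is, up to constants, the negative of the asymmetric least squares criterion, so maximizing it in μ yields the τ-expectile.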
Written as a stepping-stone towards applied work with regularized or Bayesian estimators. Quasi-likelihoods that elicit expectiles have not been studied (see the issues raised in the papers above).
tl;dr: The only likelihoods that elicit expectiles under general conditions are those that can be parameterised exactly as in the asymmetric normal distribution I studied in the paper above. This leads to a set of estimators that replicate those proposed by Newey and Powell or my generalized expectile regression estimator, which is BLUE under heteroscedasticity.
There is a uniform Cramér-Rao theorem for expectile regression as well, giving a lower bound on the variance of any unbiased estimator in the parametric model. Notably, this paper introduces an asymmetrically generalized central limit theorem that can be used for expectile regression estimators without assuming continuity of the underlying distribution. A note on how this CLT relates to quantile regression and median-stable distributions is forthcoming.
Written as a note for beginners and students. Very unpolished, but it may be helpful to those new to this area.
This article obtains expectile regression coefficients using three distinct methods: by minimizing an asymmetric least squares criterion, by maximizing a particular likelihood, and by imposing a simple moment condition. All three methods produce equivalent estimators. Thus, we find that the well-studied equivalence of OLS estimates (using the same three methods) extends to expectiles, the most popular variety of generalized quantiles.
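In symbols (one standard notation; the article's may differ), the three routes to the same τ-expectile regression coefficients are the asymmetric least squares minimizer, the maximizer of an asymmetric normal likelihood whose log is, up to constants, the negative of that same criterion, and the solution of an asymmetric moment condition:

$$
\hat{\beta}_\tau \;=\; \operatorname*{arg\,min}_{\beta} \sum_i \bigl|\tau - \mathbf{1}\{y_i \le x_i'\beta\}\bigr|\,(y_i - x_i'\beta)^2 \;=\; \operatorname*{arg\,max}_{\beta} \sum_i \log f(y_i \mid x_i'\beta, \sigma, \tau),
$$

$$
\sum_i \bigl|\tau - \mathbf{1}\{y_i \le x_i'\hat{\beta}_\tau\}\bigr|\,(y_i - x_i'\hat{\beta}_\tau)\,x_i \;=\; 0,
$$

where the second display is simply the first-order condition of the first.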