Abstract: This paper studies human preference learning based on partially revealed choice behavior and formulates the problem as a generalized Bradley–Terry–Luce (BTL) ranking model that accounts for heterogeneous preferences. Specifically, we assume that each user is associated with a nonparametric preference function and each item is characterized by a low-dimensional latent feature vector; their interaction defines the underlying low-rank score matrix. Under this formulation, we propose an indirect regularization method for collaboratively learning the score matrix, which ensures entrywise $\ell_\infty$-norm error control, a novel contribution to the heterogeneous preference learning literature. This technique is based on sieve approximation and can be extended to a broader class of binary choice models in which a smooth link function is adopted. In addition, by applying a single step of the Newton–Raphson method, we debias the regularized estimator and establish uncertainty quantification for item scores and rankings, for both aggregated and individual preferences. Extensive numerical results on synthetic and real datasets corroborate our theoretical findings.
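To make the setup concrete, the following is a minimal sketch of the model and of the debiasing step; the symbols $\Theta^*$, $f_u$, $x_i$, $\sigma$, and $\mathcal{L}$ are illustrative notation assumed here, not fixed by the abstract. Each comparison by user $u$ between items $i$ and $j$ follows
\[
\mathbb{P}\{\text{user } u \text{ prefers } i \text{ to } j\} = \sigma\!\big(\Theta^*_{ui} - \Theta^*_{uj}\big), \qquad \Theta^*_{ui} = f_u(x_i), \qquad \sigma(t) = \frac{1}{1 + e^{-t}},
\]
where $f_u$ is the user-specific nonparametric preference function, $x_i$ is the item's low-dimensional latent feature, and the score matrix $\Theta^*$ is consequently (approximately) low rank; replacing the logistic link $\sigma$ by another smooth link gives the broader class of binary choice models mentioned above. The one-step debiasing takes the regularized estimator $\widehat{\Theta}$ and applies a single Newton–Raphson update to the negative log-likelihood $\mathcal{L}$, schematically
\[
\widehat{\Theta}^{\mathrm{d}} = \widehat{\Theta} - \big[\nabla^2 \mathcal{L}(\widehat{\Theta})\big]^{-1} \nabla \mathcal{L}(\widehat{\Theta}),
\]
with the exact form of the update (for instance entrywise or projected variants) left to the paper itself.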
Abstract: This paper studies inference about linear functionals of high-dimensional low-rank matrices. While most existing inference methods require consistent estimation of the true rank, our procedure is robust to rank misspecification, making it a promising approach in applications where rank estimation can be unreliable. We estimate the low-rank spaces using pre-specified weighting matrices, known as diversified projections. A novel statistical insight is that, contrary to the usual statistical wisdom that overfitting mainly introduces additional variance, the over-estimated low-rank space also gives rise to a non-negligible bias due to an implicit ridge-type regularization. We develop a new inference procedure and show that the central limit theorem holds as long as the pre-specified rank is no smaller than the true rank. In one of our applications, we study multiple testing with incomplete data in the presence of confounding factors and show that our method remains valid as long as the number of controlled confounding factors is no smaller than the true number, even when no confounding factors are actually present.
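As a rough illustration of the diversified-projection idea, under assumed notation not taken from the paper: suppose one observes $Y = M + E \in \mathbb{R}^{n \times p}$ with $\operatorname{rank}(M) = r$ unknown, and let $W \in \mathbb{R}^{p \times K}$ be a pre-specified, data-independent weighting matrix with $K \ge r$. The low-rank column space is then proxied by
\[
\widehat{F} = \frac{1}{p}\, Y W \in \mathbb{R}^{n \times K}
\]
rather than by estimated singular vectors, so no rank selection is needed. The statistical subtlety highlighted above is that choosing $K$ strictly larger than $r$ does not merely add variance: the redundant directions act like an implicit ridge-type regularization and induce a bias that the inference procedure must account for.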
Abstract: This paper studies the inferential theory for estimating low-rank matrices and, as an application, provides an inference method for the average treatment effect. We show that least squares estimation of the eigenvectors following nuclear norm penalization attains asymptotic normality. The key contribution of our method is that it does not require sample splitting. In addition, the theory accommodates dependent observation patterns and heterogeneous observation probabilities. Empirically, we apply the proposed procedure to estimating the impact of the presidential vote on the allocation of the U.S. federal budget to the states.
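Schematically, and under illustrative notation (the symbols $Y$, $\Omega$, $\lambda$, $\widetilde{M}$, $\widehat{U}$, $\widehat{V}$ are not taken from the paper), the two-stage estimator can be read as follows: with entries of $Y$ observed on a pattern $\Omega$, first solve the nuclear norm penalized problem
\[
\widetilde{M} = \arg\min_{M} \; \tfrac{1}{2} \sum_{(i,t) \in \Omega} \big(Y_{it} - M_{it}\big)^2 + \lambda \|M\|_*,
\]
extract the leading singular vectors $\widehat{U}$ and $\widehat{V}$ of $\widetilde{M}$, and then re-estimate the low-rank matrix by unpenalized least squares over matrices of the form $\widehat{U} B \widehat{V}^{\top}$. The asymptotic normality result concerns this refitted estimator, is obtained without sample splitting, and allows the observation pattern $\Omega$ to be dependent with heterogeneous observation probabilities.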
Abstract: We study priority-based matching markets with public and private endowments. We propose a novel partial order for comparing matching mechanisms in terms of their “fairness.” Using this order, we show that efficiency-adjusted deferred acceptance (EADA) is justified-envy minimal in the class of efficient mechanisms, while top trading cycles (TTC) and other popular mechanisms are not. Our findings highlight EADA as an interesting alternative to TTC in the context of organ transplantation markets. Restricting attention to strategyproof mechanisms, we show that TTC is justified-envy minimal, demonstrating the robustness of the result of Abdulkadiroğlu et al. (2020).
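For readers unfamiliar with the fairness notion, the following standard formalization is offered only as a plausible reading of the abstract, not as the paper's exact definitions. At a matching $\mu$, agent $i$ has justified envy toward agent $j$ if
\[
\mu(j) \succ_i \mu(i) \quad \text{and} \quad i \text{ has higher priority than } j \text{ at } \mu(j),
\]
and one natural way to compare mechanisms, in the spirit of the partial order described above, is to say that $\varphi$ is at least as fair as $\psi$ if, at every problem, the set of justified-envy instances under $\varphi$ is contained in that under $\psi$. Justified-envy minimality of EADA within efficient mechanisms, and of TTC within strategyproof mechanisms, is minimality with respect to such an order.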