Wanning Chen

Hi! This is Wanning Chen. Thank you for visiting my website. I recently joined the University of Washington as an assistant professor in the Information Systems group at the Foster School of Business.
I completed my Ph.D. in Business at Stanford University in 2022 (specifically, in the Operations, Information and Technology group at the Graduate School of Business), under the supervision of Professor Mohsen Bayati.
My research focuses on developing statistical methodology for data-driven decision making. In particular, I design novel machine learning algorithms for problems where the underlying data has a natural matrix structure.
Previously, I graduated magna cum laude from Pomona College in 2016 with a BA in Mathematics.
Email: wnchen (at) uw (dot) edu
Link to my CV


Working Papers

Learning to Recommend Using Non-Uniform Data

Joint work with Mohsen Bayati

Link to this paper

Learning user preferences for products from their past purchases or reviews is a cornerstone of modern recommendation engines. One complication in this learning task is that some users are more likely to purchase or review products, and some products are more likely to be purchased or reviewed. This non-uniform pattern degrades the performance of many existing recommendation algorithms, which assume that the observed data is sampled uniformly at random among user-product pairs. In addition, the existing literature on modeling non-uniformity either assumes user interests are independent of the products or lacks theoretical understanding. In this paper, we first model user-product preferences as a partially observed matrix with a non-uniform observation pattern. Next, building on the literature on low-rank matrix estimation, we introduce a new weighted trace-norm penalized regression to predict the unobserved entries of the matrix. We then prove an upper bound on the prediction error of our proposed approach; the bound is governed by a weight matrix that depends on the joint distribution of users and products. Utilizing this observation, we introduce a new optimization problem that selects the weight matrix minimizing the upper bound on the prediction error. The final product is a new estimator, NU-Recommend, that outperforms existing methods on both synthetic and real datasets.
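
For readers curious about the general shape of this approach, here is a rough sketch only; the exact estimator and the data-driven weight choice in the paper differ. A weighted trace-norm penalized regression typically takes the form

    \hat{M} = \arg\min_{X} \sum_{(i,j) \in \Omega} (Y_{ij} - X_{ij})^2 + \lambda \, \| D_r^{1/2} X D_c^{1/2} \|_*,

where \Omega is the set of observed user-product pairs, \| \cdot \|_* denotes the trace (nuclear) norm, and D_r, D_c are diagonal weight matrices over users and products.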

Presented at INFORMS 2020, MSOM 2021, CORS 2021, Cornell ORIE Young Researchers Workshop 2021, INFORMS 2021.

A 20-min recording is available at this link. Please have a look if you are interested!

Speed Up the Cold-Start Learning in Two-Sided Bandits with Many Arms

Joint work with Mohsen Bayati and Junyu Cao

Link to this paper

Multi-armed bandit (MAB) algorithms are efficient approaches to reduce the opportunity cost of online experimentation and are used by companies to find the best product from periodically refreshed product catalogs. However, these algorithms face the so-called cold-start problem at the onset of the experiment due to a lack of knowledge of customer preferences for new products, requiring an initial data-collection phase known as the burn-in period. During this period, MAB algorithms operate like randomized experiments, incurring large burn-in costs that scale with the number of products. We reduce this cost by observing that many products are naturally two-sided, so their rewards can be modeled with a matrix whose rows and columns represent the two sides. We then design two-phase bandit algorithms that first use subsampling and low-rank matrix estimation to obtain a substantially smaller targeted set of products, and then apply a UCB procedure on the targeted set to find the best one. We theoretically show that the proposed algorithms lower costs and expedite the experiment when experimentation time is limited and the product set is large. Our analysis also reveals three regimes of long, short, and ultra-short horizon experiments, depending on the dimensions of the matrix. Empirical evidence from both synthetic data and a real-world dataset from a music streaming service validates this superior performance.
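
For illustration only, here is a short, heavily simplified Python sketch of the two-phase idea; it is not the algorithm analyzed in the paper, and the function names and the pull interface are hypothetical. Phase one fits a rank-truncated estimate of a noisy, subsampled reward matrix and keeps the highest-estimated arms; phase two runs a standard UCB procedure on that reduced set.

    import numpy as np

    def phase1_target_set(sample_rewards, rank=2, target_size=5):
        # Rank-truncated SVD of the (noisy) subsampled reward matrix,
        # then keep the arms with the highest estimated rewards.
        U, s, Vt = np.linalg.svd(sample_rewards, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        top = np.argsort(low_rank, axis=None)[::-1][:target_size]
        return [np.unravel_index(i, low_rank.shape) for i in top]

    def phase2_ucb(pull, target_arms, horizon=1000):
        # Standard UCB over the reduced target set; pull(arm) returns a noisy reward.
        n = np.zeros(len(target_arms))
        mean = np.zeros(len(target_arms))
        for t in range(1, horizon + 1):
            if t <= len(target_arms):
                k = t - 1  # pull each target arm once to initialize
            else:
                k = int(np.argmax(mean + np.sqrt(2 * np.log(t) / n)))
            r = pull(target_arms[k])
            n[k] += 1
            mean[k] += (r - mean[k]) / n[k]
        return target_arms[int(np.argmax(mean))]

    # Example with synthetic rank-1 rewards: 20 x 20 two-sided arms, Gaussian noise.
    rng = np.random.default_rng(0)
    true_rewards = np.outer(rng.random(20), rng.random(20))
    noisy = true_rewards + 0.1 * rng.standard_normal(true_rewards.shape)
    targets = phase1_target_set(noisy, rank=1, target_size=5)
    best = phase2_ucb(lambda a: true_rewards[a] + 0.1 * rng.standard_normal(), targets)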

Presented at MIW 2022, RMP 2022, INFORMS 2022.

Synthetic Control with Non-uniform Panel Data

Joint work with Mohsen Bayati

Teaching Experience

Teaching Assistant

OIT 367 (MBA core), Business Intelligence from Big Data, Winter 2019, Winter 2020, Graduate School of Business, Stanford University.

OIT 604 (PhD seminar), Data, Learning, and Decision-Making, Spring 2019, Spring 2020, Graduate School of Business, Stanford University.

Math 187, Operations Research, Spring 2016, Pomona College.

Math 113, Number Theory & Cryptography, Spring 2016, Pomona College.

Math 151, Probability, Spring 2014, Pomona College.

Math 60, Linear Algebra, Spring 2013, Spring 2015, Pomona College.