Qiaochu Wang, Yan Huang, Param Vir Singh, and Kannan Srinivasan. Algorithms, Artificial Intelligence, and Simple Rule-Based Pricing. [Best Student Paper Award Nominee, CIST 2022]
Automated pricing strategies in e-commerce fall broadly into two forms: simple rule-based strategies, such as undercutting the lowest price, and more sophisticated artificial intelligence (AI)-powered algorithms, such as reinforcement learning (RL) based strategies. Although simple rule-based pricing remains the most widely used approach, some retailers have adopted AI-powered pricing algorithms. RL algorithms are particularly appealing for pricing because of their ability to autonomously learn an optimal policy and adapt to changes in competitors' pricing strategies and in the market environment. Despite the common belief that RL algorithms hold a significant advantage over rule-based strategies, our extensive pricing experiments demonstrate that, when competing against RL pricing algorithms, simple rule-based algorithms can lead to higher prices and benefit both sellers, compared with scenarios in which multiple RL algorithms compete against one another. To validate our findings, we estimate a non-sequential structural demand model using individual-level data from a large e-commerce platform and conduct counterfactual simulations. The results show that in a real-world demand environment, simple rule-based algorithms outperform RL algorithms when facing RL competitors. Our research sheds new light on the effectiveness of automated pricing algorithms and their interactions in competitive markets, and provides practical guidance for retailers selecting pricing strategies.
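The sketch below is a minimal, purely illustrative toy in the spirit of the comparison described above: a stylized repeated-pricing duopoly in which a one-step undercutting rule faces a tabular Q-learning seller. The price grid, logit demand system, and all parameters are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# A minimal sketch, NOT the paper's simulation: a stylized repeated-pricing duopoly
# with logit demand, one rule-based seller that undercuts the rival's last price,
# and one tabular Q-learning seller. All parameters and functional forms are
# illustrative assumptions.

rng = np.random.default_rng(0)
PRICES = np.linspace(1.0, 2.0, 11)   # discrete price grid (assumed)
COST = 1.0                           # marginal cost (assumed)
STEP = PRICES[1] - PRICES[0]

def profit(p_own, p_rival, a=2.0, mu=0.25):
    """Per-period profit under a logit demand system (illustrative)."""
    u = np.exp(np.array([a - p_own, a - p_rival]) / mu)
    shares = u / (1.0 + u.sum())              # outside option utility normalized to 0
    return (p_own - COST) * shares[0]

def undercut(rival_price):
    """Rule-based strategy: one grid step below the rival, floored at marginal cost."""
    target = max(rival_price - STEP, COST)
    return PRICES[np.argmin(np.abs(PRICES - target))]

def nearest_idx(price):
    return int(np.argmin(np.abs(PRICES - price)))

# Q-learning: state = rival's last price, action = own price for this period.
Q = np.zeros((len(PRICES), len(PRICES)))
alpha, gamma, eps = 0.1, 0.95, 0.1

p_rule = PRICES[-1]                           # rule-based seller's initial price
for t in range(50_000):
    state = nearest_idx(p_rule)
    action = rng.integers(len(PRICES)) if rng.random() < eps else int(Q[state].argmax())
    p_rl = PRICES[action]
    p_rule = undercut(p_rl)                   # rule-based rival responds
    reward = profit(p_rl, p_rule)
    Q[state, action] += alpha * (reward + gamma * Q[nearest_idx(p_rule)].max()
                                 - Q[state, action])

print(f"final RL price: {p_rl:.2f}, final rule-based price: {p_rule:.2f}")
```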
Qiaochu Wang, Yan Huang, and Param Vir Singh. Wrong Model or Wrong Practices? The Impact of Demand Model Mis-specification on Bias Estimation in Personalized Pricing.
The societal significance of fair machine learning (ML) cannot be overstated, yet quantifying algorithmic bias and ensuring fair ML remain challenging tasks. One popular fair ML objective, equality of opportunity, requires equal treatment for individuals who are equally deserving, regardless of their group affiliation. However, determining who should be considered "equally deserving" is a complex and critical question that directly affects the estimation of algorithmic bias. This paper shows that correctly modeling equal deservingness is essential for accurately estimating algorithmic bias. To illustrate this, the paper examines the case of personalized pricing and shows that assuming a mis-specified model of equal deservingness can produce incorrect bias estimates. Using a detailed consumer data set from a large e-commerce platform, the paper demonstrates that when the correct consumer demand model is a non-sequential search model in which consumers' search costs differ by gender, assuming a standard choice demand model or a traditional ML model (e.g., a support vector machine) leads to incorrect bias estimates. This research highlights the critical role that proper model specification plays in achieving fair ML practices.
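As a minimal illustration of the point above, the snippet below computes an equality-of-opportunity gap under two different labelings of who counts as "equally deserving" and shows that the measured bias changes with the label. The synthetic data, latent-utility construction, group shift, and thresholds are all assumptions for illustration; this is not the paper's data or estimation procedure.

```python
import numpy as np

# Minimal illustration, not the paper's estimation: the measured equality-of-
# opportunity gap depends on the model that defines "equally deserving".
# All data are synthetic; the latent-utility construction, thresholds, and
# group shift are illustrative assumptions.

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                         # e.g., a 0/1 gender indicator
latent = rng.normal(size=n) + 0.3 * group             # latent willingness to pay
offer = latent + rng.normal(scale=0.5, size=n) > 0.3  # favorable treatment (e.g., a discount)

# Two candidate notions of who is "deserving" of the favorable treatment:
deserving_a = latent > 0.0                # demand model A: raw latent utility
deserving_b = latent - 0.3 * group > 0.0  # demand model B: nets out the group shift

def eo_gap(deserving):
    """Equality-of-opportunity gap: difference across groups in the rate of
    favorable treatment among individuals labeled as deserving."""
    rates = [offer[(group == g) & deserving].mean() for g in (0, 1)]
    return rates[1] - rates[0]

print(f"estimated bias under model A: {eo_gap(deserving_a):+.3f}")
print(f"estimated bias under model B: {eo_gap(deserving_b):+.3f}")
```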
Qiaochu Wang, Yan Huang, Stefanus Jasin, and Param Vir Singh. Algorithmic Transparency with Strategic Users. Management Science Vol. 69, No. 4, April 2023, pp. 2297–2317 [Best Student Paper Award, CIST 2019]
Should firms that apply machine learning algorithms in their decision-making make those algorithms transparent to the users they affect? Despite growing calls for algorithmic transparency, most firms have kept their algorithms opaque, citing potential gaming by users that may degrade the algorithm's predictive power. We develop an analytical model to compare firm and user surplus with and without algorithmic transparency in the presence of strategic users and present novel insights. We identify a broad set of conditions under which making the algorithm transparent benefits the firm. We show that, in some cases, even the predictive power of machine learning algorithms may increase if the firm makes them transparent. By contrast, users may not always be better off under algorithmic transparency. The results hold even when the predictive power of the opaque algorithm comes largely from correlational features and the cost for users to improve on them is close to zero. Overall, our results suggest that firms should not view manipulation by users as inherently harmful; rather, they should use algorithmic transparency as a lever to motivate users to invest in more desirable features.
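The toy Monte Carlo sketch below is a numerical companion to the lever described above: it compares a firm's screening payoff when it screens opaquely on a noisy correlational proxy versus when it announces a transparent threshold that some users respond to by genuinely improving. Every distribution, threshold, and payoff is an illustrative assumption; this is not the paper's analytical model.

```python
import numpy as np

# Toy Monte Carlo sketch, not the paper's analytical model: transparency as a lever
# that induces users to improve a quality-relevant feature. All distributions,
# thresholds, and payoffs are illustrative assumptions.

rng = np.random.default_rng(2)
n = 200_000
quality = rng.uniform(0.0, 1.0, n)        # users' baseline quality
cost = rng.uniform(0.0, 0.5, n)           # heterogeneous cost of improving the feature
BAR, GAIN, BENEFIT = 0.6, 0.2, 0.3        # admission bar, quality lift, private benefit

# Opaque regime: the firm screens on a noisy correlational proxy; users cannot respond.
proxy = quality + rng.normal(scale=0.2, size=n)
admit_opaque = proxy > BAR
payoff_opaque = quality[admit_opaque].sum() / n      # admitted quality per capita

# Transparent regime: the threshold is public, so users just below the bar invest when
# investing both flips the decision and costs less than the private benefit of admission.
invest = (quality <= BAR) & (quality + GAIN > BAR) & (cost < BENEFIT)
quality_t = quality + GAIN * invest                  # investment raises true quality
admit_transparent = quality_t > BAR
payoff_transparent = quality_t[admit_transparent].sum() / n

print(f"firm payoff, opaque:      {payoff_opaque:.3f}")
print(f"firm payoff, transparent: {payoff_transparent:.3f}")
```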
Qiaochu Wang, Yan Huang, and Param Vir Singh. Algorithmic Lending, Competition, and Strategic Information Disclosure. Major revision at Marketing Science.
We investigate how competition affects a firm's decision to reveal its machine learning (ML) algorithm in the context of financial lending. Lenders use ML algorithms in their underwriting processes to predict credit risk for potential borrowers, and these algorithms are hidden from the consumers affected by their decisions. As a result, financial intermediaries such as Credit Karma have emerged that provide users with personalized odds of approval for a financial product by reverse engineering a lender's credit approval algorithm. We show that a financial lender can compete successfully by revealing its otherwise secret algorithm to the intermediary. Specifically, competition among lenders leads to an asymmetric equilibrium in algorithm revelation in a duopoly: one lender chooses to reveal its algorithm to the intermediary while the other does not. Compared with the symmetric non-reveal case, in the asymmetric equilibrium the competition faced by the revealing lender softens, while that faced by the non-revealing lender remains largely unchanged. The non-revealing lender does not deviate to reveal its algorithm, because competition among the lenders is most intense when both lenders reveal their algorithms. Interestingly, the asymmetric equilibrium is the only subgame-perfect Nash equilibrium when there is no gaming by consumers. Counter to firms' arguments that competition is the reason for keeping their algorithms secret, our analysis shows that, in financial lending, competition can be the driving force for some firms to reveal their algorithms without any policy intervention.
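To make the reverse-engineering step above concrete, the sketch below shows one generic way an intermediary could approximate a hidden approval rule: fit a surrogate model on applicant features and the approval outcomes users report back, then quote personalized odds of approval. The features, the hidden rule, and all data are synthetic assumptions; this is neither the paper's model nor any intermediary's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative sketch only: an intermediary approximates a lender's hidden approval
# rule with a surrogate model fit on applicant features and observed outcomes, then
# reports personalized odds of approval. Features, the hidden rule, and all data
# are synthetic assumptions.

rng = np.random.default_rng(3)
n = 20_000
credit_score = rng.normal(680, 50, n)
debt_to_income = rng.uniform(0.0, 0.6, n)
X = np.column_stack([credit_score, debt_to_income])

# The lender's hidden rule (unknown to the intermediary); the noise stands in for
# underwriting factors the intermediary does not observe.
approved = credit_score - 300 * debt_to_income + rng.normal(0, 20, n) > 560

# The intermediary fits a surrogate on the outcomes users share with it.
surrogate = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

new_applicant = np.array([[700.0, 0.35]])            # hypothetical user profile
odds = surrogate.predict_proba(new_applicant)[0, 1]  # probability of the "approved" class
print(f"estimated approval probability: {odds:.2f}")
```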