Google Scholar, Researchgate, SSRN, arXiv
^: Undergraduate students I collaborated with.
6. Liangyu Ding^, Chenghan Wu^, Guokai Li, and Zizhuo Wang, “Learning to Price Bundles: A GCN Approach for Mixed Bundling”, in progress.
Bundle pricing refers to designing several product combinations (i.e., bundles) and determining their prices so as to maximize expected profit. It is a classic problem in revenue management and arises in many industries, such as e-commerce, tourism, and video games. However, the problem is typically intractable due to the exponential number of candidate bundles. In this paper, we explore the use of graph convolutional networks (GCNs) for solving the bundle pricing problem. Specifically, we first develop a graph representation of the mixed bundling model (where every possible bundle is assigned its own price) and then train a GCN to learn the latent patterns of optimal bundles. Based on the trained GCN, we propose two inference strategies to derive high-quality feasible solutions, along with a local-search technique that further improves solution quality. Numerical experiments validate the effectiveness and efficiency of the proposed GCN-based framework. Using a GCN trained on instances with 5 products, our methods consistently achieve near-optimal solutions (above 97% of the optimum) in a fraction of the computational time for small- to medium-sized problems. They also deliver superior solutions on larger problems compared with heuristics such as bundle size pricing (BSP), and provide high-quality solutions for instances with more than 30 products, even in the challenging case of non-additive product utilities.
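To see why the problem is hard, here is a toy brute-force sketch of mixed bundling itself (not the paper's GCN method); the three-customer instance, the price grid, and all numbers are hypothetical:

```python
import itertools

# Toy mixed-bundling instance: 2 products, 3 customer types with
# additive valuations (a bundle's value = sum of its products' values).
customer_values = [  # (value of product 0, value of product 1)
    (8.0, 2.0),
    (2.0, 8.0),
    (5.0, 5.0),
]
bundles = [(0,), (1,), (0, 1)]  # every non-empty product combination

def bundle_value(values, bundle):
    return sum(values[i] for i in bundle)

def profit(prices):
    """Average profit when each customer buys the surplus-maximizing
    bundle, or nothing if every surplus is non-positive."""
    total = 0.0
    for values in customer_values:
        best_surplus, best_price = 0.0, 0.0
        for bundle, price in zip(bundles, prices):
            surplus = bundle_value(values, bundle) - price
            if surplus > best_surplus:
                best_surplus, best_price = surplus, price
        total += best_price
    return total / len(customer_values)

# Grid search over a price for every bundle: already 5^3 combinations
# here, and 2^n - 1 bundles for n products in general.
grid = [2.0, 4.0, 6.0, 8.0, 10.0]
best = max(itertools.product(grid, repeat=len(bundles)), key=profit)
print(best, profit(best))
```

The exponential number of bundle prices is exactly the blow-up that motivates learning a mapping to good solutions instead of searching directly.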
5. Guokai Li, Pin Gao, Stefanus Jasin, and Zizhuo Wang, “From Small to Large: A Graph Convolutional Network Approach for Solving Assortment Optimization Problems”, under review.
TL;DR: Use a graph neural network to derive high-quality solutions to large-scale constrained assortment problems within seconds.
Assortment optimization seeks to select a subset of substitutable products, subject to constraints, to maximize expected revenue. The problem is NP-hard due to its combinatorial and nonlinear nature and arises frequently in industries such as e-commerce, where platforms must solve thousands of such problems each minute. We propose a graph convolutional network (GCN) framework to efficiently solve constrained assortment optimization problems. Our approach constructs a graph representation of the problem, trains a GCN to learn the mapping from problem parameters to optimal assortments, and develops three inference policies based on the GCN's output. Owing to the GCN's ability to generalize across instance sizes, patterns learned from small-scale samples can be transferred to large-scale problems. Numerical experiments show that a GCN trained on instances with 20 products achieves over 85% of the optimal revenue on problems with up to 2,000 products within seconds, outperforming existing heuristics in both accuracy and efficiency. We further extend the framework to settings with an unknown choice model using transaction data and demonstrate similar performance and scalability.
4. Guokai Li, Zizhuo Wang, and Jingwei Zhang, “Infrequent Resolving Algorithm for Online Linear Programming”, minor revision at Mathematics of Operations Research.
Preliminary version accepted by WINE 2024: Proceedings of the 20th Conference on Web and Internet Economics.
Winner, Best Student Paper Award, Data Science and Decision Intelligence Community of the Operations Research Society of China, 2024.
TL;DR: With only a few LP re-solves, we achieve constant regret (even without the non-degeneracy assumption) for OLP and NRM problems.
Online linear programming (OLP) has gained significant attention from both researchers and practitioners due to its extensive applications, such as online auctions, network revenue management, order fulfillment, and advertising. Existing OLP algorithms fall into two categories: LP-based algorithms and LP-free algorithms. The former typically guarantee better performance, even offering constant regret, but require solving a large number of LPs, which can be computationally expensive. In contrast, LP-free algorithms require only first-order computations but deliver worse performance, lacking a constant regret bound. In this work, we bridge the gap between these two extremes with a well-performing algorithm that solves LPs at a few selected time points and conducts first-order computations at all others. Specifically, when the inputs are drawn from an unknown finite-support distribution, the proposed algorithm achieves constant regret (even in the hard 'degenerate' case) while solving LPs only O(log log T) times over the time horizon T. Moreover, when we are allowed to solve LPs only M times, we design a resolving schedule under which the algorithm guarantees a regret of nearly O(T^{(1/2)^{M-1}}). Our work highlights the value of resolving both at the beginning and at the end of the selling horizon, and provides a novel framework for proving performance guarantees under different infrequent resolving schedules. Furthermore, when the arrival probabilities are known at the outset, our algorithm guarantees constant regret by solving LPs O(log log T) times, and a nearly O(T^{(1/2)^{M}}) regret by solving LPs only M times. Numerical experiments demonstrate the efficiency of the proposed algorithms.
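One natural way to get an O(log log T) resolving count is to re-solve whenever the remaining horizon has square-rooted; the schedule below is an illustrative assumption consistent with the stated bounds, not necessarily the paper's exact construction:

```python
def resolving_schedule(T):
    """Candidate resolving times for horizon T: re-solve at
    t_k = T - T^((1/2)^k), so the remaining horizon shrinks from
    T^(1/2) to T^(1/4) to T^(1/8), ... between re-solves.
    The number of distinct times grows like log log T.
    """
    times, k = [], 1
    while True:
        t = int(T - T ** (0.5 ** k))
        if not times or t > times[-1]:
            times.append(t)
            k += 1
        else:
            break  # further times would coincide at this resolution
    return times

for T in (10**3, 10**6):
    sched = resolving_schedule(T)
    print(T, len(sched), sched)
```

Note how heavily the schedule is weighted toward the end of the horizon, echoing the abstract's point that resolving near the end of the selling horizon is particularly valuable.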
3. Guokai Li, Pin Gao, and Zizhuo Wang, “When to Push Ads: Optimal Mobile Ad Campaign Strategy under Markov Customer Dynamics”, major revision at Manufacturing & Service Operations Management.
Preliminary version accepted by WINE 2024: Proceedings of the 20th Conference on Web and Internet Economics.
Finalist, Best Student Paper Competition, POMS-HK, 2024.
TL;DR: Use mobile ads to influence customer engagement, a partially observed Markovian state, and thereby boost revenue.
We investigate a seller's optimal advertising campaign strategy targeting customers who interact with the seller over time. We model customers' engagement by a continuous-time Markov chain with two states, active and inactive. While in the active state, customers make purchases according to a Poisson process, with each purchase yielding a certain reward; in contrast, customers in the inactive state make no purchases. The seller can use advertisements to activate customers, though the activation probability may be dampened by customers' promotion fatigue. The seller's objective is to maximize the long-term average expected profit by designing an optimal ad campaign strategy based on customers' purchase histories. For a single customer without budget constraints, we demonstrate the optimality of a triple-threshold policy based on the elapsed time since the last purchase or ad campaign. Moreover, we find that the seller tends to push ads earlier to customers with high purchase rates and low recapture rates (i.e., low transition rates from the inactive state to the active state). Surprisingly, we also find that the seller should push ads earlier to customers with intermediate churn rates (i.e., intermediate transition rates from the active state to the inactive state) than to those with small or large churn rates. When confronted with a budget constraint on the overall ad cost, we suggest first solving a relaxed problem and then implementing a separate triple-threshold policy for each customer with a specified budget. We establish the asymptotic optimality of this policy and show that the asymptotic loss is of order O(\sqrt{\log T /T}) for the general problem and O(1/\sqrt{T}) for a special case. In addition, we explore the case with strategic customer reactions. Overall, our work reveals important insights into the optimal ad campaign problem and provides a practical strategy.
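The model lends itself to a quick Monte Carlo sanity check. The sketch below simulates a simpler single-threshold policy (the paper establishes optimality of a triple-threshold policy; all rates, the ad cost, and the activation probability here are hypothetical), showing how the threshold trades ad spend against reactivation:

```python
import random

def simulate(tau, T=50_000, lam=1.0, theta=0.3, p=0.8,
             reward=1.0, ad_cost=0.1, seed=0):
    """Long-run average profit of a single-threshold policy: push an ad
    whenever tau time units have elapsed since the last purchase or ad.
    lam: purchase rate while active; theta: churn rate (active ->
    inactive); p: probability an ad reactivates an inactive customer.
    """
    rng = random.Random(seed)
    t, last, profit = 0.0, 0.0, 0.0
    active = True
    while t < T:
        if active:
            x = rng.expovariate(lam)    # time to the next purchase
            y = rng.expovariate(theta)  # time to churn
            ad_in = (last + tau) - t    # time until the ad timer fires
            if ad_in <= min(x, y):
                t += ad_in
                profit -= ad_cost       # ad "wasted" on an active customer
                last = t
            elif x <= y:
                t += x
                profit += reward        # purchase observed; timer resets
                last = t
            else:
                t += y
                active = False          # churn is unobserved by the seller
        else:
            t = last + tau              # idle until the ad timer fires
            profit -= ad_cost
            last = t
            if rng.random() < p:
                active = True           # ad reactivates the customer
    return profit / t

print(simulate(2.0), simulate(100.0))
```

With these toy parameters a moderate threshold comfortably beats a very patient one, since an inactive customer earns nothing until an ad brings them back.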
2. Guokai Li, Ningyuan Chen, Guillermo Gallego, Pin Gao, and Steven Kou, “Dealership or Marketplace with Fulfillment Services: A Dynamic Comparison”, Manufacturing & Service Operations Management, 2024. [SSRN]
TL;DR: Compare two widely adopted two-sided market business models, dealership and marketplace, using multi-period dynamic models.
Problem Definition: We consider two business models for a two-sided economy under uncertainty: dealership and marketplace with fulfillment services. Although both business models can bridge the gap between demand and supply, it is not clear which is better for the firm or for consumers. Methodology/Results: We show that while the two models differ substantially in pricing power, inventory risk, fee structure, and fulfillment time, they share several important features, and the revenues the firm earns under the two models converge when the market is thick. We also show that for thick markets there is a one-to-one mapping between the corresponding optimal policies. Managerial Implications: Our results provide guidelines for firms entering two-sided markets: when the market is thick, the two business models are similar; when the market is thin, firms should carefully inspect a number of market conditions before making the choice.
1. Guokai Li, Pin Gao, and Zizhuo Wang, “Optimal Emission Regulation under Market Uncertainty”, Naval Research Logistics, 2024. [SSRN]
TL;DR: Compare emission tax regulation with emission permit regulation when market demand is uncertain.
Government regulations for emission control can be broadly divided into two categories: price instruments and quantity instruments. In this paper, we develop a stylized model to compare the two instruments in the presence of market uncertainty. We find that when the emission intensity (i.e., emissions per unit of production) and the market uncertainty are both high or both low, the expected social welfare under the price instrument is higher; otherwise, the quantity instrument performs comparatively better. These results are robust to incorporating firm competition and national/regional pollution damage. We then demonstrate that the government's quick-response capability, or a hybrid of the price and quantity instruments, can improve expected social welfare, especially for high-emitting industries when the market uncertainty is intermediate. Lastly, for heterogeneous firms, we find that allowing permit trading under the quantity instrument may not be beneficial when each firm's pollution is more likely to have regional effects.