Ta-Wei (David) Huang
PhD Candidate in Marketing at Harvard Business School
My research integrates causal inference and machine learning to address methodological challenges and unintended consequences in customer value management and personalization. Before joining HBS, I worked in industry as a data science manager and consultant. I am also a data science educator, having taught 4,000+ students online.
Topics: Targeting and Personalization, Customer Value Management, Customer Privacy
Methods: Causal Inference, Multitask and Transfer Learning, Representation Learning, Differential Privacy, Fair Machine Learning, Reinforcement Learning
I am on the academic job market in 2024. Please contact me via email at thuang@hbs.edu.
Research
Incrementality Representation Learning: Synergizing Past Experiments for Intervention Personalization
Ta-Wei Huang, Eva Ascarza, and Ayelet Israeli (2024)
Job Market Paper. Under Review. [Paper]
This paper introduces Incrementality Representation Learning (IRL), a novel multi-task representation learning framework that predicts the heterogeneous causal effects of different marketing interventions without extensive factorial experimentation. Applied to 274 consumer packaged goods promotional campaigns, we show that IRL outperforms traditional methods in targeting both tested and untested interventions and customer segments. We further develop a decision tool that enables companies to enhance profitability by tailoring interventions across customer segments based on insights from the IRL model.
Doing More with Less: Overcoming Ineffective Long-term Targeting Using Short-Term Signals
Ta-Wei Huang and Eva Ascarza (2024)
Forthcoming at Marketing Science. [Paper] [Online Appendix]
This paper demonstrates and tackles the challenges of optimizing long-term business performance through targeted interventions. It highlights that relying solely on incrementality models constructed using long-term outcomes may fail to improve those outcomes, especially in contexts involving recurring customer interactions. The ineffectiveness arises because noisy behaviors that are not informative about treatment effect heterogeneity accumulate over time. To counter this, we propose using a surrogate index based on short-term outcomes to reduce the noise, coupled with a separate imputation strategy for handling customer attrition. This approach shows improved effectiveness over current methods, as evidenced in both simulated and real-world applications.
Debiasing Treatment Effect Estimation for Privacy-Protected Data: A Model Auditing and Calibration Approach
Ta-Wei Huang and Eva Ascarza (2023)
Revise & Resubmit at Management Science. [Paper]
Organizations increasingly rely on conditional average treatment effect (CATE) estimation to design data-driven targeted interventions. However, implementing Local Differential Privacy (LDP) for privacy protection can reduce the accuracy of CATE models. This paper characterizes the bias and variance that LDP protection introduces into CATE estimation. To address this, we introduce a Model Auditing and Calibration technique that iteratively refines CATE predictions without the need for denoising, thus preserving privacy while improving accuracy. Tested through simulations and real-world applications, our method surpasses existing techniques, providing organizations with a more precise and privacy-compliant solution for targeted interventions.
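For intuition on why LDP distorts downstream estimates, here is a minimal sketch (not the paper's auditing-and-calibration method) of the classic randomized-response mechanism for a binary outcome: each respondent reports their true value only with probability p = e^ε / (1 + e^ε), so the naive mean of the privatized data is pulled toward 0.5 and must be inverted analytically. All names and numbers below are illustrative assumptions.

```python
import math
import random

def randomized_response(value: int, epsilon: float) -> int:
    """ε-LDP randomized response: report the true bit with
    probability p = e^ε / (1 + e^ε), otherwise flip it."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return value if random.random() < p else 1 - value

random.seed(0)
epsilon = 1.0
p = math.exp(epsilon) / (1 + math.exp(epsilon))

# Illustrative population: 1,000 customers, true outcome rate 0.30.
true_outcomes = [1] * 300 + [0] * 700
private = [randomized_response(y, epsilon) for y in true_outcomes]

# Naive mean of the privatized bits is biased toward 0.5:
# E[report] = p*mu + (1-p)*(1-mu) = (2p-1)*mu + (1-p).
naive_mean = sum(private) / len(private)

# Inverting that expectation gives an unbiased estimate of mu.
debiased_mean = (naive_mean - (1 - p)) / (2 * p - 1)
```

The same logic applies term by term when privatized outcomes feed a CATE model: plugging noisy data in naively biases the effect estimates, which is the failure mode the paper characterizes before proposing its calibration approach.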