Working Papers
Strategic Personalization: Policy Design and Evaluation under Multiple Instruments
Fatemeh Gheshlaghpour, Sanjog Misra, and Pradeep K. Chintagunta (Job Market Paper)
Firms often face the challenge of deciding which instruments to personalize in order to maximize objectives such as short-run profits. This paper studies the policy selection problem across personalization schedules that integrate high-dimensional discrete and continuous instruments. We estimate heterogeneous treatment effects using a structured deep neural network augmented with a constraint layer that enforces economically motivated demand restrictions (e.g., monotonicity in price). To evaluate candidate policies across personalization schedules, we present model-based policy evaluations and complement them with an off-policy evaluation (OPE) approach for mixed discrete-continuous treatments. Our OPE method combines neighbor matching in the discrete space with smoothness and monotonicity assumptions in the continuous dimension to construct bounds on expected profit.
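The monotonicity-in-price restriction mentioned above can be enforced architecturally rather than via penalties. A minimal sketch of the idea, using a toy one-layer demand index in which the price coefficient is reparameterized through a negative softplus so it is negative for any learned parameter values (the network here is a hypothetical illustration, not the paper's architecture):

```python
import math

def softplus(z):
    # numerically stable softplus; output is always strictly positive
    return math.log1p(math.exp(-abs(z))) + max(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def demand(x, price, params):
    """Toy demand score that is nonincreasing in price by construction.

    params holds unconstrained reals; the raw price weight is mapped
    through -softplus(.), so the effective price coefficient is strictly
    negative no matter what values training produces.
    (Hypothetical minimal setup for illustration only.)
    """
    w_x, b, w_p_raw = params
    index = sum(wi * xi for wi, xi in zip(w_x, x)) + b
    beta_price = -softplus(w_p_raw)  # guaranteed < 0
    return sigmoid(index + beta_price * price)

params = ([0.8, -0.3], 0.1, 0.5)  # arbitrary stand-in "learned" values
x = [1.0, 2.0]
d_low, d_high = demand(x, 1.0, params), demand(x, 3.0, params)
assert d_high <= d_low  # monotonicity holds for any parameter values
```

The same reparameterization trick extends to deep networks by constraining the sign of every weight along the path from price to the output.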
We apply the framework to a large-scale direct-mail field experiment conducted by a consumer lender that jointly randomized advertising content and interest rates. In model-based policy evaluation, personalizing advertising content captures nearly all gains from full personalization: expected profit per unit loan amount increases by $31.2\%$ relative to the optimal uniform policy, compared with $31.4\%$ under full personalization, while interest-rate personalization alone yields negligible gains. Our OPE bounds place the model-based estimates within their feasible ranges and provide a credibility check for the ranking: comparing interval midpoints, content-only personalization increases profit by $86.8\%$ and full personalization by $110.8\%$ relative to the uniform policy, although the intervals overlap. Overall, the results suggest that personalizing demand shifters such as advertising content delivers most of the benefits of personalization while mitigating the regulatory and reputational risks associated with personalized pricing.
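The bounding logic behind the OPE intervals can be conveyed with a stylized sketch. Assuming expected profit is nonincreasing in the interest rate among units matched on the discrete (content) dimension, outcomes observed at rates below and above a target rate bracket the expected outcome at that rate. This omits the smoothness component and any weighting, so it is an illustration of the monotonicity step only, not the paper's estimator:

```python
from statistics import mean

def profit_bounds(target_rate, matched):
    """Bound expected profit at target_rate from matched observations.

    matched: list of (rate, outcome) pairs for units matched on the
    discrete treatment (e.g., same ad content). Assumes expected
    outcome is nonincreasing in rate and both sides are nonempty.
    (Stylized sketch; ignores smoothness, weighting, and sampling error.)
    """
    below = [y for r, y in matched if r <= target_rate]
    above = [y for r, y in matched if r >= target_rate]
    upper = mean(below)  # lower rates -> at least as high expected outcome
    lower = mean(above)  # higher rates -> at most as high expected outcome
    return lower, upper

matched = [(3.0, 10.0), (4.0, 8.0), (6.0, 5.0), (7.0, 4.0)]
lo, hi = profit_bounds(5.0, matched)  # brackets profit at rate 5.0
```

Comparing interval midpoints across candidate policies, as in the application above, then gives a single ranking statistic while the interval widths convey the remaining ambiguity.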
The Impact of TV Advertising Content on Sales: Generalizable Results from 196 Brands
Fatemeh Gheshlaghpour, Nima Jalali, Pradeep K. Chintagunta, and Purushottam Papatla
Despite roughly $60B in 2024 U.S. TV ad spend, canonical studies report small average advertising elasticities, raising the question of why firms continue to invest heavily in television. We examine whether creative content, specifically the emotions ads evoke, helps explain variation in sales response beyond spend alone. Using a large-scale dataset linking weekly retail sales (Nielsen RMS and Ad Intel) with iSpot.tv measures of viewer-reported feelings for 6,283 campaigns across 196 brands in 16 categories, we provide one of the first empirical assessments of how ad-evoked emotions relate to sales outcomes. To handle overlapping campaigns, we introduce the epoch, a DMA-specific period during which multiple campaigns for a brand air concurrently or retain adstock, allowing campaign-level inference despite dense advertising schedules. Within each category, we factor-analyze 57 distinct emotions, score campaigns on category-specific feeling factors, and estimate how these factors moderate advertising elasticities.
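The epoch construction described above amounts to merging campaign airing intervals that overlap once each is extended by an adstock carryover window. A minimal sketch for one brand in one DMA (the carryover length and the interval representation are assumptions for illustration, not the paper's exact procedure):

```python
def build_epochs(campaigns, carryover_weeks=4):
    """Group a brand's campaigns in one DMA into epochs.

    campaigns: list of (start_week, end_week) airing intervals.
    Each interval is extended by an adstock carryover window; any
    intervals that then overlap are merged into a single epoch.
    (Illustrative sketch; the carryover window length is an assumption.)
    """
    padded = sorted((s, e + carryover_weeks) for s, e in campaigns)
    epochs = []
    for s, e in padded:
        if epochs and s <= epochs[-1][1]:
            # overlaps (or abuts) the current epoch: extend it
            epochs[-1][1] = max(epochs[-1][1], e)
        else:
            epochs.append([s, e])
    return [tuple(ep) for ep in epochs]

# Two campaigns whose carryover windows overlap fall into one epoch;
# a later, isolated campaign forms its own epoch.
epochs = build_epochs([(1, 4), (6, 8), (20, 22)], carryover_weeks=2)
```

Estimation can then proceed at the epoch level, attributing sales within an epoch to the set of campaigns active in it.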
We find that ad-evoked feelings are systematically related to sales elasticities across categories: both positive and negative emotions influence performance in distinct, category-specific ways. For example, in beer, "Energetic/Exciting/Arresting" and "Heartfelt/Inspiring/Narrative" factors are associated with higher elasticities, whereas "Soothing/Cinematic/Colorful" is associated with lower elasticities. In cereal, "Ingenious/Cute/Funny" increases elasticity, while "Dishonest/Risque/Irksome" reduces it. Our findings suggest that what ads make viewers feel explains heterogeneity in advertising effectiveness, clarifying why firms continue to invest in creative advertising. Methodologically, we introduce epoch-based measurement for overlapping campaigns and employ scalable viewer-level emotion data in place of small coder panels, offering a general framework for quantifying content effects at scale.