Challenges, Best Practices and Pitfalls in Evaluating Results of Online Controlled Experiments

A/B testing is the gold standard for estimating the causal relationship between a change to a product and its impact on key outcome measures. It is widely used in industry to test changes ranging from simple copy or UI changes to more complex changes such as using machine learning models to personalize the user experience. A key aspect of A/B testing is the evaluation of experiment results. Designing the right set of metrics - correct outcome measures, data quality indicators, guardrails that prevent harm to the business, and a comprehensive set of supporting metrics to understand the “why” behind key metric movements - is the #1 challenge practitioners face when trying to scale their experimentation programs. On the technical side, improving the sensitivity of experiment metrics is a hard problem and an active research area, with large practical implications as more and more small and medium-sized businesses adopt A/B testing and suffer from insufficient statistical power. In this tutorial we will discuss challenges, best practices, and pitfalls in evaluating experiment results, covering lessons learned and practical guidelines as well as open research questions.
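To make the power issue above concrete, here is a minimal, hypothetical Python sketch (not part of the tutorial material): it evaluates a single conversion metric with a two-sample t-test and then uses the standard normal-approximation formula to estimate how many users per variant are needed to detect a small lift. All numbers (10% baseline conversion, a 0.5 percentage point lift, alpha = 0.05, power = 0.8) are illustrative assumptions, not figures from the tutorial.

    # Minimal sketch (hypothetical numbers): evaluating one A/B metric and
    # estimating the sample size needed for adequate statistical power.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical per-user conversion outcomes for control (A) and treatment (B).
    control = rng.binomial(1, 0.100, size=20_000)    # baseline conversion ~10%
    treatment = rng.binomial(1, 0.105, size=20_000)  # +0.5 pp absolute lift

    # Two-sample t-test on the metric (Welch's variant, no equal-variance assumption).
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"observed lift = {treatment.mean() - control.mean():.4f}, p = {p_value:.3f}")

    # Normal-approximation sample size per variant to detect an absolute lift
    # `delta` on a Bernoulli metric with the given significance level and power.
    def required_n_per_variant(p_baseline, delta, alpha=0.05, power=0.8):
        z_alpha = stats.norm.ppf(1 - alpha / 2)
        z_beta = stats.norm.ppf(power)
        p_bar = p_baseline + delta / 2
        variance = 2 * p_bar * (1 - p_bar)  # pooled Bernoulli variance (approximation)
        return int(np.ceil(variance * (z_alpha + z_beta) ** 2 / delta ** 2))

    print("users per variant for 80% power:",
          required_n_per_variant(p_baseline=0.10, delta=0.005))

Under these assumptions the formula asks for roughly 58,000 users per variant, far more than the 20,000 simulated above and more than many small and medium-sized businesses can collect in a reasonable time window, which is exactly why techniques for improving metric sensitivity matter.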

Presenters:

  • Xiaolin Shi, Snap Inc.
  • Somit Gupta, Microsoft
  • Pavel Dmitriev, Outreach
  • Xin Fu, Facebook


Time:

  • Sunday, August 4, 2019
  • 8:00 AM - 12:00 PM

Location:

  • Summit 9 - Ground Level, Egan Center


Slides: