Educational A/B Testing at Scale

The Second Virtual Workshop at Learning @ Scale 2021

June 22, 2021

16:00-19:00 CEST (Central European Summer Time)
(10:00am - 1:00pm ET, 7:00am - 10:00am PT)

Zoom (link below)

Submission URL (see details below): https://easychair.org/conferences/?conf=eduabtestatscale2021

Date, Time, & Zoom Link

June 22, 2021, 16:00 - 19:00 CEST (Central European Summer Time); 10:00am - 1:00pm ET / 7:00am - 10:00am PT in the USA

Zoom Link: https://zoom.us/j/96408696710?pwd=OVJCVnRHQmVrbWUxRlVzQkVsTys5QT09

Register here for Learning @ Scale 2021 (including workshops).

Agenda & Accepted Papers

10:00 ET/16:00 CEST – Welcome

10:05/16:05 – Paper: Savi et al., Adaptive Learning Systems and Interference in Causal Inference

10:25/16:25 – Paper: Murphy et al., A Progress Report & Roadmap for A/B Testing at Scale with UpGrade

10:45/16:45 – Paper: Lomas et al., ReSource: A proposed lightweight content management system to facilitate A/B tests in education software

11:05/17:05 – Heffernan, E-TRIALS

11:15/17:15 – general discussion

11:30/17:30 – break

11:45/17:45 – breakout groups

12:15/18:15 – report-outs from breakout groups

12:45/18:45 – general discussion


Call for Papers / Workshop Background

There is no simple path that will take us immediately from the contemporary amateurism of the college to the professional design of learning environments and learning experiences. The most important step is to find a place on campus for a team of individuals who are professionals in the design of learning environments — learning engineers, if you will. - Herbert Simon [1]

The emerging discipline of Learning Engineering focuses on putting in place tools and processes that use the science of learning as a basis for improving educational outcomes [2]. An important part of Learning Engineering is improving the effectiveness of educational software. In many software domains, A/B testing has become a prominent technique for improving software toward its goals [3], but educational software tends to lag behind other fields in the use of A/B testing, particularly at scale. This workshop will explore the ways in which A/B testing in educational contexts differs from other domains, along with proposals to overcome the challenges those differences create, so that A/B testing can become a more useful tool in the learning engineer’s toolbox.
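To make the setting concrete, here is a minimal, hypothetical sketch (in Python, not drawn from any particular system discussed at the workshop) of how educational software might deterministically assign an experimental unit to an A/B condition; the function name, experiment name, and IDs are invented for illustration.

    import hashlib

    def assign_condition(unit_id, experiment, conditions):
        """Deterministically map a unit (e.g., a student or classroom ID) to a condition.

        Hashing the unit ID together with the experiment name keeps the
        assignment stable across sessions and independent across experiments.
        """
        digest = hashlib.sha256(f"{experiment}:{unit_id}".encode()).hexdigest()
        return conditions[int(digest, 16) % len(conditions)]

    # Assigning at the classroom level (rather than per student) is one way to
    # limit interference between students who work together on the same material.
    print(assign_condition("class-104", "hints-vs-worked-examples", ["A", "B"]))

The choice of which identifier to hash (student, classroom, or school) is exactly the kind of unit-of-assignment question raised in the topic list below.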

We invite papers (up to 4 pages in CHI Proceedings format) on issues with conducting A/B tests and random-assignment experiments at scale, including:

  • managing unit of assignment issues

  • measurement, including both short- and long-term outcomes

  • practical considerations related to experimenting in school settings, MOOCs, & other contexts

  • ethical and privacy issues

  • relating experimental results to learning-science principles

  • understanding use cases (core, supplemental, in-school, out-of-school, etc.)

  • accounting for aptitude-treatment interactions

  • A/B testing within adaptive software

  • adaptive experimentation

  • attrition and dropout

  • stopping criteria

  • user experience issues

  • educator involvement and public perceptions of experimentation

  • balancing practical improvements with generalizable science

We welcome participation from researchers and practitioners who have either practical or theoretical experience related to running A/B tests and/or randomized trials. This may include researchers with backgrounds in learning science, computer science, economics, and/or statistics.

The submission deadline is June 11, 2021.

References

[1] Herbert A. Simon. 1967. The job of a college president. Educational Record 48, 68-78.

[2] Melina R. Uncapher. 2018. From the science of learning (and development) to learning engineering. Applied Developmental Science. https://doi.org/10.1080/10888691.2017.1421437

[3] Ron Kohavi, Alex Deng, Brian Frasca, Toby Walker, Ya Xu, and Nils Pohlmann. 2013. Online controlled experiments at large scale. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1168-1176.

Submission Details

Submission Type: 4-page PDF in CHI / ACM format (Word, LaTeX, or Overleaf). References are not included in the page limit.

Submission URL: https://easychair.org/conferences/?conf=eduabtestatscale2021

Submission Deadline: June 11, 2021

Organizers

  • Steve Ritter, Carnegie Learning

  • Neil Heffernan, Worcester Polytechnic Institute

  • Joseph Jay Williams, University of Toronto

  • Klinton Bicknell, Duolingo

  • Derek Lomas, Delft University of Technology

Program Committee

  • Klinton Bicknell, Duolingo

  • Ryan Emberling, ASSISTments Foundation

  • Stephen Fancsali, Carnegie Learning

  • Derek Lomas, Delft University of Technology

  • April Murphy, Carnegie Learning

  • Korinn Ostrow, Worcester Polytechnic Institute

  • Nirmal Patel, PlayPower Labs

  • Steve Ritter, Carnegie Learning