A/B Testing and Platform-Enabled Learning Research (PELE)
The Fifth Annual Workshop at Learning @ Scale 2024
July 20, 2024, 9am-5pm
Global Learning Center, Georgia Tech, Atlanta, GA, USA
Submission URL: https://easychair.org/my/conference?conf=pele2024
Submission Types:
Long papers: 8-10 page PDFs in CHI / ACM format (Word, LaTeX, Overleaf). References are not included in the page limit.
Work-in-Progress & Demo papers: up to 4 page PDFs in CHI / ACM format (Word, LaTeX, Overleaf). References are not included in the page limit.
Submission URL: https://easychair.org/my/conference?conf=pele2024
Submission Deadline: EXTENDED! The updated deadline is June 12, 2024 (previously June 7, 2024).
Keynote Speakers
Cristina Zepeda, Assistant Professor, Psychology and Human Development, Vanderbilt University
Theory to Scalable Educational Practice: Leveraging A/B Testing and Digital Learning Platforms to Improve Learning
For decades, the learning sciences have advanced our understanding of how people learn. However, translating these insights into practical improvements in authentic educational contexts remains a challenge. Recently, the emergence of platform-enabled learning research has opened new avenues for conducting rigorous, scalable research to bridge this gap. Drawing from her experiences, Dr. Zepeda will provide a roadmap for how to productively leverage A/B testing and digital learning platforms to drive theory-grounded improvements at scale. Along the way, she will highlight key opportunities, pitfalls, and strategies for overcoming challenges.
Avery H. Closser, Postdoctoral Associate, Human Development and Family Science, Purdue University
Scaling Platform-Enabled Learning Research: From Building Capacity to Building Community
Many digital learning platforms have research infrastructures that enable A/B testing to study student cognition and improve the design of learning materials and environments. Over the past several years, multiple funding opportunities have incentivized improving these infrastructures as a necessary foundation for learning engineering research. Looking ahead, the next step is to shift from building capacity within digital learning platforms to building community and collaboration with external researchers. In this keynote, Dr. Closser will share reflections on A/B testing and considerations for increasing participation in platform-enabled learning research.
Schedule
9:00-10:00 Introduction and Keynote 1: Theory to Scalable Educational Practice: Leveraging A/B Testing and Digital Learning Platforms to Improve Learning (Dr. Cristina Zepeda)
10:00-10:30 Digital Learning Platform (DLP) Catalog Presentation: Warren Li
10:30-10:45 Break
10:45-12:00 Accepted paper presentations
Using a Platform to Run an Experiment Outside the Platform [PDF]
Benjamin Motz, Harmony Jankowski, Jennifer Lopatin, Waverly Tseng and Tamara Tate.
User-Centered Design to Democratize Research Experiments on Digital Learning Platforms [PDF]
Marshall An, Jessica Fortunato, Ilya Musabirov, Mohi Reza, Norman Bier, Joseph Jay Williams and John Stamper.
WAPTS: An Allocation Probability Adjusted Thompson Sampling Algorithm for Learner Sourcing Educational Setting
Haochen Song, Ilya Musabirov, Bingcheng Wang, Pan Chen, Ananya Bhattacharjee, Audrey Durand, Anna Rafferty and Joseph Williams.
12:00-1:00 Break for Lunch
1:00-1:45 Keynote 2: Scaling Platform-Enabled Learning Research: From Building Capacity to Building Community (Dr. Avery Closser)
1:45-2:45 Accepted paper presentations
Leveraging the E-TRIALS Platform to Advance the Science of Learning Through Randomized Controlled Trials: An Update [PDF]
Neil Heffernan, Cristina Heffernan and Li Cheng.
Augmenting an Open-source Adaptive Tutoring System (OATutor) with Large-Scale Experimentation Management (UpGrade) [PDF]
Shreya Bhandari, April Murphy, Zachary Pardos, and Benjamin Blanchard.
Adaptive Experimentation in Learning Engineering: Communication and Choice Strategies. A Study Design
Ilya Musabirov, Vsevolod Suschevskiy and Haochen Song.
2:45-3:00 Break
3:00-3:30 Invited Paper: An Integrated Platform for Studying Learning with Intelligent Tutoring Systems: CTAT+TutorShop [PDF]
Vincent Aleven, Conrad Borchers, Yun Huang, Tomohiro Nagashima, Bruce McLaren, Paulo Carvalho, Octav Popescu, Jonathan Sewall and Kenneth Koedinger.
3:30-4:30 Discussion/"matchmaking" session: implementing research with DLPs
4:30-5:00 Closing remarks
Accepted Papers
Using a Platform to Run an Experiment Outside the Platform - Benjamin Motz, Harmony Jankowski, Jennifer Lopatin, Waverly Tseng and Tamara Tate [PDF]
Adaptive Experimentation in Learning Engineering: Communication and Choice Strategies. A Study Design - Ilya Musabirov, Vsevolod Suschevskiy and Haochen Song
Augmenting an Open-source Adaptive Tutoring System (OATutor) with Large-Scale Experimentation Management (UpGrade) - Shreya Bhandari, Zachary Pardos, April Murphy and Benjamin Blanchard [PDF]
User-Centered Design to Democratize Research Experiments on Digital Learning Platforms - Marshall An, Jessica Fortunato, Ilya Musabirov, Mohi Reza, Norman Bier, Joseph Jay Williams and John Stamper [PDF]
WAPTS: An Allocation Probability Adjusted Thompson Sampling Algorithm for Learner Sourcing Educational Setting - Haochen Song, Ilya Musabirov, Bingcheng Wang, Pan Chen, Ananya Bhattacharjee, Audrey Durand, Anna Rafferty and Joseph Williams
Leveraging the E-TRIALS Platform to Advance the Science of Learning Through Randomized Controlled Trials: An Update - Neil Heffernan, Cristina Heffernan and Li Cheng [PDF]
An Integrated Platform for Studying Learning with Intelligent Tutoring Systems: CTAT+TutorShop - Vincent Aleven, Conrad Borchers, Yun Huang, Tomohiro Nagashima, Bruce McLaren, Paulo Carvalho, Octav Popescu, Jonathan Sewall and Kenneth Koedinger [PDF]
Workshop Description
There is no simple path that will take us immediately from the contemporary amateurism of the college to the professional design of learning environments and learning experiences. The most important step is to find a place on campus for a team of individuals who are professionals in the design of learning environments — learning engineers, if you will. - Herbert Simon [1]
Learning engineering adds tools and processes to learning platforms to support improvement research [2]. One kind of tool is A/B testing [3], which is common in large software companies and is also represented academically at conferences like the Annual Conference on Digital Experimentation (CODE). A number of A/B testing systems focused on educational applications have arisen recently, including UpGrade [4] and E-TRIALS [5]. A/B testing can be part of the puzzle of how to improve educational platforms, and yet challenging issues in education go beyond the generic paradigm. For example, the importance of teachers and instructors to learning means that students are not only connecting with software as individuals, but also as part of a shared classroom experience. Further, learning in topics like mathematics can be highly dependent on prior learning, and thus A or B may not be better overall, but only in interaction with prior knowledge [6]. In response, a number of learning platforms are opening their systems to improvement research by instructors and/or third-party researchers, with the specific supports necessary for education-specific research designs.

This workshop will explore how A/B testing in educational contexts is different, how learning platforms are opening up new possibilities, and how these empirical approaches can be used to drive powerful gains in student learning. It will also discuss forthcoming opportunities for funding to conduct platform-enabled learning research.
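To make these two complications concrete, here is a minimal, illustrative Python sketch (our own construction, not drawn from any cited system; the effect sizes, sample sizes, and variable names are assumptions). It randomizes at the classroom level, since classmates share a teacher, and simulates a contrast where condition B helps only students with low prior knowledge, so the average A-vs-B effect is near zero while the interaction is not:

```python
# Illustrative only: cluster randomization plus an aptitude-treatment
# interaction, with all numbers chosen for demonstration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classrooms, students_per_class = 40, 25
n = n_classrooms * students_per_class

# Assign treatment at the classroom level: classmates share a teacher,
# so the classroom, not the student, is the unit of assignment.
classroom = np.repeat(np.arange(n_classrooms), students_per_class)
treat = np.repeat(rng.integers(0, 2, n_classrooms), students_per_class)

prior = rng.normal(0.0, 1.0, n)  # prior knowledge (standardized)
class_effect = np.repeat(rng.normal(0.0, 0.5, n_classrooms), students_per_class)

# B is better only for low-prior students: no main effect, a -0.5 interaction.
score = 0.3 * prior + treat * (-0.5 * prior) + class_effect + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"score": score, "treat": treat,
                   "prior": prior, "classroom": classroom})

# Cluster-robust standard errors respect the classroom-level assignment;
# naive student-level errors would overstate precision.
fit = smf.ols("score ~ treat * prior", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["classroom"]})
print(fit.summary().tables[1])
```

A student-level analysis without the cluster correction and the treatment-by-prior-knowledge term would overstate precision and conclude that A and B are equivalent, missing the interaction that matters for low-prior-knowledge learners.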
For 2024, we are inviting two submission types:
Long papers: 8-10 page PDFs in CHI / ACM format (Word, LaTeX, Overleaf). References are not included in the page limit.
Work-in-Progress & Demo papers: up to 4 page PDFs in CHI / ACM format (Word, LaTeX, Overleaf). References are not included in the page limit.
Papers and demos may address issues with conducting A/B testing and learning engineering platforms, including topics such as:
The role of A/B testing systems in complying with SEER principles (https://ies.ed.gov/seer/), which set a high bar for the goals of empirical studies of educational improvement
Awareness of opportunities to use existing learning platforms to conduct research (http://seernet.org)
Managing unit-of-assignment issues, such as those that arise when students are in classrooms with a shared teacher
Practical considerations related to experimenting in school settings, MOOCs, & other contexts
Ethical, data security, and privacy issues
Relating experimental results to learning-science principles
Understanding use cases (core, supplemental, in-school, out-of-school, etc.)
Accounting for interactions between the intended contrast (A vs. B) and learners’ prior knowledge, aptitudes, background or other important variables
A/B testing within adaptive software
Adaptive experimentation (see the sketch after this list)
Attrition and dropout
Stopping criteria
User experience issues
Educator involvement and public perceptions of experimentation
Balancing practical improvements with open and generalizable science
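Several of the topics above, adaptive experimentation and stopping criteria in particular, lend themselves to a small worked example. The sketch below is a generic illustration (not code from WAPTS or any other submission; the success rates and the 0.99 posterior threshold are made-up assumptions): Beta-Bernoulli Thompson sampling assigns each new learner to the condition favored by a posterior draw, and the run halts once the posterior probability that one condition is better becomes decisive.

```python
# Illustrative only: two-arm Beta-Bernoulli Thompson sampling with a
# simple posterior-probability stopping rule.
import numpy as np

rng = np.random.default_rng(1)
true_rates = {"A": 0.55, "B": 0.62}  # hypothetical rates, unknown to the experimenter
alpha = {"A": 1.0, "B": 1.0}         # Beta(1, 1) priors on each arm's success rate
beta = {"A": 1.0, "B": 1.0}

for student in range(5000):
    # Thompson sampling: draw once from each posterior and assign the
    # student to the arm whose draw is highest.
    draws = {arm: rng.beta(alpha[arm], beta[arm]) for arm in true_rates}
    arm = max(draws, key=draws.get)

    success = rng.random() < true_rates[arm]  # simulated learner outcome
    alpha[arm] += success
    beta[arm] += 1 - success

    # Stopping criterion: halt once the posterior probability that B
    # beats A is decisive in either direction.
    p_b_better = (rng.beta(alpha["B"], beta["B"], 2000) >
                  rng.beta(alpha["A"], beta["A"], 2000)).mean()
    if p_b_better > 0.99 or p_b_better < 0.01:
        break

print(f"Stopped after {student + 1} students; P(B > A) = {p_b_better:.3f}")
```

In practice, allocation probabilities and stopping rules need more care than this (for example, floors on assignment probabilities to preserve statistical power), which is the kind of issue the WAPTS paper above takes up.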
The 2020-2023 workshops on this topic were very successful, drawing some of the highest registration numbers of any workshop at the conference. We welcome participation from researchers and practitioners with practical or theoretical experience in running A/B tests and/or randomized trials, as well as in platform-enabled learning research. This may include researchers with backgrounds in learning science, computer science, economics, and/or statistics.
References
[1] Simon, H. A. (1967). The job of a college president. Educational Record, 48, 68-78.
[2] Uncapher, M. R. (2018). From the science of learning (and development) to learning engineering. Applied Developmental Science. https://doi.org/10.1080/10888691.2017.1421437
[3] Kohavi, R., Deng, A., Frasca, B., Walker, T., Xu, Y., & Pohlmann, N. (2013). Online controlled experiments at large scale. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1168-1176).
[4] Ritter, S., Murphy, A., Fancsali, S., Lomas, D., Fitkariwala, V., & Patel, N. (2020). UpGrade: An open source tool to support A/B testing in educational software. L@S Workshop on A/B Testing at Scale.
[5] Ostrow, K. S., Heffernan, N. T., & Williams, J. J. (2017). Tomorrow's EdTech Today: Establishing a Learning Platform as a Collaborative Research Tool for Sound Science. Teachers College Record, 119(3), 1-36.
[6] Fyfe, E. R. (2016). Providing feedback on computer-based homework in middle-school classrooms. Computers in Human Behavior, 63, 568-574. https://doi.org/10.1016/j.chb.2016.05.082
Previous Workshops
Fourth Annual Workshop on A/B Testing and Platform-Enabled Learning Research (L@S2023)
Third Annual Workshop on A/B Testing and Platform-Enabled Learning Research (L@S2022)
Second Workshop on Educational A/B Testing at Scale (L@S2021)
First Workshop on Educational A/B Testing at Scale [Proceedings] (L@S2020)
Organizers
Steve Ritter, Carnegie Learning
Stephen Fancsali, Carnegie Learning
April Murphy, Carnegie Learning
Neil Heffernan, Worcester Polytechnic Institute
Joseph Jay Williams, University of Toronto
Jeremy Roschelle, Digital Promise
Ben Motz, Indiana University
Danielle McNamara, Arizona State University
Debshila Basu Mallick, Rice University