Background/Call for Papers
There is no simple path that will take us immediately from the contemporary amateurism of the college to the professional design of learning environments and learning experiences. The most important step is to find a place on campus for a team of individuals who are professionals in the design of learning environments — learning engineers, if you will. - Herbert Simon [1]
Learning engineering adds tools and processes to learning platforms to support improvement research [2]. One kind of tool is A/B testing [3], which is common in large software companies and is also represented academically at conferences like the Annual Conference on Digital Experimentation (CODE). A number of A/B testing systems focused on educational applications have arisen recently, including UpGrade [4] and E-TRIALS [5]. A/B testing can be part of the puzzle of how to improve educational platforms, yet challenging issues in education go beyond the generic paradigm. For example, because teachers and instructors are central to learning, students connect with software not only as individuals but also as part of a shared classroom experience. Further, learning in topics like mathematics can be highly dependent on prior learning, so A or B may not be better overall, but only in interaction with prior knowledge [6]. In response, several learning platforms are opening their systems to improvement research by instructors and/or third-party researchers, with the supports necessary for education-specific research designs. This workshop will explore how A/B testing in educational contexts differs, how learning platforms are opening up new possibilities, and how these empirical approaches can be used to drive powerful gains in student learning. It will also discuss forthcoming opportunities for funding to conduct platform-enabled learning research.
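To make the prior-knowledge point concrete, here is a minimal sketch, in Python, of the kind of treatment-by-covariate interaction analysis such a finding calls for. The file and column names (ab_results.csv, condition, prior_score, post_score) are hypothetical illustrations, not the export format or analysis pipeline of any particular platform.

```python
# Minimal sketch of a treatment-by-prior-knowledge interaction analysis.
# The file and column names are hypothetical; adapt them to a platform's
# actual data export.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ab_results.csv")  # one row per student

# Fit post_score on the condition main effect, the prior-knowledge main
# effect, and their interaction. A reliable interaction term indicates
# that neither A nor B is better overall: which condition helps depends
# on a student's prior knowledge.
model = smf.ols("post_score ~ C(condition) * prior_score", data=df).fit()
print(model.summary())
```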
For 2025, we are inviting two submission types:
Papers: 5-10 page PDFs in CHI / ACM format (Word, LaTeX, Overleaf). References are not included in the page limit.
Work-in-Progress & Demos: PDFs of up to 4 pages in CHI / ACM format (Word, LaTeX, Overleaf). References are not included in the page limit.
Papers and demos may address issues in conducting A/B testing on learning engineering platforms, including topics such as:
The role of A/B testing systems in complying with SEER principles (https://ies.ed.gov/seer/), which set a high bar for the goals of empirical studies of educational improvement
Awareness of opportunities to use existing learning platforms to conduct research (http://seernet.org)
Managing unit-of-assignment issues, such as those that arise when students are in classrooms with a shared teacher (see the classroom-level assignment sketch after this list)
Practical considerations related to experimenting in school settings, MOOCs, & other contexts
Ethical, data security, and privacy issues
Relating experimental results to learning-science principles
Understanding use cases (core, supplemental, in-school, out-of-school, etc.)
Accounting for interactions between the intended contrast (A vs. B) and learners’ prior knowledge, aptitudes, background, or other important variables
A/B testing within adaptive software
Adaptive experimentation (see the bandit sketch after this list)
Attrition and dropout
Stopping criteria
User experience issues
Educator involvement and public perceptions of experimentation
Balancing practical improvements with open and generalizable science
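For the unit-of-assignment item above, here is a minimal sketch of classroom-level (cluster) random assignment, in which whole classrooms rather than individual students are assigned to A or B, so that classmates share a condition. The identifiers and the even split between arms are illustrative assumptions; a real study would also plan for a cluster-aware analysis.

```python
# Minimal sketch of cluster (classroom-level) random assignment.
# Identifiers and the 50/50 split across classrooms are assumptions.
import random

def assign_by_classroom(student_to_classroom, seed=0):
    """Assign every classroom (not every student) to A or B,
    so students who share a teacher see the same condition."""
    rng = random.Random(seed)
    classrooms = sorted(set(student_to_classroom.values()))
    rng.shuffle(classrooms)
    half = len(classrooms) // 2
    condition = {c: ("A" if i < half else "B")
                 for i, c in enumerate(classrooms)}
    return {s: condition[c] for s, c in student_to_classroom.items()}

students = {"s1": "room1", "s2": "room1", "s3": "room2", "s4": "room2"}
print(assign_by_classroom(students))  # s1 and s2 always match; so do s3 and s4
```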
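For the adaptive experimentation item, here is a minimal Thompson-sampling sketch, assuming a binary per-student outcome (e.g., problem solved or not solved). Real adaptive experiments add stopping criteria and guardrails; this shows only the core assign-observe-update loop.

```python
# Minimal Thompson-sampling sketch for adaptive A/B assignment.
# Beta-Bernoulli rewards are an illustrative assumption, not a claim
# about how any particular platform implements adaptivity.
import random

class ThompsonAB:
    def __init__(self):
        # Success/failure counts per arm, starting from Beta(1, 1) priors.
        self.wins = {"A": 1, "B": 1}
        self.losses = {"A": 1, "B": 1}

    def choose(self):
        # Sample a plausible success rate for each arm; show the higher draw.
        draws = {arm: random.betavariate(self.wins[arm], self.losses[arm])
                 for arm in ("A", "B")}
        return max(draws, key=draws.get)

    def update(self, arm, success):
        if success:
            self.wins[arm] += 1
        else:
            self.losses[arm] += 1

bandit = ThompsonAB()
arm = bandit.choose()             # condition shown to the next student
bandit.update(arm, success=True)  # record the observed outcome
```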
Workshops on this topic in 2020-2024 were very successful, drawing some of the highest registrations of any workshops at the conference. We welcome researchers and practitioners with practical or theoretical experience in running A/B tests or randomized trials, as well as in platform-enabled learning research. This may include researchers with backgrounds in learning science, computer science, economics, and/or statistics.
References
[1] Simon, H. A. (1967). The job of a college president. Educational Record, 48, 68-78.
[2] Uncapher, M. R. (2018). From the science of learning (and development) to learning engineering. Applied Developmental Science. https://doi.org/10.1080/10888691.2017.1421437
[3] Kohavi, R., Deng, A., Frasca, B., Walker, T., Xu, Y., & Pohlmann, N. (2013). Online controlled experiments at large scale. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1168-1176).
[4] Ritter, S., Murphy, A., Fancsali, S., Lomas, D., Fitkariwala, V., & Patel, N. (2020). UpGrade: An open source tool to support A/B testing in educational software. L@S Workshop on A/B Testing at Scale.
[5] Ostrow, K. S., Heffernan, N. T., & Williams, J. J. (2017). Tomorrow’s EdTech today: Establishing a learning platform as a collaborative research tool for sound science. Teachers College Record, 119(3), 1-36.
[6] Fyfe, E. R. (2016). Providing feedback on computer-based homework in middle-school classrooms. Computers in Human Behavior, 63, 568-574. https://doi.org/10.1016/j.chb.2016.05.082