Technical Assistance
Learning Engineering Virtual Institute - Trialing Hub
As the Trialing Hub for the Learning Engineering Virtual Institute (LEVI), we have reviewed and enhanced project planning and reporting documents to ensure clarity, precision, and alignment with project goals. We created dataset templates and metadata descriptions so that data is accessible, understandable, and usable. We integrate feedback and revise synthesis documents as new insights and findings emerge, maintaining common understanding and responsive guidelines. We have synthesized research studies, compiled metadata, and generated summary reports that offer a holistic view of completed studies, and we have developed frameworks that categorize studies by research design and outcome variables, providing insight into project impacts. We also coordinate with the other research hubs to support LEVI's overarching mission of doubling math learning among critical student populations.
AIMS Collaboratory
The AIMS Collaboratory is a community of practice in which the R&D networks share findings about the products, instructional strategies, and professional development supports they are developing, testing, and refining. We support the ongoing coordination of the data collected across the community: helping teams understand the impact of the data they generate, conducting new analyses across projects, and planning the services external researchers will need to carry out new analyses with AIMS teams, both on their own and through subsequent grantmaking.
AI Infrastructure Technical Community of Practice (TCoP)
The Math AI Infrastructure Technical Community of Practice (TCoP) showcases work from nine research and development teams advancing the use of AI in math education to improve equitable outcomes for students. These projects are producing reusable datasets, benchmarks, evaluation tools, and public data challenges to support the broader education and AI research communities. LDI's role is to coordinate common activities across the projects, ensure that education technology stakeholder requirements are represented, and identify lessons learned across teams.
Digital Learning Platforms (DLP) Research Catalog
Our team developed the AIMS Digital Learning Platform Research Catalog (http://www.dlpcatalog.org), which helps AIMS R&D teams raise awareness of the datasets and research platforms created through their projects. The Catalog lets teams provide transparency into their research assets (with information about available datasets, not the data itself) and helps prospective research partners communicate with designated platform team members. Its main goals are to (1) facilitate research connections, with an emphasis on early career researchers from priority populations, and (2) reduce the cost and time needed to conduct research and test new ideas.
Generative Artificial Intelligence (GenAI) Evidence Hub for Assessment
The GenAI Evidence Hub for Assessment will build a measurement-science framework and apply it to published studies of generative-AI tools used in educational assessment. The framework targets three psychometric properties—validity, reliability, and fairness—and will be used to examine more than 250 papers in the following domains: item generation, quality assurance, formative feedback, automated scoring, and multimodal applications.
Research
Whitmer, J., Pedro, S. S., Liu, R., Walton, K. E., Moore, J. L., & Lotero, A. A. (2019). The constructs behind the clicks. ACT Research Report, 26.
Whitmer, J., Nasiatka, D., & Harfield, T. (2018). Do students notice notifications? Large-scale analysis of student interactions with automated messages about grade performance. Proceedings of the 8th International Conference on Learning Analytics and Knowledge.
Whitmer, J. (n.d.). The impact of student-facing LMS dashboards. Blackboard.
Dietrichson, A., Forteza, D., & Whitmer, J. (2019). Meta-predictive retention risk modeling: Risk model readiness assessment at scale with X-Ray Learning Analytics. In LALA (pp. 79-88).
_________________________________________________________________________________________________________________________________________________
Andres, J. M. A. L., Hutt, S., Ocumpaugh, J., Baker, R. S., Nasiar, N., & Porter, C. (2021). How anxiety affects affect: A quantitative ethnographic investigation using affect detectors and data-targeted interviews. In Proceedings of the 3rd International Conference on Quantitative Ethnography.
Andres, J. M., Baker, R. S., Hutt, S. J., Mills, C., Zhang, J., Rhodes, S., & DePiro, A. (2023). Anxiety, achievement, and self-regulated learning in CueThink. In Proceedings of the 17th International Conference of the Learning Sciences-ICLS 2023, pp. 258-265.
Ocumpaugh, J., Hutt, S., Andres, J. M. A. L., Baker, R. S., Biswas, G., Bosch, N., Paquette, L. & Munshi, A. (2021). Using Qualitative Data from Targeted Interviews to Inform Rapid AIED Development. In Proceedings of the 29th International Conference on Computers in Education.
Andres, J. M. A. L. (2023). Finding the Panic Button: Contextualizing Anxiety Within Interactive Learning Environments (Doctoral dissertation, University of Pennsylvania).
_________________________________________________________________________________________________________________________________________________
Li, W. & Brooks, C. (2024, July). Influence on Judgements of Learning Given Perceived AI Annotations. In Proceedings of the 10th ACM Conference on Learning@Scale (pp. 221-231).
Singh, A., Brooks, C., Wang, X., Li, W., Kim, J., & Wilson, D. (2024, March). Bridging Learnersourcing and AI: Exploring the Dynamics of Student-AI Collaborative Feedback Generation. In Proceedings of the 14th International Learning Analytics and Knowledge Conference (pp. 742-748).
Li, W., Sun, K., Schaub, F., & Brooks, C. (2022). Disparities in Students’ Propensity to Consent to Learning Analytics. International Journal of Artificial Intelligence in Education, 32(3), 564-608.
Li, W., Brooks, C., & Schaub, F. (2019, March). The Impact of Student Opt-Out on Educational Predictive Models. In Proceedings of the 9th International Learning Analytics and Knowledge Conference (pp. 411-420).