LAK22 Assess

Workshop on Learning Analytics and Assessment

MARCH 21, 2022
12:00 PM – 3:00 PM (PDT)
VIRTUAL SPACE

ACCEPTED ABSTRACTS

Presentation 1

Project Title: Leveraging Online Formative Assessments to Enhance Predictive Learning Analytics Models

Authors: Okan Bulut, University of Alberta; Seyma Nur Yildirim-Erbasli, University of Alberta; Guher Gorgun, University of Alberta; Yizhu Gao, University of Alberta; Tarid Wongvorachan, University of Alberta; Ka Wing Lai, University of Alberta; Jinnie Shin, University of Florida

Abstract: For several decades, researchers have advocated the use of formative assessments to gauge student learning, identify students’ unique learning needs, and adjust instructional approaches. As universities around the world have begun to use learning management systems (LMS), more learning data have become available to gain deeper insights into students’ learning processes and make data-driven decisions to improve student learning. With the availability of rich data extracted from LMS, researchers have turned much of their attention to learning analytics using data mining techniques. To date, various learning analytics models have been developed to analyze and forecast student achievement in face-to-face, online, and hybrid university courses. These models often involve event logs, clickstream data, timestamps of different learning activities, and assessment results to predict future learner outcomes. In this study, we propose to use data extracted from online formative assessments as a starting point for building predictive learning analytics models. Using LMS data from multiple offerings of an asynchronous undergraduate course at a western Canadian university, we analyzed the utility of online formative assessments and other learning activities in predicting final course grades. Our initial findings show that the variables extracted from online formative assessments (e.g., completion, timestamps, and grades) serve as important predictors of student performance in online settings. Using formative assessments as a foundation yielded a predictive model that can be further enhanced with additional variables based on student-content interactions. Our findings emphasize the need to use online formative assessments to build more effective learning analytics models.
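
As a concrete illustration of the kind of pipeline described above, the following is a minimal sketch (not the authors' code) of how features extracted from online formative assessments might be used to predict final course grades. The file name, column names, and the choice of a random forest are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): predicting final course grades
# from formative-assessment features extracted from LMS logs.
# The file and column names (quiz_score, days_before_deadline, ...) are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

logs = pd.read_csv("lms_formative_assessment_logs.csv")  # hypothetical LMS export

# Aggregate per-student features from online formative assessments.
features = logs.groupby("student_id").agg(
    quizzes_completed=("quiz_id", "nunique"),
    avg_quiz_score=("quiz_score", "mean"),
    avg_days_before_deadline=("days_before_deadline", "mean"),
)
grades = logs.groupby("student_id")["final_grade"].first()

# A simple predictive model; the abstract does not prescribe a specific algorithm.
model = RandomForestRegressor(n_estimators=200, random_state=0)
r2_scores = cross_val_score(model, features, grades, cv=5, scoring="r2")
print(f"Cross-validated R^2: {r2_scores.mean():.2f}")
```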

Presentation 2

Project Title: Enhancing Assessment through Sound Learning Design

Authors: Blaženka Divjak, University of Zagreb; Barbi Svetec, University of Zagreb

Abstract: The Balanced Learning Design Planning (BDP) concept and tool build on existing learning design (LD) concepts and tools, implementing contemporary research findings and theory and providing support for innovative pedagogies. The concept and tool are strongly based on intended learning outcomes (LOs), aiming to ensure their alignment at the study program and course levels. Great emphasis is placed on constructive alignment between course LOs, teaching and learning activities (TLAs), and assessment. The BDP concept and tool therefore include innovative functionalities supporting the development of meaningful assessment, which can both assess and enhance student learning. Importantly, the focus is on ensuring assessment validity by assigning relative weights to study program and course LOs, which we propose doing with multi-criteria decision-making methods. At the course level, the relative weights of LOs can be distributed among the chosen assessment tasks. The tool supports detailed assessment planning related to particular TLAs and provides curriculum analytics that can enhance planning while respecting the intended pedagogical approach. It provides analytics on the coverage of intended LOs across particular topics and the corresponding assessment tasks, including formative and summative assessment. It also shows the division of relative assessment weights in relation to the weights assigned to particular course LOs. The current step in the tool's development is its integration with the LMS, which will provide actual implementation data for further learning analytics. The BDP concept and tool are being used in the development of several courses, including one MOOC, which will be presented as an example.
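
To illustrate the weighting idea, here is a hypothetical sketch of how relative LO weights, e.g. obtained with a multi-criteria decision-making method such as AHP, might be distributed across assessment tasks. The weights, task names, and coverage shares are invented for illustration and do not come from the BDP tool.

```python
# Hypothetical sketch of distributing course LO weights across assessment tasks.
# The weights and coverage shares below are invented for illustration, e.g. as they
# might result from a multi-criteria decision-making method such as AHP.
import numpy as np

lo_weights = {"LO1": 0.5, "LO2": 0.3, "LO3": 0.2}  # relative weights, sum to 1

# Share of each LO assessed by each task (rows: LOs, columns: tasks); hypothetical.
tasks = ["Quiz", "Project", "Exam"]
coverage = np.array([
    [0.2, 0.3, 0.5],   # LO1 assessed mostly in the exam
    [0.0, 0.7, 0.3],   # LO2 assessed mostly in the project
    [0.5, 0.5, 0.0],   # LO3 not assessed in the exam
])

# Each task's overall assessment weight = sum over LOs of (LO weight x coverage share).
task_weights = np.array(list(lo_weights.values())) @ coverage
for task, w in zip(tasks, task_weights):
    print(f"{task}: {w:.2f}")
# Because each row of `coverage` sums to 1, the task weights also sum to 1.
```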

Presentation 3

Project Title: Lessons Learned: Reflections on designing and implementing a mid-course student feedback system

Author: Bradley Coverdale, University of Maryland Global Campus

Abstract: When the classroom is online, it can be difficult to understand what students need in order to be successful. If one waits until the course evaluation to make changes to a course, it may be too late to address the issue. This paper will explore the challenges and issues faced when trying to implement a midcourse feedback system in nine newly designed online courses. It will also address the structures and policies that were built, based on submitted feedback, to address student needs using backward design. It is not very helpful to provide additional data to student services if they do not have the capacity or knowledge to apply the findings. Midcourse data are potentially useful to several stakeholders, depending on the content. There could be an immediate issue that needs to be addressed, such as an error accessing some content in the online classroom. Additionally, the formative feedback could signal a need for external support via student services such as advising or tutoring. The data could also be useful when program faculty consider potential course revisions, or when instructors need additional training to take a clearer approach.

Presentation 4

Project Title: Investigating the reliability of learning analytics measures aggregated over short and long periods

Authors: Yingbin Zhang, University of Illinois at Urbana-Champaign; Luc Paquette, University of Illinois at Urbana-Champaign

Abstract: Process data, such as action logs and think-aloud data, have often been aggregated to extract measures of constructs that may be of interest for learning analytics, such as engagement and self-regulated learning. Aggregation may be acceptable for process data covering short periods of time, such as minutes or hours, since learners’ behaviors may be assumed to be relatively stable during such periods. However, the aggregation would be more convincing if evidence about the reliability and validity of the aggregated measures were provided; such evidence is usually missing in current learning analytics research. This presentation shows how statistical reliability and validity metrics can be computed by segmenting the period into several intervals, treating the measure aggregated within each interval as an item, and applying traditional psychometric methods, such as confirmatory factor analysis, to the items from different intervals. We use procrastination in an introductory computer science course as an example to illustrate this method. Aggregation may be problematic over long periods of time, such as weeks or months, as systematic changes may occur in learning behaviors over time. Aggregation may mask individual variation in behavioral changes, which may reveal individual differences in self-regulation and is critical for understanding learning. We continue using procrastination as an example to illustrate this point. In summary, measures aggregated over a short period should be accompanied by evidence of reliability and validity, while aggregation over a long period is not recommended.
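
The interval-as-item idea can be sketched as follows. This is an illustrative example, not the authors' analysis: the log file, column names, and the use of Cronbach's alpha (rather than the confirmatory factor analysis mentioned above, which would require an SEM package) are assumptions.

```python
# Illustrative sketch: treat each interval's aggregated procrastination measure as
# an "item" and estimate internal consistency across intervals. Data layout and
# column names are hypothetical; this is not the authors' analysis.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical alpha: rows = students, columns = interval-level measures."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical log with columns: student_id, assignment, hours_before_deadline.
logs = pd.read_csv("assignment_logs.csv")

# Aggregate a procrastination measure within each assignment (the "interval").
items = logs.pivot_table(index="student_id", columns="assignment",
                         values="hours_before_deadline", aggfunc="mean")
print(f"Cronbach's alpha across intervals: {cronbach_alpha(items.dropna()):.2f}")
```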

Presentation 5

Project Title: Towards putting educators back in the driving seat of learning analytics

Authors: Maha S K Al-Anqoudi, University of Glasgow; Dr Nikos Ntarmos, University of Glasgow; Dr Mireilla Bikanga Ada, University of Glasgow

Abstract: Online learning tools have exploded in popularity over the past few years. This has led to a deluge of student-system interaction data (Ashwin & McVitty, 2015) being captured and analysed, in turn making learning analytics a highly researched topic. The vast majority of existing approaches are primarily data-driven (Shen et al., 2020), taking the captured data at face value and trying their best to analyse and infer conclusions from them. However, it is a well-known problem (Cope, Kalantzis, & Searsmith, 2020) that this leaves much to be desired when analysing learning. It is our thesis that the root of the problem lies in the data that are collected. To this end, we propose taking a step back and putting educators first and foremost. Educators are prime experts in devising and adapting activities (e.g., discussion forums, field studies, and assessments) that are highly effective in helping them assess the level of understanding of their students. What we propose is driving the data collection methodology by first focusing on the activities that are highly valued by educators, then identifying key events in the process to be monitored, and ultimately transforming them into quantifiable indicators that can form a more valuable input to state-of-the-art machine learning methods. We are keen to discuss our thesis, backed by early findings of a study we conducted with educators who engaged in online teaching during the COVID-19 pandemic.

Presentation 6

Project Title: Early Identification of At-Risk Students in Introductory Engineering Courses

Authors: Huanyi Chen, University of Waterloo, ON, CA; Paul A.S. Ward, University of Waterloo, ON, CA

Abstract: In first-term engineering, it is all too frequently the case that the first sign of a student being in trouble is a poor midterm score. While we can, and do, attempt to rescue students who have performed poorly on midterms, there remains a stubbornly high correlation between midterm performance and final outcome. Simply put: the first sign of trouble comes too late to fix the problem. To address this, we initiated a project to identify at-risk students based on their behavioural interaction with our auto-grading platform, focusing on pre-midterm assignments, which start as early as the first day of classes. By analyzing the auto-grading interaction data, we have clearly observed behavioural changes pre- and post-midterm. Notable changes include submission times, assignment performance, and the number of submissions. For example, at-risk students increased their average number of submissions to the auto-grader by roughly 50 percent, seemingly upping their game; however, because the course becomes more difficult, submissions increase across the whole class, and these students still failed the final exam. In addition, we are currently attempting to identify which behavioural changes might help students recover and which do not. Specifically, we are working on changing our platform from being primarily an auto-grading tool into one that focuses less on the grade and more on the feedback needed to bring about the behavioural changes required to turn their situation around. In summary, we are trying to discern what feedback we can provide that will most effectively help students learn.
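
A minimal sketch of how such pre-/post-midterm behavioural changes might be quantified from auto-grader logs is shown below; the file name, column names, and midterm date are placeholders, and this is not the authors' pipeline.

```python
# Hypothetical sketch: quantifying pre-/post-midterm behavioural change from
# auto-grader submission logs. File name, column names, and the midterm date are
# placeholders; this is not the authors' pipeline.
import pandas as pd

MIDTERM_DATE = pd.Timestamp("2021-10-25")  # placeholder date

subs = pd.read_csv("autograder_submissions.csv", parse_dates=["submitted_at"])
subs["period"] = (subs["submitted_at"] >= MIDTERM_DATE).map({False: "pre", True: "post"})

# Per-student submission counts before and after the midterm.
counts = subs.pivot_table(index="student_id", columns="period",
                          values="submission_id", aggfunc="count", fill_value=0)

# Relative change in submission volume, e.g. +0.5 means roughly 50% more submissions.
counts["rel_change"] = (counts["post"] - counts["pre"]) / counts["pre"].clip(lower=1)
print(counts.sort_values("rel_change", ascending=False).head())
```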

Presentation 7

Project Title: Are all assessments created equal? Investigating assessments and learning in first-year Bachelor courses at an Economics and Business faculty.

Authors: Pavani Vemuri, KU Leuven; Monique Snoeck, KU Leuven; Stephan Poelmans, KU Leuven

Abstract: Several studies have demonstrated the usefulness of learning analytics (LA) in tracking, studying, and gaining insights into student progress. The results from predictive algorithms and visualizations facilitate timely teacher interventions, help students self-regulate, and inform course or curriculum designers on improving course content and design. While many advances have been made in feature development and prediction, there is much less understanding of the use, characteristics, and improvement of the formative and summative assessments through which learning unfolds. Learning could happen either at the pace set by a teacher throughout the semester (enforced by mandatory assessments) or at the natural rhythm and timing with which a student studies. Knowing the relative importance of timing versus scores on attempts can help teachers decide whether tests should be mandatory and whether to allow multiple attempts. This longitudinal data study aims to improve understanding of the use of assessments in first-year bachelor's programs at an economics and business faculty offering blended and online courses. In particular, we investigate whether some assessments are more predictive than others and whether the pace of a course enforced by mandatory assessments improves student success. Our approach is to enhance existing techniques by focusing on feature selection, including timing, scores, and assessment-related mined sequences, to detect the relation and influence of assessments on where and when learning unfolds. Furthermore, we conclude that LA could help improve assessments and distinguish between courses and tests that need stricter deadlines and those that can allow students to follow their own rhythm or pace.
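
As an illustration of asking whether some assessments are more predictive than others, the following sketch compares per-assessment features by their importance in a simple classifier. The file, feature naming convention, and gradient-boosting model are assumptions, not the study's actual method.

```python
# Illustrative sketch (not the study's pipeline): checking whether some assessments
# are more predictive of course success than others by comparing feature importances.
# The file, feature naming convention, and classifier are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("first_year_course_features.csv")  # hypothetical export
# One score and one timing feature per assessment, e.g. "quiz3_score", "quiz3_days_late".
feature_cols = [c for c in data.columns if c.endswith(("_score", "_days_late"))]

X_train, X_test, y_train, y_test = train_test_split(
    data[feature_cols], data["passed_course"], test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
importances = pd.Series(clf.feature_importances_, index=feature_cols)
print(importances.sort_values(ascending=False).head(10))  # most predictive assessment features
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```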