LAK21 Assess
Workshop on Learning Analytics and Assessment
APRIL 13, 2021 - DAY 2
4 PM TO 7 PM PDT
Click here to access Google Doc for notes
In case of any issues with Zoom access, please email: mladen.rakovic@monash.edu
Roundtable 1
Project title:
Using Confirmatory Factor Analysis to Validate a Theorized Model of Reflection Assessment in Writing Analytics
Authors:
Ming Liu, mingliu@swu.edu.cn, Southwest University; Kirsty Kitto, Kirsty.Kitto@uts.edu.au, University of Technology Sydney; Simon Buckingham Shum, Simon.BuckinghamShum@uts.edu.au, University of Technology Sydney
Abstract:
Reflective thinking skills are essential in authentic, experiential learning, in both formal academic and workplace contexts. A range of reflection assessment frameworks and associated rubrics have been validated in the literature, providing the conceptual foundation for reflective writing analytics (RWA) to assist in tracking progression in reflective thinking. Moreover, we have already shown that RWA enables instant, 24/7 formative feedback on drafts, which no human teaching team can provide. We are now developing a five-factor model of the Capability for Written Reflection (CWRef), grounded in reflective writing pedagogy. We use confirmatory factor analysis (CFA) to validate the CWRef model by examining the relative contributions of textual features to each factor in the model, and the contributions of those factors to CWRef. The model has been evaluated on two reflective writing corpora from different disciplines, showing which textual features were significant indicators of factors in both corpora. Our results highlight the usefulness of these high-level textual features in writing analytics and also indicate that the reflective writing context is an important factor influencing the validity of the CWRef model. We propose that this new analytical assessment model could enable progression tracking in reflective writing, for improved formative feedback.
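To make the kind of CFA described above concrete, here is a minimal sketch using the Python package semopy. The factor and indicator names are hypothetical placeholders standing in for the five CWRef factors and the extracted textual features; this is an illustration of the general technique, not the authors' actual model specification or pipeline.

```python
# Illustrative CFA sketch with semopy; factor/indicator names are hypothetical.
import pandas as pd
import semopy

# Assumed data layout: one row per reflective essay, one column per textual feature.
data = pd.read_csv("reflective_features.csv")  # hypothetical file name

# Lavaan-style description: each latent factor is measured by several textual
# features, and a second-order factor (CWRef) loads on the five first-order factors.
model_desc = """
Context    =~ feat_context_1 + feat_context_2
Challenge  =~ feat_challenge_1 + feat_challenge_2
Feelings   =~ feat_feelings_1 + feat_feelings_2
Evaluation =~ feat_eval_1 + feat_eval_2
Intention  =~ feat_intention_1 + feat_intention_2
CWRef      =~ Context + Challenge + Feelings + Evaluation + Intention
"""

model = semopy.Model(model_desc)
model.fit(data)                   # estimate factor loadings
print(model.inspect())            # parameter estimates per path
print(semopy.calc_stats(model))   # fit indices (CFI, RMSEA, etc.)
```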
Roundtable 2
Project title:
Utility framework of course assessment supported by learning analytics
Authors:
Blaženka Divjak, bdivjak@foi.hr, Petra Žugec, pzugec@foi.hr, University of Zagreb, Faculty of Organization and Informatics
Abstract:
Assessment should be constructed and analysed holistically. Since individual assessments do not function in isolation, two aspects are important. First, it is not a single assessment task but the complete assessment programme, with a carefully prepared set of assessment tasks, that should be considered. Second, an assessment programme should be evaluated against an overarching framework. According to Van der Vleuten and Schuwirth (2005), the utility of assessment depends on five factors: reliability, validity, educational impact, acceptability and the cost of assessment. Our aim is to use learning analytics to support the evaluation of an assessment set according to this utility framework. For the whole assessment set, it is vital to clearly link each assessment to the intended learning outcomes (LOs), which improves validity. We used the analytic hierarchy process (AHP) and the analytic network process (ANP) to select evaluation criteria and determine their weights, and consequently the weights of the LOs. Further, we analysed different approaches to constructing a composite reliability index for the assessment programme at the course level. Based on that, and on data from the LMS, we addressed the reliability of the assessment set and of the tasks within different courses. It is also important to construct formative assessment tasks that support the summative tasks: the linkage between formative and summative assessment is essential for timely feedback to students and for the educational impact of assessment. Finally, acceptability depends on the perspectives of different stakeholders; the students' perspective can be gathered through a questionnaire.
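As an illustration of how AHP derives criterion weights, the sketch below computes weights from a pairwise comparison matrix via the principal eigenvector and checks the consistency of the judgements. The criteria shown and the judgement values are hypothetical and are not taken from the authors' study.

```python
# Minimal AHP sketch: criterion weights from a pairwise comparison matrix.
import numpy as np

criteria = ["reliability", "validity", "educational impact"]  # hypothetical subset

# A[i, j] = how much more important criterion i is than j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalised criterion weights

# Consistency ratio (CR); judgements are usually acceptable if CR < 0.1.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # random index for matrices of size n
cr = ci / ri

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```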
Roundtable 3
Project title:
Random forest model for the prediction of student outcomes based on dual language enrollment status and other factors
Author:
Thomas Y. Chen, thomasyutaochen@gmail.com, Academy for Mathematics, Science, and Engineering
Abstract:
Assessing the efficacy of multilingualism in education is a key aspect of its study. In an increasingly global world, multilingualism, including dual language education, is increasingly common and necessary. A metric commonly used to assess students in the United States is standardized test scores, such as the Scholastic Aptitude Test (SAT). In this work, we present a novel machine learning method to assess the benefits of dual language education. Specifically, we utilize a random forest (RF) model (an ensemble of decision trees) to predict U.S. student outcomes as measured by SAT scores. The input to the model consists of factors including dual language enrollment status, income level, region, and race. Each data point representing a student is labeled; we split the dataset in a 0.8:0.2 ratio for training and testing, respectively. Subsequently, we perform ablation studies by removing certain elements of the input modalities and compare results on the testing set. We find that an immersive multilingual education does indeed correlate with a higher SAT score. Our supervised baseline model achieves an 89.8% accuracy rate. This machine learning methodology tackles the problem of determining multilingualism’s benefits in education in a novel manner.
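The setup described above can be sketched as follows: a random forest trained on an 80/20 split, followed by a simple per-feature ablation. This is an illustration of the stated design, not the author's code; the column names and data file are hypothetical placeholders.

```python
# Illustrative sketch: random forest baseline plus feature ablation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("students.csv")          # assumed: one row per labeled student
features = ["dual_language", "income_level", "region", "race"]
X = pd.get_dummies(df[features])          # one-hot encode categorical inputs
y = df["sat_outcome"]                     # assumed label, e.g. SAT score band

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 0.8:0.2 split as in the abstract

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Ablation: drop one raw input feature at a time and retrain.
for dropped in features:
    cols = [c for c in X.columns if not c.startswith(dropped)]
    rf_abl = RandomForestClassifier(n_estimators=200, random_state=0)
    rf_abl.fit(X_train[cols], y_train)
    print(f"without {dropped}:", accuracy_score(y_test, rf_abl.predict(X_test[cols])))
```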
Roundtable 4
Project title:
Perceptions of science: towards an automated assessment of epistemological beliefs about science
Author:
Melanie Peffer, melanie.peffer@colorado.edu, University of Colorado Boulder
Abstract:
Science literacy is an essential goal of science education. It includes basic science knowledge, the ability to think critically about scientific evidence, and the skills to examine scientific information in order to be informed about the scientific consensus on issues such as vaccine schedules or global climate change. Developing a scientifically literate society requires fostering sophisticated epistemological beliefs about science, that is, the beliefs an individual holds about the nature of science knowledge. Although these beliefs are widely acknowledged as important for attaining science literacy, there is a lack of consensus regarding what it means for an epistemological belief about science to be sophisticated, particularly as it relates to conceptions of how science works both within and across disciplines. A major barrier to addressing these questions is the lack of robust methods for assessing epistemological beliefs about science. Some propose that student practices in an authentic science activity, such as inquiry, reflect underlying beliefs about the nature of science knowledge. Prior work shows that student inquiry practices captured in Science Classroom Inquiry (SCI) simulations existed on a continuum from more to less expert-like and correlated with students' epistemological beliefs about science, and not with their experience with biology content knowledge. This suggests that student practices in SCI may reflect underlying epistemological beliefs about science. Current work aims to establish SCI as a reliable and valid practices-based assessment of epistemological beliefs about science and to use it to understand the relationship between discipline-specific inquiry and the nature of science knowledge.
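A minimal sketch of the kind of correlational check described above is shown below; the variable names and data file are hypothetical, and the code is illustrative only, not the author's analysis.

```python
# Illustrative sketch: correlate a practices-based inquiry score from SCI with an
# epistemological beliefs measure and with a biology content-knowledge measure.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("sci_scores.csv")  # assumed: one row per student

rho_beliefs, p_beliefs = spearmanr(df["inquiry_practice_score"],
                                   df["epistemic_beliefs_score"])
rho_content, p_content = spearmanr(df["inquiry_practice_score"],
                                   df["biology_content_score"])

print(f"practices vs. beliefs: rho={rho_beliefs:.2f}, p={p_beliefs:.3f}")
print(f"practices vs. content: rho={rho_content:.2f}, p={p_content:.3f}")
```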