Focal Areas

Cognitive Diagnosis and Interviewing Methods

Members: Drs. Leighton (Lead), Daniels

Think-aloud and cognitive lab interview procedures are used for test development and validation studies in educational measurement, particularly for developing and validating items that measure higher-level cognitive processes. Because think-alouds and cognitive labs appear easy to implement, relatively little attention has been paid to how variations in these methods can alter or bias the content of response processes and the subsequent verbal reports. Research on interview methods contributes to both educational theory and practice. On the theoretical side, this research can inform how distinct verbal probes (i.e., specific language, clarification questions) influence participants’ cognitive or response processes and how contextual procedures (e.g., digital versus face-to-face environments) can introduce specific biases into responses. On the practical side, these research results can inform the development and validation of test items, which in some cases belong to high-stakes testing programs.

Competency-Based Assessment

Members: Drs. Poth (Lead), Gierl, King

Competency-based education is a model that approaches instruction and assessment from an outcomes-focused perspective that is learner-centred and emphasizes progression towards competence. Competencies are defined as specific, demonstrable skills or knowledge necessary for the practice of a profession, and identifying these competencies must precede the development of competency-based approaches to training. We seek to understand how learning and assessment experiences across different contexts (coursework, clinical settings, practicums) can support the development of competent professionals (teachers, researchers, physicians). Examples of current projects include: (a) development and validation of mixed methods research competencies; (b) assessing the impacts of a competency-based assessment system with medical residents; and (c) examining the key features of a competency-based approach to program evaluation.

Computer-Based Testing

Members: Drs. Bulut (Lead), Gierl, Cormier, Daniels, Cutumisu

Computer-based test delivery marks an important paradigm shift for educational assessment. Simply put, large-scale testing is no longer feasible or desirable in a paper-based format because printing, scoring, and reporting require tremendous time, effort, and expense. As the demand for more frequent testing escalates, the cost of administering paper-based tests will only continue to increase. One solution that can curtail some of the test delivery cost while providing important benefits for examinees is to move to a computer-based testing (CBT) system. CBT permits testing on demand, allowing examinees to write their exams on a more frequent and flexible schedule. CBT provides examinees with real-time scoring and immediate feedback. CBT also supports multimedia item types that allow test developers to measure more complex performances. In addition, the adaptive form of CBT (computerized adaptive testing, or CAT) can measure students' performance more accurately with far fewer test items by matching item difficulty to each examinee's estimated ability. Because of these important benefits, a broad adoption of and widespread transition to computerized testing is now underway. Our area focuses on developments in CBT and CAT.
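
To make the adaptive logic of CAT concrete, the sketch below simulates a short adaptive test under a Rasch (one-parameter logistic) model with a simple Newton-Raphson ability update. The item bank, the clamping range on the ability estimate, and the fixed-length stopping rule are illustrative choices for this sketch, not features of any particular operational testing program:

```python
import math
import random

def prob_correct(theta, b):
    """Rasch model: probability of a correct answer given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def select_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current ability
    estimate -- the maximum-information choice under the Rasch model."""
    candidates = [i for i in range(len(bank)) if i not in used]
    return min(candidates, key=lambda i: abs(bank[i] - theta))

def update_theta(theta, responses):
    """One Newton-Raphson step on the Rasch log-likelihood, clamped to [-4, 4]
    so early all-correct or all-wrong patterns cannot push the estimate to infinity."""
    grad = sum(u - prob_correct(theta, b) for b, u in responses)
    info = sum(prob_correct(theta, b) * (1.0 - prob_correct(theta, b)) for b, u in responses)
    if info > 0:
        theta += grad / info
    return max(-4.0, min(4.0, theta))

def run_cat(bank, true_theta, n_items=10, seed=0):
    """Administer n_items adaptively to a simulated examinee; return the final ability estimate."""
    rng = random.Random(seed)
    theta, used, responses = 0.0, set(), []
    for _ in range(n_items):
        i = select_item(theta, bank, used)
        used.add(i)
        u = 1 if rng.random() < prob_correct(true_theta, bank[i]) else 0
        responses.append((bank[i], u))
        theta = update_theta(theta, responses)
    return theta
```

For example, `bank = [b / 2 for b in range(-6, 7)]` gives thirteen items with difficulties from -3 to 3, and `run_cat(bank, 1.0)` administers ten of them adaptively. Because each item is chosen near the running estimate rather than spread across the full difficulty range, the test concentrates measurement precision where the examinee actually is, which is why CAT can match the accuracy of a much longer fixed-form test.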

Innovations in Research Methods

Members: Drs. Gierl (Lead), Poth

Developing new research practices is necessary for existing methods to keep pace with a changing world. Examples of current projects include: (a) development of a complexity-sensitive approach to mixed methods research; (b) developments in automatic item generation methods; and (c) developments in automated essay scoring.

Learning Analytics

Members: Drs. Cui (Lead), Gierl, Leighton, Bulut, Cutumisu

Advances in computer technology have opened the door to technology-based assessments with dynamic, interactive, real-world tasks that allow students to demonstrate complex competencies and higher-order thinking skills. This new form of assessment has the potential to reform both the types of learning that can be assessed and the kinds of information that can be gathered as evidence for measuring and interpreting learning. Traditional psychometric models designed for well-structured student responses are no longer adequate. To analyze the more complex data generated by technology-based assessments, such as log files, eye-tracking records, and facial-expression data, we apply data mining and machine learning techniques to this unstructured data to discover patterns and features that can predict performance.
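
As one illustration of this kind of pipeline, the sketch below extracts a small feature vector from a raw event log and classifies it with a deliberately simple nearest-centroid rule. The log format, the action name "hint", and the pass/fail labels are hypothetical, and the classifier stands in for whatever data mining or machine learning model a real project would use:

```python
from collections import Counter, defaultdict

def extract_features(log):
    """Turn a raw event log, a list of (timestamp_seconds, action) pairs, into a
    fixed-length feature vector: time on task, number of actions, hint requests."""
    times = [t for t, _ in log]
    actions = Counter(a for _, a in log)
    time_on_task = max(times) - min(times) if len(times) > 1 else 0
    return [time_on_task, len(log), actions.get("hint", 0)]

def nearest_centroid(train, labels, x):
    """Classify feature vector x by the label whose mean training vector is closest."""
    groups = defaultdict(list)
    for features, label in zip(train, labels):
        groups[label].append(features)
    centroids = {label: [sum(col) / len(fs) for col in zip(*fs)]
                 for label, fs in groups.items()}
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], x))
```

The point of the sketch is the two-stage shape shared by most log-file analyses: an unstructured stream of timestamped events is first reduced to a structured feature vector, and only then does a predictive model operate on it. Richer data sources (eye tracking, facial expression) slot into the same pipeline by contributing additional features.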

Motivation and Assessment

Members: Drs. Daniels (Lead), Leighton, Cormier, Cutumisu, Poth

Motivation refers to the reasons people choose to do things. At school, students often do things for grades, and teachers often grade things to ensure students do them. This process, however, represents a type of external motivation that can undermine students’ natural motivation and drive to learn. We seek to understand how assessment influences motivation and how motivation might be leveraged to change the way teachers use assessments.

Performance-Based Assessment

Members: Drs. Cui (Lead), Cutumisu, Leighton

Performance-based assessments constitute more authentic tools to measure student learning and are used as alternatives to traditional multiple-choice tests. Specifically, they aim to measure learners' abilities to apply the knowledge and skills acquired from instruction. These types of assessments require students to demonstrate procedural knowledge and abilities by engaging in activities such as solving challenges or creating artifacts. Digital performance-based assessments can capture learners’ actions as they solve a challenge, focusing on both learning processes and learning outcomes.

Psychological Assessment

Members: Drs. Cormier (Lead), Leighton, Bulut, Cutumisu

This program of research examines psychological assessment at a number of levels including the psychometric properties of tests, the development of measures representing psychological constructs (e.g., human intelligence, learning, development), and the implications for the practice of school and clinical psychology. Not surprisingly, measurement and applied psychometrics are integral to the research that is produced at all these levels. Examples of current projects include: (a) the relationship between cognitive performance and learning; (b) the linguistic demand of psychological tests; and (c) the development of strong psychological assessment skills (i.e., training).

Socio-emotional Variables in Learning and Assessment

Members: Drs. Leighton (Lead), Cui, Daniels, Cutumisu

Educational assessment of student learning outcomes is often discussed in relation to cognitive strengths and weaknesses (and/or curriculum standards), but less often in light of students’ social and emotional profiles. Social and emotional variables are fundamental to understanding the whole learner; they provide essential context for helping students overcome specific challenges and readying them for long-term achievement. Important areas of educational measurement research therefore include developing theoretically driven instruments to assess: (a) students’ relationships with teachers (e.g., secondary attachment; trust) and other students (e.g., empathic relations) in typical learning environments, and (b) students’ working (mental) models of learning (e.g., achievement goals; attitudes towards mistakes; openness to discussing mistakes; mindset) within environments where they will express their performances and receive feedback on those performances. Challenges in this area include integrating psychological theory and measurement, developing instrumentation, and establishing defensible validation practices.