
Shelley Ross Discussion Notes from Feb 2010 Journal Club

posted Feb 18, 2010, 9:50 AM by Anna Oswald

Supervisor and self-ratings of graduates from a medical school with a problem-based learning and standard curriculum track

Distlehorst, Dawson, & Klamen (2009). Teaching and Learning in Medicine, 21, 291-298.

 

Reason for choosing article:

This article was intriguing in that it looked at long-term evaluation of graduates of a PBL program. Given our relatively recent foray into a PBL curriculum, I felt this article would be of relevance to this Faculty.

 

Summary of article:

Southern Illinois University School of Medicine has two curriculum tracks: a standard curriculum (STND) and a problem-based learning curriculum (PBL). The school also runs a long-term follow-up project that collects data from and about its graduates as they progress through residency. In this study, ratings of residents were obtained across 3 main categories. Ratings were collected at the end of Years 1 and 3 of residency and included graduates from 9 classes (1994-2002). Two sets of ratings were used: self-reports from the graduates, and residency supervisors' ratings of those residents. The research questions all examined differences between self- and supervisor ratings, looking specifically for changes over time and between programs. The researchers found that supervisor ratings did not differentiate between the STND and PBL groups at the end of Year 1, but did differentiate between the two at the end of Year 3. Supervisors rated STND graduates higher than PBL graduates on 5 of 6 noncognitive items and 2 of 3 general ratings. Supervisor ratings increased between Year 1 and Year 3 in 9 competencies for STND graduates, but showed no change across the two data collection periods for PBL graduates. For both STND and PBL graduates, the largest increase in self-ratings between Year 1 and Year 3 was in overall competence in their specialty area. The researchers do not draw any conclusions about differences between the STND and PBL curricula.

 

Comments on the article:

The researchers list several shortcomings of this study and state that these shortcomings are why they present no conclusions about PBL compared to a traditional curriculum. Interestingly, they have one major finding that supports PBL curricula: there is good concordance between PBL residents' self-ratings and supervisor ratings in all but 3 areas, whereas STND self-ratings differ from supervisor ratings in 11 areas. Accurate self-assessment was not one of the research questions, however, so this difference was neither highlighted nor elaborated upon in the article. This is unfortunate, as self-assessment is an area where physicians have difficulty. If PBL produces physicians who are better at self-assessment, that is worth talking about.

The reporting in this article was also interesting. Total numbers of participants were not given for residents or supervisors, only percentage response rates, and those response rates were aggregated across the full study. There was therefore no way to determine whether there was a change over time for the PBL group, which would be expected, as the PBL curriculum was continuously refined over those years.

The long-term follow-up project that provided the data for this study is something worth considering here.


The effects of performance-based assessment criteria on student performance and self-assessment skills

 

Fastre, van der Klink, & van Merrienboer (2010). Advances in Health Sciences Education.

 

Reason for choosing article:

This article reports findings from a study comparing performance-based and competency-based assessment criteria. Competency-based assessment is a hot topic in medical education right now, so I found it intriguing that this study found that performance-based assessment criteria produced better outcomes.

 

Summary of article:

The authors present background theory on the differences between competency-based and performance-based assessment criteria. They argue that for novice learners, competency-based assessment criteria are too vague and undifferentiated; novice learners instead need clear performance-based assessment criteria, broken down into hierarchies of lower-level skills. Their hypotheses are that novice learners given performance-based assessment criteria will learn better and self-assess better than counterparts given competency-based assessment criteria, and that the performance-based group will invest less mental effort in their learning. Participants were thirty-nine second-year students (2 males, 37 females; mean age = 18) in a nursing program at a European school. Students were taught stoma care through a lecture, asked to judge several video examples of the procedure, and then completed a practical example; a short multiple-choice quiz was administered after the lecture. Students were given either performance-based or competency-based criteria to assess the video examples, and they used these same criteria to assess each other and to self-assess during the stoma care procedure (a teacher also assessed students in the practical portion). Before the study began, students completed a questionnaire on their perceptions of the relevance of self-assessment and their ability to self-assess; at the end of the study, a second questionnaire measured motivation, self-regulation, interest, task orientation, and reflection. Between each assessment task, students completed a rating scale of mental effort. The researchers found that while both groups were at equivalent knowledge levels after the lecture, across the video assessments, peer assessments, and teacher assessments of student performance, the group given the performance-based assessment criteria scored significantly higher than the group given the competency-based criteria. The performance-based group also reported significantly less mental effort during assessment. There was no significant difference between the groups on self-assessment. The authors conclude that performance-based assessment criteria allow novice learners to learn more efficiently and to understand better what is expected of them, and state that these findings “yield the clear guideline that novice students should be provided with performance-based assessment criteria in order to improve their learning process, and reach higher test task performance”.

 

Comments on article:

This article falls squarely into the debate over the difference between a competency and a performance outcome. The authors draw a very clear demarcation between the two, following Gregoire (1997) in defining competencies as constellations of skills, knowledge, and attitudes; performance-based assessment criteria, in contrast, break higher-level skills down into a number of lower-level criteria. The authors' conclusions depend on these fairly rigid definitions.

The article is extremely well written and well constructed, and the arguments are presented clearly. However, the conclusions rely on the assumption that competencies have no finer granularity than general statements about constellations of skills, knowledge, and attitudes; the authors do not allow for competencies being stated as measurable outcomes.

The conclusions reached by the authors overreach. The sample was small (n = 39), and the group was not representative (37 females to 2 males). Further, the task was a discrete procedural one, made up of a distinct set of steps; higher-order thinking and learning were not needed. Finally, the competency-based assessment criteria were very vague, while the performance-based criteria were highly detailed. I would be cautious about interpreting these results given the bias built into the assessment criteria.

 
