09   Should knowledge be assessed in OSCE/OSPE


Real knowledge is to know the extent of one's ignorance. 

~Confucius

Contents:

Homepage - OSCE and OSPE

Compiled Responses

 The Beginning

01.   Introduction and Basics of OSPE/OSCE

02.   Is OSPE glorified spotting?

03.   How to set up OSCE and OSPE stations

04.   How to collect the answer-sheets in OSCE/OSPE from students

05.   OSCE reliability

06.   Levels of competencies in evaluation

07.   Practical experiences of OSPE/OSCE

08.   OSCE in testing Communication and attitudes

09.   Should knowledge be assessed in OSCE/OSPE

10.   Simulated patients

11.   Examiner training: a must to do

12.   Standard setting in OSCE

13.   Disadvantages of OSCE

 

Should knowledge be assessed in OSCE/OSPE?


Barathi: As we know, in clinical/practical examinations we are focusing on "skills". Should we waste our time asking questions that are meant to check knowledge?
Bhatti: Sometimes the skill you want to test takes very little time, especially in ophthalmology, even though the importance of the skill may be immense. In such cases I think we can also incorporate the knowledge component by giving a clinical scenario. I am attaching an OSCE station and marking scheme for everybody's comments.
Barathi: Bhatti sir, thanks for your response. Are you still in the middle of the Middle East or down in the east?
Please find my response. I hope Sajjad will give his comments shortly.
Please read the comments Dr TS gave when I posted the OSPE. I was searching for the reasoning behind sir's comment and found the following, which may be of use to you and others.
First, despite our intuitions, a number of studies have shown that OSCE stations which use a single global rating, or a few global ratings, on seven-point scales have as high reliability and greater validity (expressed in terms of relationship to the remainder of the test) than OSCE stations derived from detailed yes/no performance checklists. In addition, it has been shown that while global ratings strongly differentiate between practising physicians, residents, and clinical clerks, detailed checklists, which by their nature reward thoroughness, do not (Hodges 1999). This could be viewed as good news for examiners, since it indicates that less attention need be paid to the development and critiquing of specific detailed checklists, and examiners can be given more freedom of judgment in making assessments on OSCE stations.
Pass/fail decisions based on the marks awarded may be misguiding; let me illustrate that with the following:


The Coffee-Pot Caper
The difference between the norm-referenced and criterion-referenced methods of evaluation was beautifully illustrated some years ago by an example that went like this: imagine that an objective called for a student to be able to make a pot of coffee, when given all the necessary tools and equipment. A checklist of each of the steps in the process is prepared, and the student's performance is then scored on both a norm-referenced and a criterion-referenced basis. Note the difference between the two methods of scoring:

Checklist for Making a Pot of Coffee

 

                              Norm-referenced    Criterion-referenced
                              scoring            scoring

Disconnects coffee pot             10
Disassembles coffee pot            10
Cleans components and pot          10
Inspects components                10
Fills pot with water               10
Reassembles components             10
Fills basket with coffee            0                   x
Reconnects coffee pot              10
Sets dial on coffee pot            10
Turns coffee pot on                10

Total                             90%           Not yet competent

 

Note that a norm-referenced approach would allow a student to accumulate a score of 90 percent, even though the student failed to make a pot of coffee. Using a criterion-referenced approach, on the other hand, the same student would receive a score of zero. By failing to accomplish the objective of being able to make a pot of coffee, the student must be judged "not yet competent." (Just last week I failed to put the little coffee pot under the spout of our espresso maker. I made coffee, all right, all over the kitchen counter . . . but my wife refused to give me 90% for my effort. Never mind what she did give me.) Though there are some uses for norm-referenced evaluation, our attention will be focused on criterion-referenced evaluation. When we want to know whether an expectation (objective) or criterion has in fact been achieved, only criterion-referenced procedures are appropriate. (Robert F. Mager)
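The coffee-pot example can be sketched in a few lines of code. This is a hypothetical illustration (the step list and function names are my own, not from any assessment tool): a checklist-total score rewards partial completion, while an all-or-none competence judgment fails the whole task if any step is missed.

```python
# Hypothetical sketch of the coffee-pot checklist scoring described above.
# Each (step, completed) pair records whether the student performed that step.

STEPS = [
    ("Disconnects coffee pot", True),
    ("Disassembles coffee pot", True),
    ("Cleans components and pot", True),
    ("Inspects components", True),
    ("Fills pot with water", True),
    ("Reassembles components", True),
    ("Fills basket with coffee", False),  # the one missed step
    ("Reconnects coffee pot", True),
    ("Sets dial on coffee pot", True),
    ("Turns coffee pot on", True),
]

def checklist_score(steps):
    """Percentage of checklist items completed (equal marks per item)."""
    done = sum(1 for _, ok in steps if ok)
    return 100 * done // len(steps)

def competence_verdict(steps):
    """All-or-none judgment: competent only if every step was completed."""
    return "Competent" if all(ok for _, ok in steps) else "Not yet competent"

print(checklist_score(STEPS))       # 90
print(competence_verdict(STEPS))    # Not yet competent
```

The same performance yields 90% on one report and a fail on the other, which is exactly the point of the passage.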

 
Praveen: The cognitive domain, or knowledge, is a very important part even when we are testing a skill or the psychomotor domain, as achieving a skill also requires knowledge.
In OSPE/OSCE, in a few places an observer station is followed by a question station; this becomes contextual, as knowledge related to that skill is being tested. We don't have enough time during the procedural station itself to test the cognitive domain.
Dr Tejinder Singh: Good effort, Barathi, but you seem to have missed the bull's eye. The coffee-pot analogy is a good example, but not of criterion- or norm-referenced testing.
The example you gave is representative of identifying the critical competencies in a task. There can be many examples of this: cleaning the airways before using bag and mask, for instance. However, it is not an example of criterion-referenced testing (CRT). Similarly, the first column is not norm-referenced (NRT). NRT means classifying the students in relation to each other. So you end up saying that A did better than B and B did better than C. However, you cannot say that A or B knew the task completely. NRT scores are not represented as 90%; rather, they would be represented as the 90th percentile! Selection tests are typical examples of NRT.

CRT, on the other hand, is concerned with successful completion of the task. In CRT, there are only two categories: pass or fail. So everyone in a class can pass, or everyone can fail. There are no positions and no ranks. CRT is more appropriate for formative assessment, especially when training for clinical skills.
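The distinction above can be made concrete with a small sketch (the marks and cut-off are made up for illustration): NRT reports a position relative to the cohort, while CRT reports pass or fail against a fixed criterion, so the same raw mark tells two different stories.

```python
# Hypothetical contrast between norm-referenced reporting (percentile rank
# within a cohort) and criterion-referenced reporting (pass/fail vs cut-off).

def percentile_rank(score, cohort):
    """Percent of the cohort scoring strictly below this score (NRT-style)."""
    below = sum(1 for s in cohort if s < score)
    return 100 * below / len(cohort)

def crt_result(score, cutoff):
    """Pass/fail against an absolute criterion (CRT-style)."""
    return "pass" if score >= cutoff else "fail"

cohort = [45, 52, 58, 61, 64, 67, 70, 73, 78, 85]

# A mark of 78 beats 8 of 10 classmates (80th percentile), yet against an
# absolute cut-off of 80 it still fails the criterion.
print(percentile_rank(78, cohort))   # 80.0
print(crt_result(78, cutoff=80))     # fail
```

Note also that with CRT nothing stops the whole cohort from passing (or failing), whereas percentile ranks always spread students out, which is why CRT suits formative assessment of clinical skills.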

I already replied to Anupama regarding global checklists, without having read this mail.

Barathi: Thank you, sir. I stand corrected.
 
Anshu: I went through your mail and want to applaud your enthusiasm. Somehow I feel that too much hair-splitting in yes/no (binary) checklists dilutes the real message, except for techniques and tests where it works. For a good OSCE station, where we are looking for higher-order problem solving or interpretation, I would prefer global ratings.
From what I notice, almost all the OSCE stations we have made have tested isolated skills. How does one make stations which test two or three things together, for example communication, knowledge, and attitude at once? It would be more challenging and creative to design such stations. Or will this make marking difficult?
 
Anupama: "The eyes see only what the mind knows" So, if skills are there, the student would definitely be having a basic knowledge of them.

 

 

 

Discussion Summary

Introduction

Origin and evolution of OSCE

What is OSCE/OSPE

Important terminologies in OSCE and OSPE

Process - OSCE and OSPE

Competencies

Advantages of OSCE and OSPE

Limitations of OSCE and OSPE

The final words