Understand the concept and importance of outcome measures, including the following key terms: reliability, validity, responsiveness
Identify the psychometric properties of outcome measures
Validity
Reliability
Apply criteria for selecting appropriate outcome measures
Evaluate and address challenges in outcome measurement
Assessing Treatment Effectiveness
Comparing Interventions
Monitoring Progress
Objective Data Collection
Communication and Reporting
Regulatory and Funding Requirements
Intra-rater reliability
Inter-rater reliability
Test-retest reliability
Internal consistency
Reliability statistics: intraclass correlation coefficient (ICC), kappa, and weighted kappa
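As a worked sketch of one of the statistics listed above, Cohen's (unweighted) kappa compares the observed agreement between two raters with the agreement expected by chance. The ratings below are invented for illustration only:

```python
# Sketch: Cohen's kappa for two raters using a categorical scale.
# The patient ratings below are invented illustration data.

def cohens_kappa(rater_a, rater_b, categories):
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over categories.
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Two raters classifying 10 patients as "mild", "moderate", or "severe".
a = ["mild", "mild", "moderate", "severe", "mild",
     "moderate", "severe", "mild", "moderate", "mild"]
b = ["mild", "moderate", "moderate", "severe", "mild",
     "moderate", "severe", "mild", "mild", "mild"]

print(round(cohens_kappa(a, b, ["mild", "moderate", "severe"]), 3))  # 0.677
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement; weighted kappa extends this by penalising large disagreements (e.g. "mild" vs "severe") more than small ones.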
Criterion
Content
Face
Construct
The reliability of a test in the hands of one rater at two different time points.
The reliability of a test in the hands of two different raters, usually at the same time point or shortly afterwards.
The ability of a test to give the same answer when used at two different time points. This is similar to intra-rater reliability, but this term is used when there is no rater, for example when a patient is given a questionnaire to complete. It is evaluated in the same way as intra-rater reliability.
The consistency of results from different items within the same test. This is only relevant for scales made up of a number of different items or questions, such as those designed to measure constructs.
Construct validity is about whether a test measures a specific idea or concept. It's like making sure a puzzle piece fits into the right spot. There are two types of construct validity: divergent and convergent.
- Divergent Validity: This means that a test measures a different concept than something else it's supposed to be different from. For example, if you have a test that measures how much people like fruits and another test that measures how much people like vegetables, these two tests should show different results because fruits and vegetables are different things.
- Convergent Validity: This means that a test measures the same concept as something else it's supposed to be similar to. For example, if you have a test that measures people's reading comprehension and another test that measures people's writing skills, these two tests should show similar results because reading and writing are related to each other.
So, construct validity is about making sure a test measures what it's supposed to measure, either by showing differences when it should or by showing similarities when it should.
Criterion validity is about whether a test can predict or relate to something else that it should be related to. It's like if you can use something to guess what will happen next. For example, if you see dark clouds in the sky, you can predict that it might rain soon.
Let's say your school wants to find out if a new placement test can predict how well students will do in their math class. They give the test to all the students and then compare the scores with the students' actual math grades. If the test scores match well with the students' grades, then the test has criterion validity because it can predict how well they will do in the class.
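The placement-test example above is typically checked by correlating the test scores with the criterion (here, the later maths grades), often using Pearson's correlation coefficient. All numbers below are invented for illustration:

```python
# Sketch: criterion validity as the Pearson correlation between a
# placement test and later maths grades. Data are invented.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

placement_scores = [55, 62, 70, 48, 85, 77, 66, 90]
maths_grades     = [58, 65, 72, 50, 80, 75, 70, 88]

print(round(pearson_r(placement_scores, maths_grades), 3))
```

A correlation close to 1 would support the claim that the placement test has criterion validity for predicting maths performance; a correlation near 0 would not.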
Content validity is about whether a test measures all the important things it's supposed to measure. It's like making sure you cover all the important parts of a story. For example, if you want to know how well someone understands a math concept, you would include questions about different aspects of that concept to make sure you're testing their knowledge thoroughly.
Let's say your teacher gives you a math test about fractions. If the test includes questions about adding, subtracting, multiplying, and dividing fractions, as well as questions about fractions in real-life situations, then the test has content validity because it covers all the important parts of the topic.
Face validity is a way to determine if something looks like what it claims to be. It's like judging a book by its cover. For example, if you see a picture of a banana, you can tell it's a banana because it looks like one. So, face validity is about whether something seems to be true or accurate just by looking at it.
Let's say you have a quiz about animals, and the questions are all about different types of animals and their characteristics. If the questions on the quiz look like they're about animals and you can tell they are related to the topic, then the quiz has face validity because it seems to be a good test of your knowledge about animals.
Describe the following pictures in terms of reliability and validity. Here are your choices:
Reliable but not valid
Valid but not reliable
Neither reliable nor valid
Both reliable and valid
Minimal Important Difference (MID)
Minimal Clinically Important Difference (MCID)
The minimum amount of change in the outcome measure at which the patient recognises a benefit.
Effect Size
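Effect size is listed above without a formula. One common version (an assumption here, since the notes do not name a specific statistic) is Cohen's d: the difference between two group means divided by their pooled standard deviation. A sketch with invented outcome data:

```python
# Sketch: Cohen's d as an effect size for a two-group comparison.
# The treatment and control scores below are invented illustration data.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation from the two sample variances.
    pooled = (((n_a - 1) * stdev(group_a) ** 2 +
               (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

treatment = [24, 27, 30, 22, 28, 26]
control   = [20, 22, 19, 23, 21, 18]

print(round(cohens_d(treatment, control), 2))
```

By the usual rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 or above large; unlike the MID/MCID, the effect size describes statistical magnitude rather than whether the change matters to the patient.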