- The context scale in the original ORCA was scored on a Likert agreement/disagreement scale. In prior work we found that site-level context scores showed limited variation across sites, which we thought might result from leniency bias (a tendency to respond with moderate agreement); a colleague suggested that a scale based on observed behaviors is less susceptible to leniency bias than items asking for agreement or disagreement with opinions. In the revised ORCA, the context scale is still scored on a 5-point scale, but the anchors are frequencies of observed behaviors. Ideally, a time frame (e.g., the past month, or the past 12 months) would be specified.
- In terms of scoring, the idea is that the ORCA produces three scores, one for each scale (Evidence, Context, and Facilitation), though how you use those scores depends on what you’re using the ORCA for.
- We generally calculate an average score for each subscale and then each scale for each individual respondent per site, and then create site averages.
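The averaging steps above (items → subscale → scale per respondent, then respondent scores → site average) can be sketched as follows. The data layout and the subscale names are hypothetical, just to illustrate the order of aggregation; the real ORCA item map differs.

```python
from statistics import mean

# responses[site] -> list of respondents; each respondent maps a
# (hypothetical) subscale name to that respondent's 1-5 item scores
responses = {
    "site_A": [
        {"evidence_research": [4, 5, 3], "evidence_clinical": [4, 4]},
        {"evidence_research": [2, 3, 3], "evidence_clinical": [5, 4]},
    ],
}

def respondent_scale_score(subscales):
    # average items within each subscale, then average the subscale means
    subscale_means = [mean(items) for items in subscales.values()]
    return mean(subscale_means)

def site_scale_score(site):
    # average the per-respondent scale scores to get the site-level score
    return mean(respondent_scale_score(r) for r in responses[site])

print(round(site_scale_score("site_A"), 2))  # prints 3.79
```

Averaging subscale means (rather than pooling all items) weights each subscale equally even when subscales have different numbers of items.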
- In the original ORCA there are 3 questions that need to be reverse-scored before averaging the scales—these are not in the revised ORCA that my colleagues are using in the ACC study: Q3d ("Practice change is experimental, but may improve patient outcomes"), Q3e (“Practice change likely won't make much difference in patient outcomes”) and Q4d (“Practice changes have not been attempted in this clinical setting”). Reverse scoring can be achieved by subtracting 6 from the item score and multiplying by -1; e.g., an item score of 5 reverse-scores as (5-6)*(-1)=1. NOTE: in our original validation work published in ’09, we proposed dropping items Q3d, Q3e, Q4d, and Q5a-Q5d from the original ORCA, as they exhibited poor reliability in past psychometric assessments.
- For missing values, we recommend calculating a subscale from the non-missing items whenever half or more of its items are completed. If a respondent is missing more than half of the items for a given subscale, do not use that respondent’s data to calculate the subscale or the corresponding scale. If the version of the ORCA used has a “don’t know/not applicable” option, which we had included at one point, those answers are treated as missing for the purpose of scoring.
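The missing-value rule can be sketched as below, with missing (and “don’t know/not applicable”) answers coded as `None` before scoring; the function name is ours:

```python
from statistics import mean

def subscale_score(items):
    """Average the non-missing items if half or more are answered;
    otherwise return None so the observation is excluded.
    "Don't know / not applicable" answers should already be coded
    as None (i.e., missing) in `items`."""
    answered = [v for v in items if v is not None]
    if len(answered) * 2 < len(items):  # fewer than half answered
        return None
    return mean(answered)

print(subscale_score([4, None, 5, 3]))        # prints 4 (3 of 4 answered)
print(subscale_score([None, None, 2, None]))  # prints None (1 of 4)
```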