In behavioral research, some measurement tools are more reliable (i.e., consistent) than others.
Reliability refers to the extent to which a measure yields consistent results across occasions, conditions, or items.
When a test administered at different time points under similar conditions yields similar results, we call this test-retest reliability.
For a multi-item scale, we hope each item behaves similarly to the other items; this is internal consistency (or simply, the reliability of the scale). Such similarity can be assessed through inter-item correlations, and Cronbach's α is used to quantify the level of internal consistency.
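The inter-item correlations mentioned above can be sketched in a few lines. This is a minimal illustration with made-up toy data (not from any scale discussed here): high positive correlations among items suggest they measure the same construct.

```python
import numpy as np

# Toy data: rows = respondents, columns = three items of a hypothetical scale
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
])

# Inter-item correlation matrix (rowvar=False treats columns as variables)
corr = np.corrcoef(items, rowvar=False)
print(corr.round(2))  # 3x3 matrix; here all off-diagonal correlations are positive
```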
When we develop a new scale, or apply an established scale to measure a construct, we want to verify from the collected data that it is a reliable measure. If the items measure the same construct, the responses should be similar; in other words, consistent across items. If the responses are dissimilar, the items might not be a reliable measure of the construct.
To quantify the extent to which the items in a scale are consistently measuring the same construct, we can use a reliability coefficient. A commonly used one is Cronbach's α (alpha).
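As a sketch of what such a coefficient computes (this is the standard formula, not jamovi's internal code), Cronbach's α can be calculated from a respondents-by-items matrix as α = k/(k−1) × (1 − Σ item variances / variance of total scores), where k is the number of items:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; items is a 2-D array, rows = respondents, columns = items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 3 similar items
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(responses), 3))  # → 0.922
```

Because the toy items are highly correlated, α comes out high; real scales with more items and noisier responses typically yield lower values.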
There are some commonly accepted descriptions for ranges of Cronbach's α:

Cronbach's α       Internal consistency
0.9 ≤ α            Excellent
0.8 ≤ α < 0.9      Good
0.7 ≤ α < 0.8      Acceptable
0.6 ≤ α < 0.7      Questionable
0.5 ≤ α < 0.6      Poor
α < 0.5            Unacceptable
To assess whether the items in a scale consistently measure the same construct, we examine internal reliability, or internal consistency. Cronbach's α (alpha) is the most commonly used measure for quantifying it.
For example, suppose we want to know whether the nine items in the Irrational Procrastination Scale (IPS) consistently measure the same construct, procrastination. We need to run an analysis on the collected data to test the internal reliability (Cronbach's α).
Q: How do we measure the internal reliability of a scale in jamovi?
A: We use the “Reliability Analysis” in the “Factor” module.
Another example is the Basic Self-Control (BSC) Scale. We want to know whether the 13 items in the BSC consistently measure the same construct, self-control. Again, we run an analysis on the collected data to test the internal reliability (Cronbach's α).
However, the internal reliability (Cronbach's α) of the BSC comes out to -0.0508, which is considered unacceptable. The main reason is that the scale includes reverse-coded items that had not yet been reversed.
Q: How do we correct the reliability analysis with the reverse-coded items?
A: We use the “Reverse Scaled Items” option in the reliability analysis to correct it. After reversing those items, Cronbach’s α is 0.67.
Now, if you think you're ready for the exercise, you can check your email for the link.
Remember to submit your answers before the deadline in order to earn the credits!