If you have a scale with multiple items, you can conduct a reliability analysis to examine whether the items consistently measure the same construct.
If you have a pool of items (not necessarily from a known scale) and would like to explore what construct(s) lie behind those items, or you want to develop a new scale, you may want to conduct Exploratory Factor Analysis (EFA).
EFA helps you extract potential factor(s) underlying multiple measurements. The key assumption is that the measurements (usually larger in number, say 10 or 20) may be driven by a smaller number of hidden factors (say 2, 3, or 4). The goal of EFA is to extract those hidden factors based on the shared variability among the measurements.
For example, if a single dimension captures a large share of the variance in the data and all items load on that factor (e.g., loadings > .3), we may conclude that those items likely belong to a single dimension (a 1-factor solution).
EFA is a data-driven approach that is commonly used in scale development. We are going to explore and explain whether there is a latent dimension (i.e., factor) behind a group of variables. Specifically, we ask:
Can we extract factor(s) from the item pool? (by conducting principal components analysis, PCA, or another extraction method)
If so, how many factors can be extracted? (by evaluating the eigenvalues)
What does (do) the factor(s) look like? Which items load on which factor(s)? (by choosing a factor rotation and then interpreting the post-rotation solution)
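The extraction and retention steps above can be sketched numerically. The snippet below simulates an item pool with a known 3-factor structure (the sample size, loadings, and noise level are all invented for illustration) and counts how many eigenvalues of the inter-item correlation matrix exceed 1, i.e., the eigenvalue-greater-than-1 rule mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 respondents on 9 items driven by 3 hidden factors
# (loadings and noise levels are made up for illustration)
n = 200
factors = rng.standard_normal((n, 3))
loadings = np.zeros((9, 3))
loadings[0:3, 0] = 0.8   # items 1-3 load on factor 1
loadings[3:6, 1] = 0.8   # items 4-6 load on factor 2
loadings[6:9, 2] = 0.8   # items 7-9 load on factor 3
items = factors @ loadings.T + 0.5 * rng.standard_normal((n, 9))

# Eigenvalues of the inter-item correlation matrix, largest first
R = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]

# Eigenvalue > 1 rule: retain factors whose eigenvalue exceeds 1
n_factors = int(np.sum(eigenvalues > 1))
print(n_factors)
```

With this clean simulated structure the rule recovers the three factors we planted; real item pools are noisier, which is why the scree plot and parallel analysis are usually consulted as well.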
Suppose Sophie doesn't believe in the original Big Five Model (the 5-factor solution from the item pool) and would like to explore potential, alternative factor solutions from the given list of items.
Q: How many factor(s) can be extracted from the current BFM item pool? Do those items pass the EFA assumptions? How should we interpret the results?
A: Perform EFA in jamovi, checking the assumptions and requesting the additional output shown in the video.
Results Interpretation
(assumption part) Both Bartlett's test of sphericity and the KMO measure of sampling adequacy were employed to assess the appropriateness of the current sample for performing exploratory factor analysis. Bartlett's test of sphericity was significant, p < .001, indicating that at least some inter-item correlations differed from zero. The overall KMO measure was .65, indicating marginal (mediocre) sampling adequacy.
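Both assumption checks can be reproduced from the correlation matrix alone using their standard formulas. A minimal numpy/scipy sketch on simulated data (the dataset and its dimensions are invented for illustration, not taken from the BFM item pool):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulated data: 150 respondents, 6 items sharing one common factor
n, p = 150, 6
common = rng.standard_normal((n, 1))
X = 0.7 * common + 0.7 * rng.standard_normal((n, p))
R = np.corrcoef(X, rowvar=False)

# Bartlett's test of sphericity: H0 is that R is an identity matrix
chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
p_value = chi2.sf(chi_sq, df)

# KMO: compares zero-order correlations with partial correlations
# (partial correlations come from the inverse of R)
inv_R = np.linalg.inv(R)
d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
partial = -inv_R / d
off = ~np.eye(p, dtype=bool)              # off-diagonal mask
r2, q2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
kmo = r2 / (r2 + q2)
print(p_value, round(kmo, 2))
```

A KMO near 1 means the partial correlations are small relative to the zero-order correlations, i.e., the items share enough common variance for factoring to make sense.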
The maximum likelihood method was selected for factor extraction, and Oblimin rotation was applied to aid interpretation. The number of factors extracted was based on eigenvalues greater than 1 and the point of inflexion in the scree plot. The cut-off for item loadings was chosen as 0.4*; items with loadings < 0.4 were excluded from the final analysis.
Three factors were extracted based on these criteria. Factor 1 comprised 5 items reported on a 5-point Likert scale and explained 16% of the variance, with factor loadings from |.45| to |.86|. Factor 2 comprised 5 items and explained 14% of the variance, with factor loadings from |.46| to |.92|. Factor 3 comprised 4 items and explained 12% of the variance, with factor loadings from |.70| to |.72|.
* In the video, we hid loadings below 0.3. To avoid interpreting double-loaded items and to make the result more interpretable, we adopted a more rigorous cut-off and interpreted only heavily loaded items (i.e., loadings > 0.4).
We can also determine the number of factors with alternative methods, such as parallel analysis or fixing the number of factors in advance (given specific reasons), and we can compare different factor rotations (Varimax vs. Oblimin).
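Parallel analysis retains a factor only if its observed eigenvalue exceeds the eigenvalues obtained from random data of the same shape. A sketch on simulated data (the sample size, item count, and 2-factor structure are made up for illustration; jamovi's implementation may differ in details such as the percentile used):

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" data: 200 respondents, 8 items driven by 2 planted factors
n, p = 200, 8
F = rng.standard_normal((n, 2))
L = np.zeros((p, 2))
L[:4, 0] = 0.8   # items 1-4 load on factor 1
L[4:, 1] = 0.8   # items 5-8 load on factor 2
X = F @ L.T + 0.6 * rng.standard_normal((n, p))
obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Benchmark: eigenvalues from many random datasets of the same shape
sims = []
for _ in range(200):
    Z = rng.standard_normal((n, p))
    sims.append(np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1])
threshold = np.percentile(sims, 95, axis=0)  # 95th percentile per position

# Retain factors whose observed eigenvalue beats the random benchmark
n_retain = int(np.sum(obs_eig > threshold))
print(n_retain)
```

Because sampling error alone inflates the largest eigenvalues of a correlation matrix, this random benchmark is generally stricter (and less prone to over-extraction) than the plain eigenvalue-greater-than-1 rule.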
The most commonly used methods for factor rotation are Varimax (from the orthogonal rotation category) and Oblimin (from the oblique rotation category). Orthogonal rotations keep the factors uncorrelated, whereas oblique rotations allow the factors to correlate.
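To make the idea of rotation concrete, the sketch below implements the classic SVD-based Varimax algorithm (the unnormalized variant, without Kaiser row normalization, so results can differ slightly from jamovi's) and applies it to a loading matrix that was deliberately mixed by a 30-degree rotation. The loading values are invented for illustration:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Rotate a loading matrix toward simple structure (Kaiser's varimax)."""
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        B = loadings @ R
        # Gradient of the varimax criterion with respect to the rotation
        G = loadings.T @ (B ** 3 - B @ np.diag((B ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt            # closest orthogonal rotation to the gradient
        crit = s.sum()
        if crit - crit_old < tol:
            break
        crit_old = crit
    return loadings @ R

# A clean simple-structure matrix: each item loads on exactly one factor
simple = np.array([[.9, 0], [.8, 0], [.7, 0],
                   [0, .85], [0, .75], [0, .6]])

# Mix the two factors with a 30-degree rotation to hide the structure
th = np.pi / 6
rot = np.array([[np.cos(th), -np.sin(th)],
                [np.sin(th),  np.cos(th)]])
mixed = simple @ rot

rotated = varimax(mixed)   # recovers simple structure (up to sign/order)
```

After rotation, each row again has one large loading and one near-zero loading, which is exactly the "simple structure" that makes a factor solution interpretable.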