While many different types of distributions exist (e.g., normal, binomial, Poisson), working with SEM generally only needs to distinguish normal from non-normal distributions.
PLS-SEM is a nonparametric statistical method. Unlike maximum likelihood (ML)-based CB-SEM, it does not require the data to be normally distributed.
However, it is nevertheless worthwhile to consider the distribution when working with PLS-SEM.
Normality refers to the shape of the data distribution for an individual variable.
If the variation from the normal distribution is sufficiently large, all resulting statistical tests are invalid, because the F- and t-statistics assume normality (Hair et al., 2010).
Non-normality can have serious effects in small samples (n < 50), but its impact effectively diminishes once the sample size exceeds 200.
Histogram: compares the observed data values with a curve approximating the normal distribution.
Normal Probability Plot: compares the cumulative distribution of the actual data values with the cumulative distribution of a normal distribution.
Skewness and kurtosis statistics.
Shapiro-Wilk test (n < 2000).
Kolmogorov-Smirnov test (n ≥ 2000).
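The two significance tests above can be sketched with SciPy; a minimal example, assuming the n = 2000 cutoff noted above (the variable and sample data are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=50, scale=10, size=300)  # hypothetical indicator scores

if len(x) < 2000:
    # Shapiro-Wilk for smaller samples
    stat, p = stats.shapiro(x)
else:
    # Plain Kolmogorov-Smirnov against a normal with the sample's own
    # mean/SD (a Lilliefors-style correction would be stricter)
    stat, p = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))

print(f"test statistic = {stat:.3f}, p-value = {p:.3f}")
# p > .05 suggests no significant departure from normality
```

For normally distributed data the Shapiro-Wilk W statistic lies close to 1; a significant p-value (p < .05) signals a departure from normality.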
Skewness: the degree of symmetry in the variable's distribution.
Threshold: -2 ≤ skewness ≤ 2 (Curran et al., 1996; West et al., 1995; Ghiselli et al., 1981).
A skewness of 0 indicates a perfectly symmetrical distribution.
Kurtosis: the degree of peakedness/flatness in the variable's distribution.
Threshold: -7 ≤ Kurtosis ≤ 7 (Curran et al., 1996; West et al., 1995).
Mesokurtic distribution: the normal distribution.
Leptokurtic distribution: a high degree of peakedness.
Platykurtic distribution: a low degree of peakedness (flat).
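The skewness and kurtosis thresholds above can be checked with SciPy; a minimal sketch (the item names and data are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = {
    "item1": rng.normal(size=250),       # roughly normal
    "item2": rng.exponential(size=250),  # deliberately right-skewed
}

for name, values in data.items():
    sk = stats.skew(values)
    ku = stats.kurtosis(values)  # excess kurtosis: ~0 for a normal distribution
    ok = abs(sk) <= 2 and abs(ku) <= 7   # thresholds cited above
    print(f"{name}: skewness={sk:+.2f}, kurtosis={ku:+.2f}, within thresholds={ok}")
```

Note that `stats.kurtosis` reports excess kurtosis (normal ≈ 0, leptokurtic > 0, platykurtic < 0), which matches the -7 to 7 threshold convention.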
The multivariate normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions.
Multivariate normality is commonly assessed with Mardia's coefficients of multivariate skewness and kurtosis.
The expected Mardia’s skewness is 0 for a multivariate normal distribution and higher values indicate a more severe departure from normality.
According to Bentler (2005) and Byrne (2010), the critical ratio value of multivariate kurtosis should be less than 5.0 to indicate a multivariate normal distribution.
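Mardia's coefficients and the kurtosis critical ratio can be sketched in NumPy from the standard formulas; a minimal translation, assuming a hypothetical data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))   # hypothetical data: 300 cases, 3 variables

n, p = X.shape
Xc = X - X.mean(axis=0)          # center each variable
S = (Xc.T @ Xc) / n              # biased covariance matrix (Mardia's convention)
S_inv = np.linalg.inv(S)

D = Xc @ S_inv @ Xc.T            # n x n matrix of Mahalanobis cross-products
b1p = (D ** 3).sum() / n**2      # Mardia's skewness: 0 under multivariate normality
b2p = (np.diag(D) ** 2).sum() / n  # Mardia's kurtosis: p(p+2) under normality

# Critical ratio (c.r.) for multivariate kurtosis; values below 5.0
# suggest multivariate normality (Bentler, 2005; Byrne, 2010)
cr = (b2p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
print(f"Mardia skewness={b1p:.3f}, kurtosis={b2p:.3f}, c.r.={cr:.2f}")
```

For the simulated normal data, the skewness stays near 0 and the critical ratio stays well below the 5.0 cutoff.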
Check and remove outlier cases.
Remove non-normal item from the model.
Bootstrapping (i.e., resampling with replacement from the existing data set).
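The bootstrapping remedy can be illustrated for a simple statistic; a minimal sketch, with the data, the choice of the mean, and the number of resamples all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=100)  # deliberately non-normal data

# Resample the existing data set with replacement, recomputing the
# statistic each time (PLS-SEM software does this for path coefficients)
boot_means = np.array([
    rng.choice(sample, size=len(sample), replace=True).mean()
    for _ in range(5000)
])

# Percentile confidence interval: no normality assumption required
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={sample.mean():.2f}, 95% bootstrap CI=({lo:.2f}, {hi:.2f})")
```

Because the interval comes from the empirical resampling distribution rather than a theoretical one, it remains valid even though the underlying data are skewed.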
An outlier is an observation distinctly different from the others.
Univariate detection: examines the distribution of observations for each variable and selects as outliers those cases falling at the outer ranges (high or low) of the distribution.
Bivariate detection: relates an individual independent variable to an individual dependent variable.
Multivariate detection: evaluates the position of each observation relative to the center of all observations on a set of variables.
To test for multivariate outliers, Hair et al. (2010) and Byrne (2010) suggested identifying extreme scores on two or more constructs using the Mahalanobis distance (Mahalanobis D²). It evaluates the position of a particular case relative to the centroid of the remaining cases, where the centroid is the point defined by the means of all the variables (Tabachnick & Fidell, 2007).
As a rule of thumb, the maximum Mahalanobis distance should not exceed the critical chi-square value, with the number of predictors as the degrees of freedom. Otherwise, the data may contain multivariate outliers (Hair, Tatham, Anderson, & Black, 1998).
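The Mahalanobis D² rule of thumb can be sketched as follows; the data set, the alpha level, and the planted outlier are all illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))   # hypothetical data: 200 cases, 4 predictors
X[0] = [6, -6, 6, -6]           # planted extreme case for demonstration

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mean
# Mahalanobis D^2 per case: distance from the centroid of all observations
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# Compare against the critical chi-square value at a conservative alpha
# (p < .001 is a common choice), df = number of predictors
critical = stats.chi2.ppf(0.999, df=X.shape[1])
outliers = np.where(d2 > critical)[0]
print(f"critical chi-square = {critical:.2f}; outlier cases: {outliers}")
```

Cases whose D² exceeds the critical value are flagged as potential multivariate outliers; the planted case 0 is picked up, while ordinary cases fall well below the cutoff.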