Hypothesis Testing
Hypothesis testing uses statistics to weigh the evidence against a proposed explanation of the data. Researchers state a null hypothesis (the observed result is due to chance alone) and an alternative hypothesis (the result reflects a real effect rather than chance variation). Researchers must then decide whether to reject the null hypothesis in favor of the alternative hypothesis. The p-value is computed and used as evidence against the null hypothesis: the lower the p-value, the stronger the evidence against the null hypothesis.
P-value Summary:
Alpha Level
The alpha level is the significance level chosen by the researcher before the test is run. It is typically set at 5% (α = 0.05), which corresponds to a 95% confidence level: the researcher accepts a 5% risk of rejecting the null hypothesis when it is actually true.
P-value Compared to the Alpha Level
A small p-value, less than or equal to the alpha level (p ≤ 0.05), signifies strong evidence against the null hypothesis, which is rejected in favor of the alternate hypothesis.
A large p-value, greater than the alpha level (p > 0.05), signifies weak evidence against the null hypothesis, which is therefore not rejected.
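As a minimal illustration of this decision rule, the sketch below (in Python, using a hypothetical p-value in place of one computed from real data) compares a p-value to the alpha level and reports whether the null hypothesis is rejected.

```python
# Minimal sketch of the p-value decision rule.
# The p_value here is hypothetical; in practice it comes from the statistical test itself.
alpha = 0.05
p_value = 0.010

if p_value <= alpha:
    print(f"p = {p_value:.3f} <= alpha = {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} > alpha = {alpha}: fail to reject the null hypothesis")
```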
Scenario
Seventy elementary students are randomly assigned to one of two mathematics instruction conditions. The control group (traditional instruction) comprises n = 35 students, and the experimental group (blended learning) comprises n = 35 students. Both groups receive the same content and the same mid-term exam. The researcher is interested in determining whether there is a statistically significant difference between the mid-term exam results of the two groups.
Null Hypothesis
H0: μ1 = μ2
H0: There is no significant difference between the mid-term exam results of the control and experimental groups.
Alternate Hypothesis
H1: μ1 ≠ μ2
H1: There is a significant difference between the mid-term exam scores of the control and experimental groups.
Significance Level
α = 0.05
Decision
If the p-value is less than or equal to the significance level (α = 0.05), the null hypothesis will be rejected and the alternate hypothesis will be accepted.
Data Analysis
Group Statistics (dependent variable: mid-term exam results)

Group   N    Mean      Std. Deviation   Std. Error Mean
A       35   75.4286   9.87953          1.66994
B       35   81.4286   8.95835          1.51424
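These descriptive statistics follow directly from the raw exam scores: the standard error of the mean is the standard deviation divided by the square root of N. A brief sketch, assuming hypothetical score arrays scores_a and scores_b, since the raw data behind the SPSS table are not shown here:

```python
import numpy as np

# Hypothetical raw mid-term scores; the real data behind the SPSS table are not shown here.
scores_a = np.array([72, 80, 65, 77, 84, 70, 75])   # control group (traditional instruction)
scores_b = np.array([85, 78, 90, 76, 82, 88, 79])   # experimental group (blended learning)

for label, scores in (("A", scores_a), ("B", scores_b)):
    n = scores.size
    mean = scores.mean()
    sd = scores.std(ddof=1)      # sample standard deviation, as SPSS reports
    se = sd / np.sqrt(n)         # standard error of the mean = SD / sqrt(N)
    print(f"Group {label}: N={n}, Mean={mean:.4f}, SD={sd:.4f}, SE={se:.4f}")
```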
T-Test Results
What happens next can be frightening. SPSS does a great job of generating the statistics; however, you have to know which row of the output table to read. Begin with Levene's Test for Equality of Variances, which assesses whether the variability in the two groups is about the same or different. Next, focus on the columns labeled F and Sig. (the p-value for Levene's test). The value in the Sig. column determines which row to read from.
In our example, the Sig. (p-value) for Levene's F-test is .708. If this value is greater than the alpha level of 0.05, you read from the top row (equal variances assumed); if it is less than or equal to 0.05, you read from the bottom row (equal variances not assumed). Here, .708 is greater than 0.05, indicating that the variability in the two groups' scores is about the same, so we read from the top row. With the correct row identified, we can now look at the t-test results, which show whether there is a statistically significant difference in the mean scores of the two groups. The t statistic is t = -2.662, and its p-value, found under the column Sig. (2-Tailed), is 0.010. Because 0.010 < 0.05, we can reject the null hypothesis.
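The same two-step workflow (Levene's test to decide which variant to use, then the independent-samples t-test) can be reproduced outside SPSS. Below is a sketch using SciPy's levene and ttest_ind functions, again with hypothetical scores_a and scores_b arrays standing in for the real exam data:

```python
import numpy as np
from scipy import stats

# Hypothetical scores standing in for the two groups' mid-term results.
scores_a = np.array([72, 80, 65, 77, 84, 70, 75])   # control group
scores_b = np.array([85, 78, 90, 76, 82, 88, 79])   # experimental group

# Step 1: Levene's test for equality of variances.
levene_stat, levene_p = stats.levene(scores_a, scores_b)

# Step 2: choose the t-test variant based on Levene's result,
# mirroring the "which row to read" decision in the SPSS output.
equal_var = levene_p > 0.05
t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=equal_var)

print(f"Levene: F = {levene_stat:.3f}, p = {levene_p:.3f} -> equal variances assumed: {equal_var}")
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
print("Reject H0" if p_value <= 0.05 else "Fail to reject H0")
```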
Conclusion
Based on the t-test results and the p-value, we can conclude that there is a statistically significant difference between control group A and experimental group B. The Sig. (2-Tailed) p-value of 0.010 is less than 0.05, so we reject the null hypothesis and accept the alternate hypothesis: the difference in group mean scores is not due to chance and can be attributed to students receiving instruction through blended learning.