Hypothesis Testing

Hypothesis testing is a statistical procedure used to evaluate whether there is enough evidence to reject a proposed statement, or "null hypothesis," about a population parameter. It involves using sample data to make inferences about the characteristics of a larger population.

The null hypothesis is a statement that represents the absence of an effect or relationship between variables. It is typically denoted by "H0" and is assumed to be true unless there is sufficient evidence to reject it. The alternative hypothesis, denoted by "Ha", is the statement that contradicts the null hypothesis and represents the presence of an effect or relationship.
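To make the two hypotheses concrete, here is a minimal sketch in Python using an assumed coin-flipping scenario (not taken from the text): the null hypothesis says the coin is fair, and the alternative says it is not.

```python
# Hypothetical coin-flip example (assumed scenario, purely for illustration).
# H0: the coin is fair (probability of heads p = 0.5)
# Ha: the coin is not fair (p != 0.5)
from scipy import stats

heads, flips = 62, 100   # made-up observed data

# A two-sided binomial test evaluates H0 against Ha from the observed counts.
result = stats.binomtest(heads, n=flips, p=0.5, alternative='two-sided')
print(f"p-value = {result.pvalue:.3f}")   # the p-value is interpreted later in this section
```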

Type I error, also known as a false positive, occurs when we reject the null hypothesis even though it is true. In other words, we mistakenly conclude that there is an effect or relationship between variables when there isn't one. The probability of making a Type I error is denoted by alpha (α) and is typically set at a level of 0.05 or 0.01.
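A quick way to see what alpha means in practice is to simulate many experiments in which the null hypothesis is actually true and count how often it is (wrongly) rejected. The sketch below assumes a one-sample t-test of H0: mu = 0 on data whose true mean really is 0; the rejection rate should land near the chosen alpha of 0.05.

```python
# Simulation sketch: when H0 is true, a test at alpha = 0.05 should reject
# (commit a Type I error) in roughly 5% of repeated experiments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_simulations = 10_000

false_positives = 0
for _ in range(n_simulations):
    # Draw a sample from a population where H0 (mu = 0) is true.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    p_value = stats.ttest_1samp(sample, popmean=0.0).pvalue
    if p_value < alpha:          # rejecting a true H0 is a Type I error
        false_positives += 1

print(f"Observed Type I error rate: {false_positives / n_simulations:.3f}")
# Expected to be close to alpha = 0.05
```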

Type II error, also known as a false negative, occurs when we fail to reject the null hypothesis even though it is false. In other words, we fail to detect an effect or relationship between variables when there is one. The probability of making a Type II error is denoted by beta (β) and depends on the sample size, effect size, and the chosen level of alpha.
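The dependence of beta on sample size can also be illustrated by simulation. In the sketch below the null hypothesis is false by assumption (the true mean is 0.5 while H0 claims it is 0), and the fraction of simulated experiments that fail to reject H0 estimates beta; it shrinks as the sample size grows, so power (1 - beta) rises.

```python
# Simulation sketch (assumed scenario: true mean 0.5, H0 claims mu = 0)
# showing how the Type II error rate (beta) depends on sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
true_mean = 0.5          # H0 is false: there really is an effect
n_simulations = 5_000

for sample_size in (10, 30, 100):
    misses = 0
    for _ in range(n_simulations):
        sample = rng.normal(loc=true_mean, scale=1.0, size=sample_size)
        p_value = stats.ttest_1samp(sample, popmean=0.0).pvalue
        if p_value >= alpha:     # failing to reject a false H0 is a Type II error
            misses += 1
    beta = misses / n_simulations
    print(f"n={sample_size:>3}  beta ~ {beta:.3f}  power ~ {1 - beta:.3f}")
```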

In hypothesis testing, we compute a test statistic (such as a t-statistic or z-statistic) and its associated p-value to determine whether there is enough evidence to reject the null hypothesis. If the p-value is less than the chosen level of alpha, we reject the null hypothesis and conclude that there is sufficient evidence to support the alternative hypothesis. If the p-value is greater than or equal to the chosen level of alpha, we fail to reject the null hypothesis and conclude that there is not enough evidence to support the alternative hypothesis.
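The decision rule can be shown end to end with a one-sample t-test. The data and the hypothesized mean of 100 in the sketch below are made up purely for illustration; the only point is the comparison of the p-value against alpha.

```python
# Minimal decision-rule sketch: compare the p-value from a one-sample t-test
# against alpha. The sample values and hypothesized mean are hypothetical.
import numpy as np
from scipy import stats

alpha = 0.05
sample = np.array([102.3, 98.7, 105.1, 101.4, 99.8, 103.6, 100.9, 104.2])

# H0: population mean = 100   vs.   Ha: population mean != 100
result = stats.ttest_1samp(sample, popmean=100.0)
print(f"t statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")

if result.pvalue < alpha:
    print("Reject H0: the data provide sufficient evidence for Ha.")
else:
    print("Fail to reject H0: not enough evidence to support Ha.")
```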
