Simple regression

The Regression Equation

Thus far, we have talked about using a best fit line to approximate the relationship between two variables. We call this relationship a correlation, and we can plot correlations on scatterplots. However, we have learned that the data rarely fit a straight line exactly. Our best fit line is the line that best represents the relationship, but if we use that line for predictions, we must accept that those predictions will involve some error. To find the line of best fit, we use a technique called simple regression.

Consider the image below. Let's imagine that our x variable measures participant scores on Exam 3 and our y variable measures participant scores on a Final Exam. As you can see from the data, we do not have a participant (see the blue dots) who scored 70 on Exam 3 (look at the x-axis). What if we wanted to predict how a person who scored a 70 on Exam 3 would do on the Final Exam? To make this prediction, we can use linear regression to find a best fit line. Once we find the line of best fit, we can use the line to find the predicted value of y for the given x value -- in this case, 70!

EXAMPLE 1a

A random sample of 11 statistics students produced the following data and scatterplot, where x is the third exam score out of 80 and y is the final exam score out of 200. Do you think you could predict the final exam score of a random student if you knew their third exam score?


The third exam score, x, is the independent variable and the final exam score, y, is the dependent variable. We will plot a regression line that best “fits” the data. If each of you were to fit a line “by eye,” you would draw different lines. We can use what is called a least-squares regression line to obtain the best fit line.

Consider the following diagram. Each data point has the form (x, y), and each point on the line of best fit found using least-squares linear regression has the form (x, y').

The y' is read as y prime and is sometimes written as y hat (ŷ, a y with a ^ for a hat!). The y prime (y') is the estimated value of y: the value of y obtained using the regression line. It is not generally equal to the observed value of y from the data.

The difference between the observed value of y and the predicted value of y, that is, y - y', is called the “error” or residual. It is not an error in the sense of a mistake. The absolute value of a residual measures the vertical distance between the actual value of y and the estimated value of y. In other words, it measures the vertical distance between the actual data point and the predicted point on the line.

If the observed data point lies above the line, the residual is positive, and the line underestimates the actual data value for y. If the observed data point lies below the line, the residual is negative, and the line overestimates the actual data value for y.

In the diagram above, the quantity y - y' is the residual for the point shown. Here the point lies above the line and the residual is positive.
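For example, suppose the line predicts y' = 170 for a particular x value but the observed final exam score is y = 175 (hypothetical numbers, used only for illustration). The residual is y - y' = 175 - 170 = 5: it is positive, the point lies above the line, and the line underestimates the actual score by 5 points.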

We denote each residual by the Greek letter epsilon (ε). For each data point, you can calculate the residual or error as y - y' = ε.

For the example about the third exam scores and the final exam scores for the 11 statistics students, there are 11 data points. Therefore, there are 11 ε values. If you square each ε and add the squares, you get ε₁² + ε₂² + ... + ε₁₁² = Σε².

This is called the Sum of Squared Errors (SSE).

Using calculus, you can determine the values of the intercept a and the slope b that make the SSE a minimum. When you make the SSE a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation

ŷ = a + bx

where a = ȳ - bx̄ and b = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)², with x̄ and ȳ the means of the x values and the y values, respectively.
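To make these formulas concrete, here is a minimal Python sketch (not part of the original example; the x and y lists are hypothetical scores rather than the 11-student sample) that computes a, b, the predicted values, and the SSE directly from the definitions above:

x = [65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75]            # hypothetical third exam scores
y = [150, 140, 160, 155, 160, 170, 165, 180, 175, 185, 190] # hypothetical final exam scores

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# slope: b = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum((xi - x_bar) ** 2 for xi in x)

# intercept: a = y_bar - b * x_bar
a = y_bar - b * x_bar

# predicted values y' and residuals (errors) for each data point
y_hat = [a + b * xi for xi in x]
residuals = [yi - yhi for yi, yhi in zip(y, y_hat)]

# Sum of Squared Errors -- the quantity the least-squares line makes as small as possible
SSE = sum(e ** 2 for e in residuals)

print(f"y' = {a:.2f} + {b:.2f}x,  SSE = {SSE:.2f}")

Any other choice of a and b applied to the same data would produce a larger SSE.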

Least Squares Criteria for Best Fit

The process of fitting the best-fit line is called linear regression. The idea behind finding the best-fit line is based on the assumption that the data are scattered about a straight line. The criterion for the best fit line is that the sum of the squared errors (SSE) is minimized, that is, made as small as possible. Any other line you might choose would have a higher SSE than the best fit line. This best fit line is called the least-squares regression line.

EXAMPLE 1b


The graph of the line of best fit for the third-exam/final-exam example (Example 1a) is as follows:

The least squares regression line (best-fit line) for the third-exam/final-exam example has the following equation:

ŷ = -173.51 + 4.83x

Remember, it is always important to plot a scatter diagram first. If the scatter plot indicates that there is a linear relationship between the variables, then it is reasonable to use the best fit line to make predictions for y given x within the domain of the x values in the sample data, but not necessarily for x values outside of that domain. You could use the line to predict the final exam score for a student who earned a grade of 73 on the third exam. You should not use the line to predict the final exam score for a student who earned a 50 on the third exam, because 50 is not within the domain of the x values in the sample data, which range from 65 to 75.
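For example, substituting x = 73 into the regression equation gives the predicted final exam score

ŷ = -173.51 + 4.83(73) = 179.08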

Understanding Slope

The slope of the line, b, describes how changes in the variables are related. It is important to interpret the slope of the line in the context of the situation represented by the data. You should be able to write a sentence interpreting the slope in plain English.

Interpretation of the Slope: The slope of the best-fit line tells us how the dependent variable (y) changes for every one unit increase in the independent (x) variable, on average.

Third Exam vs. Final Exam Example: The slope of the line is b = 4.83.

Interpretation: For a one-point increase in the score on the third exam, the final exam score increases by 4.83 points, on average.

Understanding Average Prediction Error

Just as we computed and understood the variability of a distribution by calculating the average deviation of the data from the mean, we would also like a measure of the average error of our prediction line. To compute the average error of prediction, we compute the average deviation of the data from the regression line; in other words, we are computing a standard deviation about the regression line. Previously we computed the standard deviation about the mean; now we compute the standard deviation about the regression line, and we interpret these deviations as errors in prediction. Besides this difference in interpretation (deviations are now errors), there is one computational difference: the denominator is N - 2 rather than N or N - 1. We use N - 2 because two parameters (the slope and the intercept) were estimated from the data in order to compute the sum of squares.

This standard deviation of the regression line is called the standard error of the estimate and is computed below. Note that in the equation we compute the sum of squares (SS) for y relative to the regression line y' and divide by N - 2, which is an average SS and therefore a variance. To get the standard error of the estimate, we then take the square root of that variance of the errors of the estimate:

standard error of the estimate = √( Σ(y - y')² / (N - 2) )
This gives us an interpretable value for how much error in prediction we should expect, on average, with our regression line, which should aid in our evaluation of, use of, and reliance (or not) on the regression equation.
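As a rough sketch of this computation (the function below is illustrative and not from the original text), the standard error of the estimate can be found from the observed y values and the predicted values y' on the regression line:

import math

def standard_error_of_estimate(y, y_hat):
    # sum of squares of y about the regression line (the SSE),
    # divided by N - 2 because the slope and intercept were estimated,
    # then square-rooted to return to the original units
    n = len(y)
    sse = sum((yi - yhi) ** 2 for yi, yhi in zip(y, y_hat))
    return math.sqrt(sse / (n - 2))

Applied to the y and y_hat lists from the earlier sketch, this reports roughly how many points a prediction from the line misses the observed final exam score by, on average.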

References:

  1. https://courses.lumenlearning.com/introstats1/chapter/the-regression-equation/

CC LICENSED CONTENT, SHARED PREVIOUSLY