There are many famous misnomers in statistics and probability theory, including several classics:
A random variable is neither random nor a variable. (Wilks; Mood)
The best estimate is not best, and it is often not even good enough.
You shouldn't expect the expected value. (2.3 children?)
Probability density is not a probability. (It can be bigger than one.)
The standard deviation is not standardized. (Two formulas, two versions of the statistic: the population version divides by n, the sample version by n - 1; see the numeric sketch after this list.)
Statistical independence among subsets of a finite set implies considerable structural interdependence among them (Smith 1987; see the brute-force check after this list).
Regression has nothing to do with regressing (going backward); the name stuck when the "regression to the mean" that Galton observed became attached, through Pearson's "regression line," to the least squares method itself (see Kopf's account of Galton and Pearson in the references below).
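A few of these quips can be checked numerically. The short Python sketch below (illustrative, with made-up data) shows that the expected value of a fair die, 3.5, is a value the die can never show; that a probability density can exceed one while the total probability still comes to one; and that the two standard-deviation formulas give different answers on the same data.

```python
import statistics

# Expected value need not be a possible outcome:
# a fair die averages 3.5, but never shows 3.5.
die_mean = sum(range(1, 7)) / 6
print(die_mean)  # 3.5

# A density can exceed 1: Uniform(0, 0.5) has density 2 on its
# support, yet the total area is still 2 * 0.5 = 1.
width, height = 0.5, 2.0
print(height, "density;", width * height, "total probability")

# Two formulas for "the" standard deviation:
# pstdev divides by n, stdev divides by n - 1.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(statistics.pstdev(data))  # population version: 2.0
print(statistics.stdev(data))   # sample version: about 2.138
```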
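Smith's structural point can also be glimpsed with a brute-force check. This is a sketch of one consequence under the uniform measure, not the paper's argument: there, events A and B are independent exactly when n × |A ∩ B| = |A| × |B| (with n the size of the space), so when n is prime no two nontrivial events can be independent at all.

```python
from itertools import chain, combinations

def subsets(points):
    """All subsets of a finite collection, smallest first."""
    return chain.from_iterable(
        combinations(points, r) for r in range(len(points) + 1)
    )

n = 5                      # uniform space with a prime number of points
omega = tuple(range(n))

nontrivial_independent = [
    (a, b)
    for a in subsets(omega)
    for b in subsets(omega)
    # independence under the uniform measure: n * |A & B| == |A| * |B|
    if n * len(set(a) & set(b)) == len(a) * len(b)
    # keep only pairs where both events are nontrivial
    and 0 < len(a) < n and 0 < len(b) < n
]
print(nontrivial_independent)  # []: independence forces heavy structure
```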
Misnomers create confusion and foment broader misunderstanding, and, if left unchecked, they can degenerate into cognitive dissonance and outright mistakes.
Contributions and commentary are welcome. Please send to admin @ ramas . com.
References
Smith, C.P. (1987). Statistical Independence—a misnomer. Mathematical Modelling 8: 183-185. Abstract: This paper examines statistical independence in finite sets. By considering the structure of sets which are statistically independent, it concludes that statistical independence of subsets of a set implies considerable structural interdependence in the sub-sets.
Kopf, Dan. The Discovery of Statistical Regression. Priceonomics. https://priceonomics.com/the-discovery-of-statistical-regression/. Excerpt:
We now refer to this phenomenon that Galton discovered as regression to the mean. If today is extremely hot, you should probably expect tomorrow to be hot, but not quite as hot as today. If a baseball player just had by far the best season of his career, his next year is likely to be a disappointment. Extreme events tend to be followed by something closer to the norm.
“Regression” came to be associated with the least squares method of prediction by the late 1800s. Karl Pearson, among the founders of mathematical statistics and a colleague of Galton’s, noticed that if you plotted the height of parents on the x-axis and their children on the y-axis, the line that best fit the data according to least squares had a slope of less than one. A slope of less than one is essentially the mathematical representation of “regression to the mean.” Pearson referred to this slope on a graph as the “regression line.” And thus the method of least squares and regression became somewhat synonymous.
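Pearson's slope observation from the excerpt above is easy to reproduce in simulation. The Python sketch below is illustrative only: the numbers are made up, with parent and child heights sharing a common component plus independent noise, so the least squares slope of child on parent comes out below one.

```python
import random

random.seed(42)

# Toy model: parent and child heights share a common component,
# but each also has independent variation, so they are only
# partially correlated and the fitted slope falls below 1.
n = 10_000
parents, children = [], []
for _ in range(n):
    shared = random.gauss(0, 1)               # common component
    parents.append(68 + 2 * shared + random.gauss(0, 1))
    children.append(68 + 2 * shared + random.gauss(0, 1))

# Least squares slope of child on parent: cov(x, y) / var(x).
mx = sum(parents) / n
my = sum(children) / n
cov = sum((x - mx) * (y - my) for x, y in zip(parents, children)) / n
var = sum((x - mx) ** 2 for x in parents) / n
print(cov / var)  # close to 0.8: tall parents, somewhat less tall children
```

The slope below one is exactly Galton's regression to the mean: the prediction for the child is pulled from the parent's extreme back toward the average.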