estimate of plausible values for the population mean (not the sample mean)
misconception:
For two independent groups, the means are statistically significantly different at p < .05 when the 95% CIs around the means are just touching.
truth:
When the 95% CIs of two independent groups overlap by about half the average margin of error (roughly a quarter of the full CI width), p ≈ .05.
When the CIs are just touching (zero overlap), p ≈ .01 (Cumming & Finch, 2005). A numerical check of this rule of eye is sketched below.
These rules do not apply to repeated-measures or paired designs. For those, construct a CI around the mean of the paired differences instead.
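A minimal numerical sketch of the rule, assuming two independent group means with equal standard errors and samples large enough for the normal approximation; the standard error value and the `p_from_overlap` helper are illustrative choices, not from the paper:

```python
# Sketch: p-values implied by a given amount of 95% CI overlap for two
# independent means, assuming equal standard errors (normal approximation).
from scipy import stats

se = 1.0                        # standard error of each group mean (assumed equal)
moe = 1.96 * se                 # margin of error = half-width of each 95% CI
se_diff = (2 * se ** 2) ** 0.5  # SE of the difference between independent means

def p_from_overlap(overlap):
    """Two-sided p for the mean difference implied by a given CI overlap."""
    diff = 2 * moe - overlap    # distance between the means at this overlap
    return 2 * stats.norm.sf(diff / se_diff)

print(p_from_overlap(0.5 * moe))  # overlap of half a margin of error -> ~0.04
print(p_from_overlap(0.0))        # CIs just touching (zero overlap)  -> ~0.006
```

Both values come out a little below .05 and .01, so under these assumptions the rule of eye errs slightly on the conservative side.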
misconception:
A 95% CI means that the sample means of replications (retests) have a 95% chance of falling within the interval.
truth:
This would be true only if the initial sample mean landed exactly on the population mean. On average, the probability that the first 95% CI captures the next sample mean is about 83% (Cumming, Williams, & Fidler, 2004). A quick simulation of this result is sketched below.
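A simulation sketch of that ~83% figure; the population mean, SD, sample size, and number of replications are arbitrary illustrative values, and the CI uses the normal approximation with the sample SD:

```python
# Sketch: how often does the mean of a replication fall inside the
# original sample's 95% CI? (cf. Cumming, Williams, & Fidler, 2004)
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 100.0, 15.0, 30, 100_000    # arbitrary illustrative values

original = rng.normal(mu, sigma, (reps, n))      # initial samples
replication = rng.normal(mu, sigma, (reps, n))   # independent retests

m1 = original.mean(axis=1)
m2 = replication.mean(axis=1)
half_width = 1.96 * original.std(axis=1, ddof=1) / np.sqrt(n)  # 95% CI half-width

capture_rate = np.mean(np.abs(m2 - m1) <= half_width)
print(capture_rate)   # typically around 0.83, not 0.95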
misconception:
Large correlations in small samples mean the effect has to be real, because it was detected even though the sample size is so small.
truth:
Just as the chance of missing a true effect (a Type II error) is higher in small samples, the effect size of the effects that are detected is overestimated in small samples (Button et al., 2013). Only large effects can reach significance when the sample is small, so a large observed correlation does not mean the variables are more strongly related; it means the correlation coefficient fluctuates more in small samples, and the estimates that clear the significance threshold are inflated. A simulation demonstrating this is sketched below.
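A sketch of such a simulation, assuming a modest true population correlation of .2 under a bivariate normal model and comparing a small sample (n = 15) with a larger one (n = 200); the true correlation, sample sizes, and replication count are all arbitrary illustrative choices:

```python
# Sketch: among results that reach p < .05, the estimated correlation is
# inflated when the sample is small (cf. Button et al., 2013).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_r, reps = 0.2, 5_000                 # arbitrary illustrative values
cov = [[1.0, true_r], [true_r, 1.0]]      # bivariate normal with correlation .2

for n in (15, 200):
    significant_rs = []
    for _ in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        r, p = stats.pearsonr(x, y)
        if p < 0.05:
            significant_rs.append(r)
    print(f"n = {n}: {len(significant_rs) / reps:.0%} of runs significant, "
          f"mean significant r = {np.mean(significant_rs):.2f} "
          f"(true r = {true_r})")
```

With n = 15, only a small fraction of runs reach significance and the significant correlations average far above the true .2; with n = 200, most runs are significant and the significant estimates sit close to .2.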
Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, & Munafò MR. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376. PMID: 23571845.
Cumming G & Finch S. (2001). A primer on the understanding, use and calculation of confidence intervals based on central and noncentral distributions. Educational and Psychological Measurement, 61, 530–572.
Cumming G & Finch S. (2005). Inference by eye: Confidence intervals, and how to read pictures of data. American Psychologist, 60, 170–180.
Cumming G, Williams J, & Fidler F. (2004). Replication and researchers’ understanding of confidence intervals and standard error bars. Understanding Statistics, 3, 299–311.