MRES Top 50-ish

We compiled a list of the "Top" readings that form the basis of our interests. You may have other noteworthy additions, but these are ours and ours alone. For those interested in joining MRES, we encourage you to work through whatever remains unread in the list below; those not interested in MRES may do the same, and we hope everyone finds these resources enlightening. Thanks for reading. Now, to our list:

General Science (N=9)

  1. Campbell, D. T. (1960). Blind variation and selective retention in creative thought as in other knowledge processes. Psychological Review, 67(6), 380.
  2. Chamberlin, T.C. (1965). The method of multiple working hypotheses. Science, 148, 754-759.
  3. Cronbach, Lee J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671-684.
  4. Einhorn, H.J., & Hogarth, R.M. (1986). Judging probable cause. Psychological Bulletin, 99(1), 3-19.
  5. Meehl, P. E. (1973). Why I do not attend case conferences. Psychodiagnosis: selected papers, 225-302.
  6. Pearl, J. (2009). Causal inference in statistics: An overview. Statistics surveys, 3, 96-146.
  7. Platt, J.R. (1964). Strong inference. Science, 146(3642), 347-352.
  8. Scriven, M. (1976). Maximizing the power of causal investigations: The modus operandi method. Evaluation studies review annual, 1, 101-118.
  9. White, P.A. (1990). Ideas about causation in philosophy and psychology. Psychological Bulletin, 108(1), 3-18.

Measurement (N=10)

  1. Blanton, H., & Jaccard, J. (2006). Arbitrary metrics in psychology. American Psychologist, 61(1), 27.
  2. Bond, T. G., & Fox, C. M. (2013). Applying the Rasch model: Fundamental measurement in the human sciences. Psychology Press.
  3. Campbell, D. T. (1955). The informant in quantitative research. American Journal of Sociology, 60(4), 339-342.
  4. Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81.
  5. Chapman, L. J., & Chapman, J. P. (1978). The measurement of differential deficit. Journal of Psychiatric Research, 14(1-4), 303-311.
  6. Cronbach, Lee J. & Meehl, Paul E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.
  7. MacCorquodale, Kenneth & Meehl, Paul E. (1948). On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55, 95-107.
  8. Sechrest, L., McKnight, P., & McKnight, K. (1996). Calibration of measures for psychotherapy outcome studies. American Psychologist, 51(10), 1065.
  9. Thompson, B., & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reliable. Educational and Psychological Measurement, 60(2), 174-195.
  10. Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1999). Unobtrusive measures (Vol. 2). Sage Publications.

Research Methodology (N=12)

  1. Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54(4), 297.
  2. Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Ravenio Books.
  3. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
  4. Diamond, J. (1989). How cats survive falls from New York skyscrapers. Discover, No. 8, 20-26.
  5. Goodman, S.N., & Royall, R. (1988). Evidence and scientific research. American Journal of Public Health, 78, 1568-1574.
  6. Gusfield, J. (1976). The literary rhetoric of science: Comedy and pathos in drinking driver research. American Sociological Review, 41(1), 16-34.
  7. Houts, A.C., Cook, T.D., & Shadish Jr., W.R. (1986). The person-situation debate: A critical multiplist perspective. Journal of Personality, 54(1), 52-105.
  8. Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688.
  9. Shadish, W.R., Jr. (1986). Planned critical multiplism: Some elaborations. Behavioral Assessment, 8, 75-103.
  10. Temoshok, L. (1989). On cause, prediction, and related conundrums: Logical and methodological musings. In L. Sechrest, H. Freeman, & A. Mulley (Eds.), Health services research methods: A focus on AIDS. Rockville, MD: National Center for Health Services Research and Technology Assessment.
  11. Thistlethwaite, D. L., & Campbell, D. T. (1960). Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology, 51(6), 309.
  12. Toedter, L.J., Lasker, J.N., & Campbell, D.T. (1990). The comparison group problem in bereavement studies and the retrospective pretest. Evaluation Review, 14, 75-90.

(Program & Policy) Evaluation (N=7)

  1. Campbell, D. T. (1969). Reforms as experiments. American psychologist, 24(4), 409.
  2. Chen, H. T. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Sage.
  3. Chen, H.T., & Rossi, P.H. (1987). The theory-driven approach to validity. Evaluation and Program Planning, 10, 95-103.
  4. Lipsey, M.W. (1990). Theory as method: Small theories of treatments. In L. Sechrest, E. Perrin, & J. Bunker (Eds.), Research methodology: Strengthening causal interpretations of nonexperimental data (pp. 33-50). Washington, D.C.: Dept. of Health and Human Services.
  5. Sechrest, L., & Yeaton, W.E. (1981). Assessing the effectiveness of social programs: Methodological and conceptual issues. New Directions for Program Evaluation, 9, 41-56.
  6. Sechrest, L., West, S.G., Phillips, M.A., Redner, R., & Yeaton, W.E. (1979). Some neglected problems in evaluation research: Strength and integrity of treatments. In L. Sechrest et al. (Eds.), Evaluation studies review annual, Vol. IV (pp. 15-35). Beverly Hills: Sage Publications.
  7. Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Sage.

Statistics (Data Analysis) (N=14)

  1. Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304.
  2. Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155.
  3. Cohen, J. (2016). The earth is round (p < .05). In What if there were no significance tests? (pp. 69-82). Routledge.
  4. Ehrenberg, A. S. C. (1977). Rudiments of numeracy. Journal of the Royal Statistical Society: Series A (General), 140(3), 277-297.
  5. Freedman, D. A. (1991). Statistical models and shoe leather. Sociological Methodology, 291-313.
  6. Guttman, L. (1977). What is not what in statistics. The Statistician, 26(2), 81-107.
  7. Holland, P.W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945-970.
  8. Meehl, P.E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, Monograph Supplement 1, 66, 195-244.
  9. Richards Jr., J.M. (1982). Standardized versus unstandardized regression weights. Applied Psychological Measurement, 6(2), 201-212.
  10. Rodgers, J.L., & Nicewander, W.A. (1988). Thirteen ways to look at the correlation coefficient. The American Statistician, 42, 59-66.
  11. Rogosa, D., Brandt, D., & Zimowski, M. (1982). A growth-curve approach to the measurement of change. Psychological Bulletin, 92, 726-748.
  12. Wainer, H. (1976). Estimating coefficients in linear models: It don't make no nevermind. Psychological Bulletin, 83(2), 213.
  13. Wainer, H. (1984). How to display data badly. The American Statistician, 38(2), 137-147.
  14. Wainer, H., & Thissen, D. (1981). Graphical data analysis. Annual Review of Psychology, 32(1), 191-241.

Worthy of more attention

Ho, Y. S., & Hartley, J. (2016). Classic articles in psychology in the Science Citation Index Expanded: a bibliometric analysis. British Journal of Psychology, 107(4), 768-780.