Dr. Shear’s research examines the development, interpretation, and use of educational assessments, with a focus on large-scale assessments. These efforts aim to support more appropriate and fairer uses of tests and assessments in education. To date, this research has focused primarily on statistical and methodological issues in the construction and analysis of standardized test scores, informed by questions arising at the intersection of education policy and psychometrics. One strand of this work has contributed to the analysis of aggregate proficiency data and is integral to the construction of the Stanford Education Data Archive (SEDA), a publicly available national database created to study educational opportunity in the United States. A second strand has evaluated novel approaches for modeling student responses on tests and investigated the properties of statistical models used in state accountability systems. A third strand studies the ways test scores are used in educational policy and research.
[Copies of published articles and chapters are available upon request.]
Shear, B. R., & Briggs, D. C. (2024). Measurement issues in causal inference. Asia Pacific Education Review. https://doi.org/10.1007/s12564-024-09942-9 [Full text link: https://rdcu.be/dALaB]
Shear, B. R. (2023). Gender bias in test item formats: Evidence from PISA 2009, 2012, and 2015 math and reading tests. Journal of Educational Measurement, 60(4), 676–696. https://doi.org/10.1111/jedm.12372
Shear, B. R. (2023). Causal inference and COVID: Contrasting methods for evaluating pandemic impacts using state assessments. Educational Measurement: Issues and Practice, 42(1), 99–109. https://doi.org/10.1111/emip.12540
Shear, B. R., & Reardon, S. F. (2021). Using pooled heteroskedastic ordered probit models to improve small-sample estimates of latent test score distributions. Journal of Educational and Behavioral Statistics, 46(1), 3–33. https://doi.org/10.3102/1076998620922919
Fahle, E. M., Shear, B. R., & Shores, K. A. (2019). Assessment for monitoring of education systems: The U.S. example. The ANNALS of the American Academy of Political and Social Science, 683(1), 58–74. https://doi.org/10.1177/0002716219841014
Stout, W., Henson, R., DiBello, L., & Shear, B. R. (2019). The Reparameterized Unified Model system: A diagnostic assessment modeling approach. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models (pp. 47–79). Springer. https://doi.org/10.1007/978-3-030-05584-4_3
Shear, B. R. (2018). Using hierarchical logistic regression to study DIF and DIF variance in multilevel data. Journal of Educational Measurement, 55(4), 513–542. https://doi.org/10.1111/jedm.12190
Shear, B. R., Nordstokke, D. W., & Zumbo, B. D. (2018). A note on using the nonparametric Levene test when population means are unequal. Practical Assessment, Research & Evaluation, 23(13), 1–11. https://doi.org/10.7275/bwvg-d091
Reardon, S. F., Shear, B. R., Castellano, K. E., & Ho, A. D. (2017). Using heteroskedastic ordered probit models to recover moments of continuous test score distributions from coarsened data. Journal of Educational and Behavioral Statistics, 42(1), 3–45. https://doi.org/10.3102/1076998616666279
Shear, B. R., & Roussos, L. A. (2017). Validating a distractor-driven geometry test using a generalized diagnostic classification model. In B. D. Zumbo & A. M. Hubley (Eds.), Understanding and investigating response processes in validation research (Vol. 69, pp. 277–304). Springer International Publishing. https://doi.org/10.1007/978-3-319-56129-5_15
Shear, B. R., & Zumbo, B. D. (2013). False positives in multiple regression: Unanticipated consequences of measurement error in the predictor variables. Educational and Psychological Measurement, 73(5), 733–756. https://doi.org/10.1177/0013164413487738