Instructors typically map noisy exam scores on $[0,100]$ into coarse letter grades A, \ldots, E. This coarsening acts as a noise filter for transcript readers, but it inevitably risks misclassification. We treat grade assignment as a statistical classification problem: mapping observed scores to latent student types. We introduce a novel grading system, \emph{Bayesian Adaptive Grading} (BAG). BAG places priors on the distribution of student types and on exam difficulty, and propagates parameter uncertainty into final grade assignments through Bayes-optimal decisions under standard loss functions.
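As a minimal sketch of this decision rule (the notation here is illustrative: $t_i$ denotes student $i$'s latent type, $x_{1:n}$ the observed scores, $\theta$ the exam-level parameters, and $L(g,t)$ the loss from awarding grade $g$ to a type-$t$ student), BAG assigns
\[
\hat{g}_i \;=\; \arg\min_{g} \, \mathbb{E}\!\left[ L(g, t_i) \mid x_{1:n} \right]
\;=\; \arg\min_{g} \sum_{t} L(g,t) \int p(t \mid x_i, \theta)\, p(\theta \mid x_{1:n}) \, d\theta,
\]
so that uncertainty about $\theta$ is averaged over rather than replaced by a point estimate.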
We derive optimal cutoffs for Fixed-Scale Grading (FSG), a commonly used benchmark in which instructors pre-commit to numerical cutoffs, and we compare BAG against both FSG and curve-based grading in Monte Carlo simulations. Across a wide range of misspecification regimes and robustness checks, BAG substantially outperforms both alternatives: it reduces mean misclassification by roughly 55--72\% relative to FSG and 45--68\% relative to curving, delivers median misclassification between 0\% and about 3\%, and achieves median error reductions of roughly 66--100\% relative to both benchmarks. Curving performs particularly poorly because it fails to model the mixture structure of student types.
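To give intuition for what an optimal FSG cutoff looks like, consider one illustrative special case (the notation, and the simplification that only adjacent types compete at each boundary, are purely for illustration): if type-$k$ students produce scores with prior-predictive density $f_k$ and prior weight $\pi_k$, then the cutoff $c_k$ separating the grades for types $k$ and $k+1$ that minimizes expected misclassification under zero-one loss satisfies
\[
\pi_k f_k(c_k) \;=\; \pi_{k+1} f_{k+1}(c_k),
\]
i.e., the boundary sits where the two neighboring types are equally plausible a priori. FSG commits to such cutoffs before seeing the class, whereas BAG updates them as scores arrive.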
Finally, we demonstrate BAG on real classroom data and show how informative priors on the grade distribution can encode institutional targets while still permitting data-justified deviations. BAG thus improves classification accuracy, mitigates grade inflation, and protects students against instructor error in a way that is transparent and statistically rigorous.
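One natural construction for such a prior (a sketch under an assumed Dirichlet parameterization of the type proportions): if the institutional target grade distribution is $q = (q_A, \ldots, q_E)$, place a Dirichlet prior on the class's type proportions $\pi$ with base measure $q$ and concentration $\kappa > 0$,
\[
\pi \;\sim\; \mathrm{Dirichlet}(\kappa q_A, \ldots, \kappa q_E),
\]
so the prior mean equals the target $q$ while $\kappa$ governs how much evidence the data must supply before the realized grade distribution is allowed to deviate from it.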