The Problem with Typical Assessment Practices 

by Joshua A. Taton, Ph.D. | September 6, 2023 | 3 min read

In the image to the left, consider the question, the student's response, and the teacher's evaluation. This image, to me, represents many of the problems with assessments that are typical in today's educational landscape.

The image appeared recently on my social media feed. I don't know its origin, but I obtained it from @teacherman91 with the caption "Full or partial credit?" (For the record, I would grant "full credit.")

I wish that I had more information about the image, because so many questions come to mind.

Then again, it doesn't really matter.

The purpose behind the question is clear. The curriculum writer obviously intended for students to think not only about fractional amounts (portions of totals) but also about absolute quantities (overall magnitudes); otherwise, the question would have no purpose.
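
Since the question itself isn't legible in the image, here is a purely hypothetical illustration of that distinction (the pizza-slice numbers are my own assumptions, not anything from the original problem): a larger fraction of a smaller whole can still be a smaller absolute amount.

% Hypothetical numbers: an 8-slice small pizza vs. an 18-slice large pizza.
% A bigger fraction of a smaller whole can be a smaller absolute amount.
\[
  \frac{1}{2} > \frac{1}{3},
  \qquad\text{yet}\qquad
  \frac{1}{2}\times 8\ \text{slices} = 4\ \text{slices}
  \;<\;
  \frac{1}{3}\times 18\ \text{slices} = 6\ \text{slices}.
\]

In other words, comparing the fractions alone settles nothing until you also know the sizes of the wholes.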

Further, the curriculum writer expected students to provide full explanations—justifying their reasoning—and therefore provided enough space for them to do so.

I could certainly spend time thinking about how and why the teacher appears to lack sufficient pedagogical content knowledge (PCK). PCK is the awareness of mathematics in the context of instruction—namely, here, the purpose of this question, the relationship of the question to the lesson and unit, the common mathematical misunderstandings exhibited by students, etc. Sufficient PCK should have led the teacher to conclude that this question wasn't asking, purely, about fractional amounts.

Or, perhaps, I could speculate on the teacher's apparent gap in subject-matter knowledge (SMK). SMK involves common (and valid) understanding of mathematics concepts, skills, and applications that a non-teacher would be expected to have. In this case, it's possible that the teacher doesn't have a sufficient understanding of fractional amounts (relative magnitudes) and their connection to proportionality or absolute quantities.

It's easy, perhaps, to criticize the teacher. Instead, though, I want to criticize the system. 

I want to discuss why typical assessment practices—magnified, in my view, by the scores of multiple-choice "videogame-style" software platforms that exist on the market and that litter school buildings—can undercut what appears to be a wonderful opportunity for concept-based learning.

We've come to understand mathematics—both as a discipline and in its teaching and learning—as something that involves "right" or "wrong" only. It's an either-or proposition.

Note the big "X" on the left side of the student's response (although it's nice to see ink in a color other than the typical red, used here). With rare exceptions*, today's math software platforms, intended to drill students into providing correct answers as preparation for standardized tests, simply feed one-size-fits-all, click-the-answer questions over and over and over.

We've also come to expect teachers to hold all the knowledge and to be evaluators (determining "correctness") of students' work and thinking.

And don't even get me started on the uselessness and unfairness of broad-brush letter grades and commonly used grading practices. (If you're curious, I'll simply redirect you to the research, summarized well by Joe Feldman, because debating the merits of the traditional letter-grade system won't leave anyone feeling positive afterwards.)

Alternatively, though, what if we encouraged teachers to be curious interlocutors?

The immediate response, here, is that the teacher—for whom I have sympathy, certainly having committed similar errors in the past until I knew better—feels compelled to explain the so-called "correct" approach to the student. In so doing, the teacher a) invalidates the student's own thinking and b) denies the student any chance to confront the reasonableness or unreasonableness of that thinking.

And, in so doing—over the course of years and years of schooling—we create a passive constituency, always looking to some "authority figure" (teacher, professor, supervisor, politician) for the "truth."

Rather than knowing the truth ourselves.

My guess is that this student, faced with a number of other in-school experiences that look and feel like this one, will no longer be inclined to think outside the box or to make sense of content themselves.

But, having had the curiosity drummed out of them, they will just give the teacher what they think the teacher wants. They will learn the code.

Here's another way this could have gone: The teacher could have simply written (without the glaring "X"), "Tell me more." Or "Why do you think that way?" Or, simply, "Why?"

Maybe the student's response could have been a "featured answer" in the next day's lesson, one that would have led to a productive and conceptually oriented class discussion. The students and teacher could have explored the differences between fractional amounts (relative to the size of a whole) and absolute amounts (total, overall quantities).

Such a discussion would have illuminated many relevant ideas about fractions: how they are useful, and also when their use might be inappropriate.

But without such a dialogue, and without professional learning opportunities and in-school cultures that place teachers on a more level playing field with their students (as co-learners, as question-askers, as guides-on-the-side), we are left with an unsettled feeling.

We feel uncomfortable about the teacher and the instruction being provided. Really, though, we should empathize with the inevitable pain that will be felt by this student. 

Whose creativity and insights have just been stifled.

I welcome your thoughts.

* In my view, three noteworthy counter-examples to the low-inference, low-rigor mathematics software programs that riddle the educational market are Desmos Classroom, ST Math, and Julia Robinson Math Festival's (JRMF's) puzzle pages. There are a handful of other examples, too, but the overall supply of conceptually oriented online learning software is sparse.