Let's Talk about Assessment (and Bruno)

By: John Koolage, Professor of Philosophy and Director of General Education

Wow! Still reading? Assessment is not a concept that really draws, or holds, attention. I have the, perhaps reasonable, fear of being the person who provided the dullest blog post ever, but there is a crisis in assessment, and it is a direct result of a lack of clarity about the term. While the side eye is certainly warranted by some uses of the term, it is not in others. Here, I hope to make some distinctions that will increase our understanding of assessment and its role in supporting student learning – something we all care about.

One of the greatest challenges in developing adequate assessment is that the term is ambiguous. While some conceptions of the term correctly draw our ire, others should not. A simplified model of teaching can help us zero in on a handful of fruitful concepts of assessment. This model proposes four constitutive elements of teaching: (1) how we would like students to be different when they are done with their time with us; (2) what we will do together to help students be different in this way; (3) how we will know if they are different in the way we want when they are done with their time with us; and (4) the context in which (1), (2), and (3) occur and are understood.

For example: I would like my students to (1) remember a handful of philosophic concepts. So, I will (2) deliver a lecture on these concepts (exciting, right?). Then, I will (3) ask them a couple of questions that require them to recite a few details of these concepts, and I will also observe whether they are taking notes and paying attention throughout the lecture. This model can be extended to all learning activities (a common name for (2)) that take place in a moment, as part of a class period, the whole class period, multiple weeks of a course, a whole course, linked courses, whole programs, all the way to completed degrees. My focus here is the part of the model identified by (3) – how will we know if learners are different in the way we intend by teaching them – namely, assessment.

Assessment can serve many purposes. These purposes can be institutional, such as grading, or folk psychological, such as predicting student engagement in upcoming classes, but my focus here is student learning. When assessment is leveraged for the purpose of student learning, it is aimed at improving our learning activities (tests, journal assignments, papers, group activities, lectures, reading lists, etc.) – this is sometimes called "closing the loop." When we close the loop, we improve students' opportunities to engage in meaningful growth. When one has the feeling that a particular activity needs to be tweaked to better target the learning in question, one is engaged in assessment.

Changing a learning activity in order to better target a particular skill, content knowledge, disposition, or anything else, is closing the loop. Loop closing is iterative; this is known as "continuous improvement" in teaching and learning circles. Understanding the purpose of assessment is critical to the decisions we make about it – it is difficult to see what could be upsetting about taking stock of our own role, and refining and improving it, for the purpose of increased student learning.

Assessment of student learning comes in many flavors. Here, I want to draw two distinctions: Direct vs. Indirect Assessment and Formal vs. Informal Assessment.

The direct versus indirect distinction has to do with the “distance” one is from the demonstration of learning (sometimes called the artifact). Consider a journal assignment wherein students reflect on their use of course content in their own lives. The journal they produce is the artifact. If we use this artifact to determine whether the student learned some skill (say, journal writing), we would be doing direct assessment – learning is assessed, directly, by way of the journal.

Now, if we were to ask the student whether they believed they learned the skill demonstrated in the journal assignment, we would be doing indirect assessment. Here, the "measured" learning is one step removed from the artifact (the demonstration of learning). While the philosopher in me worries about this distinction, it draws a helpful difference between, say, a survey of students' perceptions of learning (indirect) and assessments of performance via artifacts (direct). Current best practices in assessment favor direct assessment.

The second distinction is formal versus informal assessment. Informal assessments involve "eyeballing" student learning. If you have ever looked to see whether your students are taking notes in class, you have done an informal assessment of student learning. In formal assessments, you develop the measurement of learning (usually a rubric) in advance of collecting the artifact. Unsurprisingly, it is formal assessment that finds its way into the scholarship. That said, both formal and informal assessments can be extremely helpful in the pursuit of continuous improvement.

Alright, kudos to you for reading this far! Edge of your seat, I bet. Permit me a few more observations. First, the artifacts that we develop for assessing student learning can also meet other goals: grading, for example. Two for the price of one! We can collect data that enables us to continuously improve learning activities while we accomplish other important teaching tasks.

Second, there are a number of psychological biases, including but not limited to confirmation bias, the self-serving attributional bias, and naïve realism, which can make assessment a real challenge. While bias is (probably) inevitable, it can be reduced, in the service of continuous improvement, by iterating assessment and working as a team (hopefully one designed to ensure diversity and inclusion). Additionally, methodological and theoretical pluralism – measuring in different ways, with different assumptions, or within different theoretical frameworks – can help too. That said, "eyeballing" student learning can still lead to great gains in terms of the continuous improvement of learning activities.

Assessment is a vital component of all good teaching. While I strongly believe that formal assessment, conducted by groups, using a variety of artifacts, is the key to programmatic assessment, I didn’t make that case here. Here, I only sought to suggest that you are probably in favor of assessment, under at least some descriptions. Cheers!
