Skills-Based Grading

Materials:
Building blocks of an SBG course

Richard Stockwell and I, with statistical consultation from Douglas Ezra Morrison, show how semantics instructors can implement skills-based grading, a modern evaluation system shown to lower student stress, offer more equitable evaluation, and provide students with a lasting understanding of the material. We provide quantitative grade data and qualitative responses from students who were evaluated using skills-based grading in our semantics course.

Grading systems that focus on positive feedback can lower students’ stress and change how they approach the material. With skills-based grading (SBG), a more equitable alternative to traditional grading, students can only affect their grade positively: no missed assignment or incorrect answer can permanently damage it. In the summer of 2020, we implemented the first instance of SBG in a semantics class, inspired by Zuraw et al.’s (2019) implementation for phonology. Our methodology and results, including grade progression data and student survey responses, were presented in January 2021 at the Annual Meeting of the Linguistic Society of America, where the work won the Student Abstract Award, and in May 2021 in an invited talk at the first Semantics and Linguistic Theory (SALT) Workshop on Inclusive Teaching in Semantics.

The SBG evaluation system requires that students demonstrate complete mastery over each “skill” taught in a course, but unlike traditional grading, SBG does not require that mastery be gained along a uniform timeline, nor does it penalize incorrect attempts at demonstrating a skill. Rather, students are given a list of skills that they should aim to master by the end of the course. Every exercise included in the graded materials—quizzes, assignments, and tests—is an opportunity to demonstrate one or more skills, which are clearly marked on each exercise. Each attempt at a skill is graded as “not yet proficient,” “approaching proficiency,” or “proficient,” only the last of which counts toward the final grade. Until the final exam, there is always another chance to demonstrate each skill.
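The grading rule above can be sketched as a small bookkeeping routine. The data model and function names here are hypothetical illustrations of the logic, not the course’s actual tooling: each skill accumulates attempts, a skill counts only once it has reached “proficient,” and earlier failed or missed attempts never subtract from the grade.

```python
# Hypothetical sketch of SBG bookkeeping (not the authors' actual implementation).
PROFICIENCY_LEVELS = ["not yet proficient", "approaching proficiency", "proficient"]

def record_attempt(gradebook, student, skill, level):
    """Record one attempt at a skill; later attempts can only help, never hurt."""
    assert level in PROFICIENCY_LEVELS
    gradebook.setdefault(student, {}).setdefault(skill, []).append(level)

def proficient_skills(gradebook, student):
    """A skill counts toward the grade iff ANY attempt reached 'proficient'."""
    return {skill for skill, attempts in gradebook.get(student, {}).items()
            if "proficient" in attempts}

def final_grade(gradebook, student, all_skills):
    """Fraction of the course's skills mastered by the end of the term."""
    return len(proficient_skills(gradebook, student)) / len(all_skills)

# Example: an early unsuccessful attempt does no permanent damage once the
# skill is later demonstrated.
gb = {}
record_attempt(gb, "ann", "identify an implicature", "approaching proficiency")
record_attempt(gb, "ann", "identify an implicature", "proficient")
record_attempt(gb, "ann", "explain a definition's laxity", "proficient")
print(final_grade(gb, "ann", ["identify an implicature",
                              "explain a definition's laxity",
                              "compute truth conditions"]))  # → 2/3 ≈ 0.667
```

Note that the grade is a pure function of which skills ever reached “proficient”: nothing is averaged over attempts, which is what makes a missed assignment or a failed first try cost nothing in the end.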

Due to the regular feedback inherent to SBG, students are more aware of their progress, as well as the objectives they should be pursuing (Buckmiller et al. 2017). This awareness helps them focus on the skills that they still need to master, instead of repeating skills they have already mastered. Having multiple opportunities to demonstrate each skill incentivizes re-attempting skills that were initially missed and lowers the stress, fear, and time pressure of any given attempt (Buckmiller et al. 2017), which benefits all students, but especially those dealing with stressors outside of their academic life. Because SBG does not require that students become proficient in the course material along a strict timeline, it does not penalize students for missed assignments or absences and affords second chances to students who encounter setbacks throughout the term (for instance, missing assignments due to medical, personal, or familial issues). The system is therefore fairer to students facing structural and institutional disadvantages, such as first-generation students, transfer students, students with disabilities, and minority groups, who may not have as strong a background in the prerequisite material and may be more likely to experience external stressors that lead to temporary setbacks.

Our application of SBG to semantics is noteworthy, as it includes skills that are abstract, if not outright philosophical, in addition to algorithmic, logic-based skills like those one might find in an introductory phonology course. Semantics courses often teach abstract topics like possible worlds, vagueness, and whether it is possible to adequately define a content word, not to mention introductory pragmatics, where students’ personal idiolects and life experiences create drastic differences in the calculation of implicatures. The design of our course breaks each of these abstract and subjective tasks into gradable skills, like ‘explain how a definition is too strict/lax’ and ‘identify an implicature and the associated maxim,’ which allow for a variety of responses while remaining compatible with grading for proficiency (O’Leary & Stockwell 2021). We additionally evaluated cognitive skills such as forming hypotheses, finding supporting evidence, and providing clear reasoning. These skills were clearly labeled on high-level exercises, to help students become more aware of the steps necessary for practicing scientific reasoning and presenting academic argumentation. This awareness can be carried forward beneficially to advanced courses.



Questions? Feel free to email me at mauraoleary@g.ucla.edu.