After identifying outcomes and defining levels of proficiency, a system must establish what evidence will look like in the new assessment, grading and reporting structure. "For grades to be honest, accurate, meaningful, and fair, they must be based on reliable and valid assessment evidence. You can't have a quality grading and reporting system without high-quality assessments" (Guskey, 2020, p. 182). Assessment, in this context, is defined as the opportunity to provide evidence on the outcomes and determine the learner's current level of proficiency.
The key questions include:
What will assessment look like in this new system?
What types of assessment or evidence will be “counted” when thinking about summarizing and reporting out progress?
In a traditionally graded system, educators give assignments, learners complete them and then receive points or a percentage on that assignment. Anything in this system can count towards a grade, including homework, quizzes, classwork, tests, projects, etc. While these same tools might be used in Standards- and Competency-Based systems, they function differently. Ultimately, those scores or opportunities for evidence are just practice, if they are scored at all. The key is that learners are given the opportunity to practice the expected outcomes, receive feedback to support their development and then are summatively assessed when they are ready to demonstrate proficiency.
The distinction between formative and summative assessment is one educators have been making for a long time. However, in a Competency-Based approach, formative assessment becomes the focus of learning, even though summative assessment is the only thing "counting" in the gradebook. For both the learner and the educator to know where the learner stands in their proficiency, formative assessment is used to provide targeted feedback and support learner growth. It's important to emphasize that formative assessment becomes even more, not less, central in this structure. All learning and work that learners do should be formative, with learners receiving feedback that supports them in reaching proficiency on the learning outcomes.
At Roseville City Schools, educators have spent significant time defining the distinction between summative and formative assessments. Formative assessment is tracked only as another set of data to refer back to when supporting a learner. The only scores that ultimately factor into the mathematical summary of progress on the Report Card are those educators mark as summative. This is also the approach taken at Embark Education and is advocated for by Joe Feldman in his book "Grading for Equity" (Feldman, 2019).
There is discussion in the field about moving away from the traditional distinction between formative and summative assessments, since summative assessments are often thought of as the test at the end of a unit. In a Competency-Based approach, this is far from the truth. Summative assessments, just like formative ones, can include observation, conferencing, performance assessments and more. The key question is whether learners are ready to demonstrate proficiency.
If learners have not been given the opportunity to practice a new skill, we do not want to punish them by “counting” scores such as early quizzes or homework. However, we also don’t want to require learners to practice something they are already proficient in. If learners are demonstrating a level of proficiency during what has been deemed “practice” or “formative assessment,” the educator in most Standards- and Competency-Based approaches has the discretion to use professional judgment and treat that evidence of learning as summative. Damian Cooper, in his book Redefining Fair, defined professional judgment as “decisions made by educators, in light of experience, and with reference to shared public standards and established policies and guidelines” (Cooper, 2011, p. 3).
In a Competency-Based assessment, grading and reporting structure, an assessment is an opportunity for demonstration and evidence. Learners demonstrate their proficiency on learning outcomes, either in a pre-designed assessment from the educator or in a more emergent way, such as an authentic application in an internship or a conversation. The word assessment becomes flexible and no longer just means a multiple-choice test or a quiz.
Some systems choose to create shared assessments across a school or district so there is consistency in how outcomes are assessed. You can see examples of shared assessments from Red Bridge and Roseville City Schools. Other systems anchor on the shared expectations created by the proficiency levels and rubrics, from which educators design assessments. These include project-based learning experiences and other authentic performance assessments that give learners opportunities to demonstrate their level of proficiency on given learning outcomes. This is more common in a Competency-Based approach because of the interdisciplinary and flexible nature of its outcomes. It’s more difficult to have a shared assessment for a competency like Set Goals than for a standard in math. This requires an important mindset shift, especially in a Competency-Based approach: thinking about assessment as an opportunity to demonstrate learning, not a gotcha moment where learners need to prove something to a teacher.
Assessments can still look like traditional quizzes or tests, but they can also look like observation of a collaborative moment among a small team, a 1:1 conversation with a learner, a written piece of work, a video or even a spreadsheet. This is also aligned with the principles of Universal Design for Learning (2023) because the focus is on demonstrating proficiency on the learning outcome, not on what the assessment looks like. If the intended learning outcome is about reflecting on the process of learning, learners can do that through an audio recording, a 1:1 conversation, or a written or artistic reflection. This flexibility allows all learners to equitably demonstrate evidence of their proficiency level without being held back by barriers in the assessment itself. As long as learning outcomes and proficiency levels (sometimes called rubrics) are clearly defined, assessments can be flexible opportunities for learners to demonstrate proficiency on the learning outcomes.
Whether or not a system designs shared assessments, it’s important to work with educators on what valid assessments look like and to use tuning protocols that support backward design, ensuring assessments accurately and validly measure the intended learning outcomes.
What counts as evidence is an important discussion to have. When looking at evidence to determine a final score, which pieces count? The International Big Picture Learning Credential (IBPLC) guides educators to use three sources of evidence, with approximately eight data points per source, when providing a proficiency-level assessment or score on an outcome. This is a high bar for evidence. A quiz or even a simple essay assignment would likely not meet these criteria. Instead, educators look at larger bodies of work, such as students’ capstone projects and senior thesis papers. Another way of defining the evidence that counts is to state what summative assessments mean in a system and that only those will be considered in a final assessment of a learner’s proficiency.
This discussion is a key component in designing a Competency-Based assessment, grading and reporting structure so that educators and learners are clear about what types of evidence will be sufficient for determining levels of proficiency. In general, these systems are not looking at practice work and are expanding the definition of assessment to include a wide range of possible pieces of evidence, including self-reflection and observation.