FAQs

Q: Why are these assessment changes happening?

Many of the major changes to our assessment practices occurred a number of years ago - specifically the use of achievement levels for academics and the separation of behaviours and achievement on progress reports. These changes were in response to Saskatchewan curriculum changes, where the renewed curricula were written using outcomes and indicators. The changes being implemented for the 2022-23 school year are all about increasing clarity.

Below are the changes to assessment that are new this year, and the rationale for the change:

  • Using terms for academic achievement levels instead of numbers

      • Numbers created the impression that academic achievement could be converted to a percentage grade (ex: level 3 of 4 = 75%) - this was not accurate, so terms communicate more clearly what is actually being reported.

      • In addition, levels now have more complete descriptions attached to them, which should help with clarity.

  • Differentiated learning behaviours for elementary and high school

      • Learning behaviours had previously been associated with the Broad Areas of Learning from the curriculum. These characteristics continue to be valued, but the behaviours are now defined separately for elementary and high school to increase the clarity of what is being communicated.

      • Behaviours have been written so that they are observable characteristics.

  • Using frequency terms for learning behaviour scores

      • Since the desired learning behaviours reported on are now all observable characteristics, the terms used to report on the learning behaviours are frequency based - meaning how often the characteristics are observed.

  • Streamlining progress reports

      • The digital tools available to educators have made more frequent communication with parents and families much more prevalent. As such, the progress report has been returned to its original purpose: to be a snapshot in time of progress to date. By moving more complete information to other communication avenues, the progress report can be a more efficient method of communicating progress.

Aside from those changes above, there are many items that are staying the same:

  • High school students continue to receive percentage grades on progress reports.

  • High school students continue to receive learning behaviour scores, attendance, and comments for each course on progress reports.

  • Elementary students continue to receive learning behaviour and attendance scores by homeroom. A general comment is also provided by homeroom.

  • High school students continue to receive four progress reports throughout the year, while elementary students continue to receive three progress reports throughout the year.

Q: Why are progress reports changing?

There are two main reasons progress reports are changing - partly out of necessity and partly in response to advancements in the communication tools available to schools.

First, progress reports are generated by the software that is used in PSSD. All school divisions in the province have moved to a new Student Data System, and because of this the previous software that PSSD used to communicate with parents (PowerSchool) is no longer available to us. PSSD is using Edsby to communicate with parents and generate progress reports. Since this is new software for us, the progress reports will change.

Second, there has been a recognition in recent years that progress reports have tried to communicate a lot of information that is now available to parents and families through other methods; these might include emails, digital portfolios, written notes and communicators, and, in the future, Edsby. With all these other options available to communicate more regularly, the progress report is being streamlined to fill the need of being a snapshot in time of progress to date. Should parents see something on the progress report they would like to know more about, there are now more ways than ever to get that information, and more will become available in the future.

Q: How are assessments related to outcomes?

Starting in 2010, curriculum renewal was undertaken in Saskatchewan. All the renewed curricula are now written with outcomes and indicators - statements that list what a student should know, understand, and be able to do by the end of the course. With this restructuring of the curriculum, reporting changed to follow suit. Instruction is now planned in relation to outcomes, student learning is measured against the outcomes of courses, and assessment matches this format. By assessing in relation to outcomes, there is coherence between instruction and assessment.

Q: I really like percentage grades - why can't we just use those?

Many parents grew up in an educational system where academic achievement was reported using percentage grades. Because of this, many of us have a level of comfort and familiarity with percentage grades. There may be some familiarity with proficiency-based scoring systems that are pass/fail (a driver's licence exam is a good example), or that are pass/fail but have several different levels of achievement (swimming lessons might be an example of this). But few will have gone through grades where academic achievement was communicated using different levels of how well a student met the outcomes. Because of this lack of familiarity, it is normal to feel uncomfortable and wonder why a change is necessary. It is quite reasonable to ask, "What benefit does this change provide, or what issue does it solve?"

The answer to that question is that by using different levels of achievement, the levels can be defined, whereas percentage grades include too many levels (101 levels in the 0-100% scale - and perhaps more if we include decimal points) to define them all with any meaning. To further understand this, it is important to understand the limitations that exist within the percentage system.

As just noted, the percentage scale includes a wide range of levels available to use for grading. If we are measuring situations that have only two possible outcomes, the percentage system works fine and can distinguish precisely between different performances. For example, if we were measuring the shooting accuracy of an athlete, then a percentage scale is a suitable measurement scale. For shooting, the object either goes in or it does not - there are only two possible outcomes. With enough shot attempts, we can get a good idea of the shooting ability of an athlete that would help us determine how they might perform. In this way, shooting percentage is information that helps us make decisions (we might want the athlete with a shooting percentage of 83% taking more shots than the athlete with a shooting percentage of 31%).

But the percentage scale is much less useful when measuring more complex attributes where we are required to make nuanced judgments about performance. In these situations, we are not deciding between only two possible outcomes. When we look at the curricular outcomes in Saskatchewan, these are written in such a way that there are multiple different ways to successfully meet the outcome or indicator. Teachers are routinely asked to make these types of nuanced determinations.

As an example, let us look at a grade 3 math outcome. Many people feel that math is very cut and dried and that percentages work well to report student success. The feeling is that if a student is asked to add and subtract, there is a right answer and a wrong answer, so reporting the number correct using a percentage grade should suffice. But the curriculum asks teachers to assess more than simply whether the answer is correct. The addition and subtraction outcome for grade 3 math is written below, as well as one of the indicators for this outcome:

Outcome:

Demonstrate understanding of addition of whole numbers with answers to 1000 and their corresponding subtractions (limited to 1, 2, and 3-digit numerals) including:

  • representing strategies for adding and subtracting concretely, pictorially, and symbolically

  • solving situational questions involving addition and subtraction

  • estimating using personal strategies for adding and subtracting

Indicator:

Create a situational question that involves either addition or subtraction and that has a given quantity as the solution.

To summarize what is asked above in the outcome:

  • students should be able to add and subtract numbers to 1000

  • be able to show they can do this using objects, drawing pictures, and using math symbols (ex: 82 - 43 = 39)

  • solve adding and subtracting questions that come from specific situations (such as word problems that have a context)

  • estimate the answers to addition and subtraction questions.

Furthermore, by looking at the indicator, students should be able to create a question that involves either addition or subtraction and equals a given amount. For example, the question given to a student might say, "Create an addition or subtraction question (or it could include both!) that equals 17." There are many different ways that a student could answer that question, some more complex than others. We might see answers like:

  • ??? = 17

  • 37 - 20 = 17

  • 7 + 10 = 17

  • 47 - 7 - 23 = 17

  • 4 + 8 + 2 + 3 = 17

  • 85 - 80 + 9 + 3 = 17

Looking at those examples, some show more complex thinking than others. Teachers are asked to assess the level of understanding demonstrated by those answers. The student who answered "??? = 17" does not yet possess the ability to create a question; this student may be able to answer questions where two numbers are given and an answer is asked for, but cannot yet create a question. The student who provides the answer of "7 + 10 = 17" does not show as detailed an understanding of addition and subtraction as the student who created the "85 - 80 + 9 + 3 = 17" answer. But if we used the percentage system, both would be marked correct, and both would receive 100% for this question.

This is one of the benefits of having a system that uses levels of proficiency to share student progress. Using defined levels, we can share more accurately what students' levels of thinking are in relation to the outcomes. For a more detailed explanation of why we use defined levels to assess, please refer to the "Why We Assess This Way" page on this site.

While many of us have a certain comfort level with percentage grades, this tool no longer matches the assessment task teachers are asked to perform, which is why we assess using defined levels. Like any unfamiliar system, it can take time to become comfortable. Our hope is that this website helps explain why this system is in use, and that with continued exposure, comfort and familiarity will grow to match what many of us experienced with the percentage-based system.

Q: What does the term "triangulation" mean?

In assessment circles, triangulation is a term that describes three formats to collect student evidence: conversations, observations, and products.

Conversations are the things that teachers hear students say. This can be in side-by-side conversations with teachers, or it might be in small or large group settings.

Observations are the things that teachers see students do. Again, this can be done individually or as part of a large group.

Products are the things that learners create. These have typically been projects, tests, assignments, artifacts, etc. When we have historically discussed assessment, products are what many people think of.

Considering triangulated evidence is important for two main reasons:

  1. Some outcomes in the Saskatchewan curriculum are written in such a way that one (or more) of the evidence formats is required - therefore teachers want to match the type of evidence to what is required by the outcome.

  2. Most outcomes in the Saskatchewan curriculum do not require a specific type of evidence - therefore teachers often accept evidence in different formats, depending on the strengths the student possesses.

Aside from this, teachers want to collect enough evidence in a variety of formats to ensure that the grades they determine are valid. Collecting different types of evidence helps increase validity. Note, though, that this does not mean a teacher must obtain a conversation, an observation, and a product from each student for each outcome.

Regardless of the format in which evidence is collected, teachers are looking to see how the evidence meets (or misses) what is asked for by the outcomes of the course. This is often done by defining success criteria for the outcome so that both students and teachers understand what successfully meeting the outcome looks like and sounds like, in various formats.

Below is a graphic that further highlights the components involved in triangulated evidence.

Q: What does the term "professional judgment" mean and how does it apply to assessment?

Regardless of the grading scheme that is used, teachers always use their expertise when determining how student evidence stacks up in relation to a learning goal - if this expertise is being used formatively (during the learning), it can determine next steps, and if being used summatively (after the learning), it is used to determine a level. Teachers train for many years to acquire this expertise and continue to deepen it as they grow in their teaching craft. When making a professional judgment about evidence, teachers look at the evidence available and evaluate it in relation to criteria to determine where the criteria are met and where they are not.


In addition to this, when determining a grade for student evidence it does not always come down to a simple mathematical average. Professional judgment means that teachers may apply a number of different factors to analyze collected evidence. These may include:


  1. Depth of Evidence

    • Not all evidence is as complex as other evidence. If a student demonstrates full understanding of a very simple concept, this rarely indicates mastery of the whole outcome. Teachers consider the level of thinking the collected evidence demonstrates, not simply the number of questions answered correctly or incorrectly.

  2. Triangulation

    • Teachers will collect evidence through the products students create, the conversations students engage in, and the observations that teachers make of students. Collected evidence should both meet the requirements of the outcome (the evidence for some outcomes is better met through conversation as opposed to products, for example), and should allow students to demonstrate the strengths they have (as appropriate). Teachers are constantly examining how different types of triangulated evidence may be appropriate to demonstrate meeting an outcome.

  3. Most Recent

    • We generally expect performance and evidence to improve as students progress through the year. As such, students should not be penalized for early attempts when their later attempts show they have achieved a higher level of proficiency. The Saskatchewan curriculum is written in terms of what students should know and be able to do by the end of the course, so if later evidence is stronger than earlier evidence, the earlier evidence can be disregarded - so long as the later evidence is of the same or greater depth (see "Depth of Evidence" above).

  4. Most Often

    • Sometimes student performance does not follow a nice positive linear progression (a line trending up). Instead, performance may be strong and then dip, which can make it difficult to determine a proficiency level/grade. In these cases, it may be useful to determine what level of proficiency the evidence has demonstrated most often. This gives a starting point for a level that can then be adjusted using the other considerations noted above for professional judgment.

Using the combination of the above four factors in conjunction with the criteria required for different outcomes is how teachers exercise professional judgment when determining grades.