# Rubrics


**A rubric or scoring key is required for every SITP assessment, no matter what type of assessment it is.**

These tools make it easier for you to evaluate student work, ensuring that grading is fair and consistent. They also make your grading process transparent to students, so that they know what is expected of them before they begin the work and understand why they received the grade they did. Finally, rubrics and scoring keys allow the SITP staff and classroom teachers to understand how students are being evaluated.

**What is a rubric?**

A rubric is a matrix that describes the expectations for an assignment by listing the criteria, or “what counts,” and describing levels of quality from excellent to poor.

**Rubric or scoring key or both?**

If your assessment requires students to provide answers that are either right or wrong (such as a multiple choice or short answer test), you need a scoring key that has the right answers on it.

If your assessment allows for a range of scores, you need a rubric that lists what is required to reach each level of scoring. For example, if students are being assessed on a final presentation, and your scoring scale is excellent, good, fair, or poor, your rubric will describe exactly what a student has to do for their presentation to be excellent, good, fair, or poor.

You may need __both__ a scoring key and a rubric if, for example, you are grading the students on a presentation AND on a test, or if your test includes a written answer or drawing that is worth more than one point.

**How to make a rubric**

Creating a rubric can seem overwhelming and confusing, but it doesn’t have to be! It simply requires some time and visualization.

**Step 1:** Decide what elements of the students’ work you want to evaluate. Your daily learning objectives are a good place to start: What have the students been learning and practicing throughout the week that they should be able to demonstrate in their final assessment? In addition, you should decide if you want to evaluate elements that are __not__ content-related; for example, spelling, grammar, or presentation skills.

Make a table with one row for each element, plus a header row at the top. Start with 5 columns in your table: one for the element names, plus four score columns (allowing for a maximum of 3 points per element). You can always add or delete columns as needed.

Let’s go back to the media example:

For this assessment, the museum educator might choose to evaluate the following:

**Step 2:** For each element, **decide what the best possible output would look like**. Again, refer to your learning objectives to remind yourself of what you are expecting students to know and be able to do.

**Step 3:** Work backwards from there to determine what lower levels of success would look like.

(The maximum number of points a student could get on this assignment is 12. To convert that into a percentage out of 100, divide the number of points received by the maximum possible points, then multiply by 100. For example, if a student scored a 10/12 on this assessment, their percentage would be 10 ÷ 12 × 100, or 83%.)
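If you keep grades in a spreadsheet or script, the conversion above can be sketched as a small helper function. This is only an illustration; the function name and rounding choice are not from the source:

```python
def rubric_percentage(points_earned, points_possible):
    """Convert a raw rubric score to a whole-number percentage."""
    if points_possible <= 0:
        raise ValueError("points_possible must be positive")
    return round(points_earned / points_possible * 100)

# The worked example from the text: 10 out of 12 points
print(rubric_percentage(10, 12))  # prints 83
```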

**NOTE: Each element of your rubric does not need to have the same maximum point value.**

Let’s say that one of the scoring criteria is that the media piece has the student’s name on it. Either the piece has their name (1 point) or it doesn’t (0 points). You can simply leave the boxes empty for the other point values (or type “N/A”, or fill them with a color). Alternatively, if you are including a criterion that is more complex than others, and success on that criterion can be broken into more than 4 levels, you can assign that one a higher point value (and leave those higher-point boxes empty for all the other criteria).

**Step 4**: Test it out! Use your rubric to grade a few assignments. Things to look for:

- Compare the work of all the students who earned the same score on a single criterion according to the rubric (e.g., all the students who earned 3 points on the “Photography” criterion above). Is their work actually comparable, or is there a range of quality within those samples? If there is a range, that may be a sign that you need to make that criterion worth more points, so that you can draw finer distinctions between scoring levels.

- As you score students’ work with the rubric, do you notice things that you want to award points for, or take away points for, that aren’t mentioned on the rubric? Adjust the rubric so that it reflects the work the students are actually producing (or adjust the assessment to elicit the work you intend for them to produce).

- Finally, as you test out your rubric, do you feel as though it’s accurately capturing, and appropriately rewarding, students’ achievement of the learning objectives that you set out to meet? If not, is it a matter of adjusting the rubric to evaluate students’ work differently? Or is it a matter of adjusting the assessment to better allow students to demonstrate their learning?