The biggest pushback I received in pitching this idea was over how I would do A&E: 'Markless' sounds a lot like 'Not Marking'.
As a teacher, I have to do A&E. It is a mandated part of my craft. Twice a year I MUST produce a report that demonstrates how a student is progressing, expressed as a percentage. That MARK must reflect student learning. A teacher cannot simply make it up.
I often do not record the A of A&E. I believe it is the 'practice' before the 'game'. I encourage my students to try assessments to their fullest, knowing that if they do not succeed it will not be held against them.

Things like quizzes I make professional calls on: if a quiz matches my expectation of learning I will often record it, since doing so is easy. If many students struggle there is little point in collecting it, since they need more time to master the subject. The quiz then becomes a learning tool for them, helping them better understand what is expected of them and the form that expectation will take.

Simple conversation is another form of assessment, and I am, honestly, much too lazy to write down the result of every conversation I have with students. I will note a conversation when it is very different from what I have otherwise assessed or evaluated. I only regularly record evaluations.
How would I record my data?
Thanks to a PD on Applied Learners I had a brief breakdown of the specific curriculum expectations for SNC1P, which I modified. I then followed Amy Lin's lead in using the expectations to record student achievement.
Rather than using the expectations to develop an aggregate evaluation (like a test), I kept each expectation separate. Other teachers can say, "This student got X% on the biology test"; I say, "This student successfully met X SNC1P expectations on their biology test."
Notice that I still ran run-of-the-mill evaluations; I simply formatted them differently. For tests, I ended up figuring out which expectations matched each question. On assignments, my rubrics focused on the expectations themselves rather than my interpretation of the expectations.
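To make the bookkeeping concrete, here is a minimal sketch in Python of counting expectations met on a test. The question numbers, expectation codes, and results are all invented for illustration, not taken from my actual tracker:

```python
# Hypothetical mapping of test questions to expectation codes.
question_expectations = {1: "B1.1", 2: "B1.1", 3: "B2.3", 4: "B2.4", 5: "B3.2"}

# One student's result per question (True = demonstrated the expectation).
student_results = {1: True, 2: False, 3: True, 4: True, 5: False}

# An expectation counts as met if any question addressing it was successful.
met = {exp for q, exp in question_expectations.items() if student_results[q]}
print(f"Successfully met {len(met)} expectations: {sorted(met)}")
# → Successfully met 3 expectations: ['B1.1', 'B2.3', 'B2.4']
```

The point of the structure is that two questions can address the same expectation, so the report is a count of distinct expectations met, not a percentage of questions answered.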
I used the organizer above to track achievement. Each student had their own booklet, with each strand separated onto its own page. It was my ‘markbook form’. How I recorded the marks changed over time. At first I copied my comments from the rubric straight into the tracker, but that took a huge amount of time and very small writing. I settled on recording just the main idea (good, ok, ‘did well on a, not so hot on b’). I also started colour-coding evaluations, using the same code across students. That way I could track who had completed what without needing a reference. Once an evaluation was completed, I would colour students’ booklets assembly-line style.
What about KICA?
I faced this when approaching midterms and again when facing the final reporting period. The expectations are not broken down into Knowledge, Thinking (Inquiry), Communication, and Application (KICA).
At midterms I sought the advice of an administrator, who agreed that I had a problem. Our work-around was to randomly assign each expectation to a KICA category.
It was a good short-term work-around.
There were problems at midterms. First, there are many expectations, and entering a grade for each one was time-consuming and worked against the entire ethos of the plan. Second, my 'randomized' model was really a rotation: I simply assigned the next KICA category to the next expectation. Expectation 1.1 became K, 1.2 became I, and, after cycling through C and A, 1.5 became K again. This was very arbitrary and led to some surprising results.
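The rotation itself is trivial to express. A sketch in Python, using a shortened, hypothetical expectation list (the real SNC1P list is far longer) and one fixed category order:

```python
from itertools import cycle

# Hypothetical expectation codes; the real SNC1P list runs to dozens.
expectations = ["1.1", "1.2", "1.3", "1.4", "1.5", "1.6"]

# Cycle through the four KICA categories in a fixed order:
# Knowledge, Thinking/Inquiry, Communication, Application.
assignment = dict(zip(expectations, cycle(["K", "I", "C", "A"])))
print(assignment)
# → {'1.1': 'K', '1.2': 'I', '1.3': 'C', '1.4': 'A', '1.5': 'K', '1.6': 'I'}
```

Written out this way, the arbitrariness is obvious: the category an expectation lands in depends only on its position in the list, not on what the expectation actually asks students to do.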
I relied on my professional judgement to fix those surprises.
For finals I made the professional decision to do away with KICA. I treated each strand as an equal (again, the logic behind that is up for debate) and determined a strand 'mark' using my template. After deleting my midterm inputs, I entered each strand into Markbook, being careful to ensure they were all in the same category so they received equal weighting. I then added the Performance Tasks and Exams, and I was done.
The marks matched my experience with the students.
Nuts and Bolts
One of the goals of this experiment was to provide meaningful feedback to students to encourage academic growth. Philosophy and behind-the-scenes methods aside, how did I provide that?
My primary source was Brookhart's How to Give Effective Feedback. Brookhart points out that too much feedback can have a negative effect on student learning. Just like our evaluations, our feedback needs to be tailored to our students.
For my students that meant that I adopted a largely holistic approach to feedback. I would assess as normal, but rather than spending time on detail I would consider the one thing they could do that would make the most impact the next time they tried something similar. I also pointed out what they had found success on to reinforce that they were capable.
Assessing like that zeroed in on student need. It was also an exercise in perspective.
I once had an SNC2P class. I spent a week and a bit supporting my students on ray diagrams, which is ONE expectation in the entire course. Out of the dozens of expectations, I spent critical time helping students learn something that plays only a small part in determining how successful they are in the course.
That last statement can be controversial. The Ministry tells teachers broadly what we should teach. It is up to each teacher to decide the balance of what is important or not. For my 'markless' class I determined each specific expectation addressed (since all need not be addressed) would be equal in value. I'm not sure if that is the right decision. That is a debate for another time, 'markless' or not.
I would evaluate by expectation.