Image created by Ashley McCormack using Flash 2.5 in Google Gemini.
As we integrate generative AI into our district, it’s important to remember that these tools are powerful assistants, but they are not substitutes for the professional judgment and insight of an educator. While AI can certainly help streamline tasks like generating rubric drafts or summarizing feedback points, it should never be the sole authority in grading.
At WS/FCS, we know that effective assessment is central to our Vision for Teaching, and it involves more than just a score. Your role is essential in guiding the assessment process, which is designed to:
Capture evidence of learning and provide feedback.
Inform instructional design and monitor student growth and progress.
Serve both the educator and the learner.
Empower students to self-assess, reflect on progress, and set goals.
Integrate formative and summative assessments, including performance tasks that require writing as a way to make learner thinking visible.
We trust your professional expertise to guide the use of AI and keep the focus where it belongs: on the student.
Academic Integrity and AI Usage
Feel free to use the following statement in your class syllabus or on your class Canvas page.
"Unless otherwise stated for a specific assignment, all work submitted must be your own original thought. Any use of Generative AI must first be approved by the teacher and cited to maintain academic integrity."
Our focus in WS/FCS is on instructional design and fostering a culture of academic integrity, rather than relying on punitive technology. We encourage educators to be cautious and generally avoid the use of third-party AI detection software for grading and disciplinary decisions.
Here’s a look at the challenges associated with these tools:
Risk of Inaccuracy: These detectors frequently flag completely original, human-written work as AI-generated (false positives). This can unintentionally damage the essential trust between student and teacher and could lead to unfair consequences.
Easy to Circumvent: Students who are determined to misuse AI can often make simple changes to the output, effectively bypassing the detector entirely. This makes the technology unreliable for catching true academic dishonesty.
Opportunity Cost: Time spent trying to "police" AI usage is time we could better spend on redesigning assignments. We want to create assessments that inherently require critical thinking, personal reflection, and application—tasks that a purely AI-generated submission cannot satisfy.
Fairness and Equity: Detection software may introduce unintended bias, potentially flagging the work of multilingual learners (MLs) or students with diverse writing styles more often, which creates an unfair assessment environment.