Artificial intelligence is evolving...
We’re here to support you. Bring us your questions, wonderings, concerns. Click Here.
As staff submit questions, answers will be posted below. Check back often for updates.
What is AI Overview in my Google search results list?
Think of the AI Overview as the “spark notes” version of your Google searches.
Instead of clicking on five different websites to find an answer, the AI does the reading for you and writes a quick summary with links to the websites it used to write the summary.
How It Works
When you ask a question like, "Why is the sky blue?" or "How do I bake a cake without eggs?", the AI kicks into gear:
Gathers Info: It scans millions of websites in a split second.
Summarizes: It picks out the most important facts.
Explains: It writes a paragraph or a list that answers your question directly.
Links: It shows you "cards" or links to the websites it used, so that you can read more and check the facts.
Why Is It Useful?
Saves Time: You get the "spark notes" version of the internet immediately.
Simplifies Tough Topics: It can take a hard subject (like how a black hole works) and explain it in simple words.
Helps with Complex Questions: If you ask something specific, like "What's the best type of dog for a small apartment with a cat?", it can combine info from many sources to give you one helpful answer.
Double Check the Details
Even though the AI seems smart, it’s basically a prediction machine. It looks for patterns in words, which means it can get facts wrong or make mistakes called “hallucinations.” As with any source of information, always check the links provided below the AI Overview summary to make sure the information comes from a website you trust and the facts are accurate, and cite your sources.
Is this just making it easier for students to cheat?
KUSD policy maintains that AI should complement, not replace, quality teaching and human interaction. In a traditional model where only the final product is graded, the incentive to bypass the work is high. However, lessons should treat AI as a "Generator" that is intentionally imperfect. When students are graded on their ability to find errors in a prompt or iterate on a geometric proof, "cheating" becomes impossible because the work is the critique. We are moving from a "one and done" submission to a "mastery through iteration" model.
Proactive Rules and AI Literacy
Rather than just focusing on "catching" misuse, the district implements proactive measures to foster academic integrity:
• Educator/Student Conferences: If a teacher suspects AI misuse, the primary check is a respectful, open conversation. Students may be asked to explain their writing process and defend their ideas to demonstrate their understanding.
• Human-in-the-Loop: A core principle in KUSD is that AI-generated content must not be used to make critical decisions or evaluations about student performance. Teachers must remain the ultimate arbiters of learning, personally reviewing and validating any AI-supported feedback or grading.
• Sound Professional Judgment: All employees are expected to exercise sound professional judgment when deciding how AI is used in the classroom, ensuring it aligns with district goals and ethical practices.
• Clear Expectations: Instructional staff must inform students of the specific expectations and rules for every assignment. In many cases, teachers may direct students that using generative AI is completely prohibited to protect the "productive struggle" of learning. See Using A.I. with Students - Levels of A.I. Use
• Prohibited "Offloading": Students are specifically prohibited from using AI to automatically summarize complex articles, as this "reads for them" and offloads the necessary thinking and critical examination required for the assignment.
• Transparency and Citation: Students are prohibited from making it appear as if they created content that was generated by AI. They must not copy AI text word-for-word and must provide appropriate attribution or acknowledgment when AI is used as a source.
• AI Literacy Education: KUSD is developing a scope and sequence to teach students about the limitations of AI, including its tendency to "hallucinate" (confidently produce fabricated information) or offer biased information. Empowering students as critical consumers helps them understand why over-reliance on these tools can undermine their own skill development.
For more information on this topic, read Managing AI Cheating (blog post).
If students can use AI to write an essay or solve a problem, how do I know they are actually learning?
Learning is no longer evidenced by the final document alone. We are shifting our focus to the process. A student cannot effectively use an AI tool if they do not understand the underlying subject matter. To ensure students are not just "pasting" their work, educators can use digital tracking tools such as Google Docs version history or Class Companion, which tracks typing patterns and provides instant coaching. These tools help verify whether a student wrote the text over time or whether large sections were added instantly, and they help ensure students are engaging with the material.
What AI plagiarism detector can I use?
Relying on detection software is discouraged because these tools can be unreliable and produce false positives. KUSD emphasizes that final decisions on whether a student plagiarized or used AI inappropriately must always come from a human professional.
As an effective human-centered check, teachers are encouraged to collect baseline writing samples. This allows them to become familiar with each student’s unique voice, making it easier to identify significant, unexplained shifts in vocabulary or complexity during a human review.