You should be aware of the many policies that may impact your teaching.
What national policies exist related to AI?
What state policies exist related to AI?
What district policies exist related to AI?
What policies related to AI do you want for your classroom?
Bear in mind, these policies are not only student-centric. Some policies affect teachers/employees.
Does your district have policies about how you're allowed to use AI?
What limits are there with what you can share with or how you can use AI?
FERPA - Family Educational Rights and Privacy Act. Schools must safeguard PII within education records. Parents can request access to those records and ask that data be amended or removed.
CIPA - Children's Internet Protection Act. Schools that receive E-Rate funding must monitor & filter student internet usage.
COPPA - Children's Online Privacy Protection Act. Limits the collection of PII from children under 13 yrs old unless the data collected is for educational use (like learning platforms). Cannot use data for targeted advertising. "Profiling" is the term commonly used here.
OSIPA - Oregon Student Information Protection Act. A stricter, state-level counterpart to COPPA; raises the profiling/data-collection age to 18.
Major things to consider:
What does AI do with my (or my student's) data?
If AI stores my (or my student's) data, is it used to train its models?
What prevents AI from making your PII accessible through prompt engineering?
Are businesses complying with Oregon's stricter AI requirements?
Advancing Artificial Intelligence Education for American Youth (2025)
Executive Order by Donald Trump to emphasize the importance of teaching AI literacy to the rising generation.
Policy Goal: The U.S. policy aims to promote AI literacy and proficiency by integrating AI into education, providing training for educators, and fostering early exposure to AI concepts to build an AI-ready workforce.
Presidential AI Challenge: Plan a challenge to highlight student and educator achievements in AI and encourage collaboration.
Improving K-12 Education: Creating public-private partnerships to develop online resources for K-12 students and using grant funds to assist states and local efforts.
Enhancing Educator Training: Prioritize the use of AI in grant programs for teacher training, including professional development for integrating AI into all subjects.
Promoting Apprenticeships: Increase participation in AI-related Registered Apprenticeships and support the use of Workforce Innovation and Opportunity Act (WIOA) funds for youth to develop AI skills.
High School Opportunities: Support the creation of AI courses and certification programs for high school students, including dual enrollment opportunities.
Artificial Intelligence and the Future of Teaching and Learning (2023) - 71 pp
Developed to address the need for knowledge sharing and policy development as AI becomes more prevalent in educational technology.
Emphasizes that AI should support and augment, not replace, human educators. Keep "humans in the loop".
Calls for policies that:
center people (parents, educators, and students)
advance equity
ensure safety and effectiveness
promote transparency.
Highlights the potential of AI to improve teaching by reducing administrative burdens and providing more time for direct student interaction.
AI can enhance formative assessment by providing real-time, adaptive feedback and measuring complex skills that are not easily assessed otherwise.
A key recommendation is to focus research and development (R&D) on addressing context and learner variability, rather than just "teaching to the middle" (Differentiated Instruction/Assignments).
The report stresses the importance of involving educators and students as co-designers of AI technologies.
It also warns about potential risks, such as algorithmic bias that could lead to discrimination, new data privacy concerns, and unintended consequences like widening achievement gaps if AI is not implemented carefully.
Seven recommendations for policy action:
emphasizing "humans in the loop"
aligning AI models to educational visions
designing with modern learning principles
strengthening trust
informing and involving educators
focusing R&D on context
developing education-specific guidelines and guardrails.
Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration (2024) - 79 pp
Mitigating Risk: This section focuses on safeguarding student privacy, security, and non-discrimination. It covers opportunities and risks associated with AI, federal laws like FERPA and COPPA, and the importance of civil rights, accessibility, and digital equity.
Building a Strategy: This part helps leaders create a strategy for integrating AI into the instructional core. It emphasizes understanding evidence for AI tools, considering the instructional core, and planning a strategy that involves listening to the community and setting a pace for adoption.
Maximizing Opportunity: The final section provides guidance on the effective use and evaluation of AI. It includes building AI literacy for educators, updating policies like the Responsible Use of Technology Policy (RUP), and creating an ambitious, organization-wide action plan.
Related: A Call to Action for Closing the Digital Access, Design, and Use Divides (2024) - 113 pp
Avoiding the discriminatory use of artificial intelligence (2024) - 16 pp
21 case studies outlining discriminatory use of AI based on race/color/national origin, sex, and disability.
Purpose: Assists schools in using AI in a non-discriminatory manner, consistent with federal civil rights laws.
Federal Civil Rights Laws: The Office for Civil Rights (OCR) enforces several laws that prohibit discrimination, including:
Title VI of the Civil Rights Act of 1964: Prohibits discrimination based on race, color, or national origin.
Title IX of the Education Amendments of 1972: Prohibits discrimination based on sex.
Section 504 of the Rehabilitation Act of 1973: Prohibits discrimination based on disability.
Discriminatory Use of AI: Illustrative examples of how AI use could lead to discrimination in areas such as:
Race, Color, or National Origin: Examples include AI-based plagiarism checkers with a high error rate for non-native English speakers and facial recognition technology that misidentifies students of color.
Sex: Examples include AI scheduling software that places female students in fewer computer science classes and deepfake harassment.
Disability: Examples include AI test proctoring software that flags a student's vision-related eye movements as suspicious and generative AI used to create identical 504 Plans for students with different needs.
Discrimination is not always intentional: The document highlights that discrimination can occur even if the AI tool itself does not use protected characteristics as direct inputs, but instead relies on historically biased data.
Generative Artificial Intelligence (AI) in K-12 Classrooms v2 (2025) - 16 pp
Equity Implications: Outlines several equity implications of genAI, including bias, inaccuracy (AI "hallucinations"), plagiarism, copyright/licensing unknowns, and equity of access. Offers strategies to address these issues, such as training educators and students to understand potential biases and inaccuracies.
Data Privacy: Stresses the importance of protecting student and staff privacy, referencing FERPA, CIPA, COPPA, and OSIPA. Warns against oversharing PII with genAI tools, as this information can be stored and used to train the models.
Potential for Learning: Highlights the potential for genAI in classrooms, including its use as a teaching tool, for learning design, instructional support, and as a virtual assistant. Also notes that genAI can support multilingual learners and students with disabilities by providing real-time translations and simplifying complex texts.
Policy Considerations: Advises school districts to develop or revise policies to address the use of genAI. This includes the risks of synthetic media and deepfakes, which can be used for harassment, bullying, and the spread of misinformation. Recommends specific actions, such as training staff to identify synthetic media and incorporating these risks into existing policies on bullying and harassment.
Developing Policy and Protocols for the use of GenAI in K-12 Classrooms (2025) - 4 pp guidance/instruction + 12 pp worksheet
Purpose: Provides a framework for school districts to develop policies and guidance that promote safe, ethical, and effective use of AI. Intended to be used with the "Generative Artificial Intelligence (AI) in K-12 Classrooms" guidance.
Policy Landscape: Districts should be aware of federal and state policies which have implications for AI use, such as:
OSIPA - the Oregon Student Information Protection Act
CIPA - Children's Internet Protection Act
COPPA - Children's Online Privacy Protection Act
FERPA - Family Educational Rights and Privacy Act
U.S. Copyright Office policy on AI-generated material
Digital Divides: The National Educational Technology Plan (NETP) is framed around three "Digital Divides" in education: the Digital Use Divide (passive vs. creative use of technology), the Digital Design Divide (teacher professional learning), and the Digital Access Divide (access to devices and connectivity). The document uses these divides to frame equity considerations for AI use.
Guidance vs. Policy: Helps districts determine whether to create a new policy or simply issue guidance that clarifies how existing policies, such as acceptable use, data privacy, and student codes of conduct, apply to AI.
Ethical Use: Ethical use is a key focus, addressing academic integrity concerns like plagiarism and defining how students and staff should disclose AI use. It also addresses digital citizenship, including the risks of synthetic media and deepfakes.
Human-Centered Approach: The toolkit recommends a human-centered approach, such as the "80/20 principle," where AI tools do 80% of a task and a human user revises and improves the output for the remaining 20%.
Policy Goals: Provides a framework for districts to develop policies based on several goals, including valuing and integrating community input, aligning AI use with the district's mission, focusing on student and staff safety, advancing equity, developing AI literacy for all stakeholders, establishing parameters for AI in assessments, and providing mechanisms for ongoing monitoring and evaluation of AI's impact.
State Policies/Guidelines (Not Oregon)
California - AI 4 CA - AI: Learning With AI; Learning About AI (Document)
California - Learning With AI; Learning About AI (Webpage)
North Dakota - North Dakota K-12 AI Guidance Framework
QAM: ADM-A016-STAFF USE OF ARTIFICIAL INTELLIGENCE
AI is a valuable tool
Ethical Issues & Risks
If used in the classroom with students, teachers should provide clear instruction, expectations, and guidelines/limitations, and these should be posted somewhere visible (e.g., in a syllabus).
QAM: CUR-A003-ACADEMIC INTEGRITY AND USE OF ARTIFICIAL INTELLIGENCE
Passing off someone else's work as your own (including AI generated material) is dishonest/cheating & undermines student learning.
AI can be used by students appropriately (without being considered cheating)
Students should know ethical & safety concerns
Staff are responsible for teaching this if AI is used in the classroom by students
The Salem-Keizer articles linked above were pulled on 1/17/2025 and may have been updated since. More recent, publicly posted QAM documents can be found at Salem-Keizer Public Schools - Policies, Procedures & Forms
District News: Google Gemini Now Available for Staff!
The Salem-Keizer School District was unable to get OpenAI to sign a student privacy agreement. This matters because ChatGPT can store anything we share with it and use it for its own purposes (training its AI models, for example). Microsoft Copilot is built on OpenAI's GPT models, so we can't get an agreement with Microsoft either. Microsoft does have its own Privacy Policy, but it does not protect the data of our high school students to the degree that OSIPA requires.
For this reason, staff should:
Not encourage students to use ChatGPT/Copilot (you can't prevent them from using it, but you should not encourage/suggest its use).
Be cautious when using ChatGPT/Copilot yourself, specifically of what you share with ChatGPT/Copilot
Don't share student PII (Personally Identifiable Information)
Don't share staff PII (you don't have their permission)
Once you provide info to any website that hasn't signed our data privacy agreement, you lose control over what is done with that info.
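One practical habit when you do use an AI tool yourself: scrub obvious identifiers out of your text before pasting it into a prompt. This is a minimal illustrative sketch only, not a district-endorsed or reliable PII scrubber; the patterns and the redact() helper are hypothetical examples (e.g., it assumes numeric student IDs), and names and indirect context clues can still leak through, so it never replaces your own judgment.

```python
import re

# Hypothetical patterns for obvious PII; a real scrubber would need far more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "STUDENT_ID": re.compile(r"\b\d{6,9}\b"),  # assumes numeric district IDs
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft an email to jdoe@salkeiz.k12.or.us about student 12345678."
print(redact(prompt))
# The address (including the "jdoe" username) and the ID number are replaced.
```

Even with a scrub like this, the safest rule from the list above still applies: if a tool hasn't signed the district's privacy agreement, keep student and staff information out of it entirely.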
"Couldn't I encourage students to use ChatGPT without logging in? Then at least it won't know who the student is." - Won't it? Do you think that user account info is the only way for AI to connect the dots of your identity?
"Does it really matter? Who cares if the AI company has that info on students?" - A better question is what can be done with that information and who else can get access? Our prompts help to train the AI, so if you or a student give up that information, it's now potentially accessible by others.
"Does a little data collection really make that big of a deal?" - in 2012, Target identified 25 products that together could produce a "pregnancy prediction" score, estimate a due date, and trigger targeted ads, all from data in individual users' purchase histories ("How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did"). How much more can AI learn about us simply from our prompts?
Salem-Keizer SD was able to get Google to sign our district's student privacy agreement. Staff can encourage students to use Gemini, however here are some things to consider:
Staff & Students should sign in to their salkeiz.k12.or.us or salemkeizer.org accounts when using these tools
If not signed in, Google is not bound by the district's data privacy agreement.
If not signed in, Google can store & use anything we provide it in prompts.
If not signed in, Google can still puzzle together information about us and use that to further train its models.
If/when using a personal account (instead of a school account) or using Gemini anonymously, consciously consider whether your prompts are disclosing information about students or colleagues.
Be aware that on personal accounts, you can delete chats (but that doesn't mean Google has erased the data!)
Be aware that on school accounts, you cannot delete chats
Other AI Models
The Salem-Keizer SD provides a list of approved software and methods for finding it. I have not explored the full list of what has/hasn't been approved.