DRAFT LANGUAGE FOR PRACTICE GUIDELINES
GENERATIVE AI POLICY STARTER KIT FOR K-12
Each section below provides suggested language that can be used or adapted for use in schools and districts. Some language is intentionally duplicated across sections to demonstrate different ways of using the same concept in a guidelines statement.
We extend our appreciation to our colleagues in Uxbridge and Newton, Massachusetts, and the North Carolina Department of Public Instruction for their work to provide educators and students in their respective spheres of influence with clear guidelines and policies around the use of Artificial Intelligence (AI) in the classroom. Much of the language contained here is inspired by their work.
Learning to be a skilled and ethical user of genAI is part of the preparation students need for the future and a component of becoming a digitally literate citizen, as described in the Commonwealth’s Digital Literacy and Computer Science Curriculum Framework. The purpose of these guidelines is therefore to support the wise and appropriate use of genAI in teaching and learning: to strengthen students’ skill with technology, aid deep learning and creative thinking, and develop students’ ability to engage ethically with the digital world. While we acknowledge that the use of Artificial Intelligence (AI) tools can never be entirely free of ethical concerns or risk, this guidance is intended to minimize the risks and harms associated with the use of AI.
These guidelines provide a broad framework for responsible, ethical, and creative use of genAI in our school district by teachers and students. They are not comprehensive, and they do not anticipate or govern all possible uses of genAI by all people in the district.
Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. (source: IBM)
Generative AI (genAI) refers to machine learning models that are trained on vast data sets and are intended to create new, similar data. They can be used to generate text, images, code, video, audio and other forms of content. GenAI can come in the form of stand-alone tools such as ChatGPT, Claude and Magic School, or as a component part of another piece of software, such as the image generator in Canva or the “help me write” extension in Chrome.
In many ways, genAI is part of a longer progression of technological advancement; in other ways, it represents novel possibilities in our classrooms, jobs, and lives. In alignment with the GenAI: Critical Engagement Toolkit from the Collaborative for Educational Services, we see the following mindsets and abilities as central to a person’s ability to use genAI wisely, effectively, and ethically:
Basic understanding of the way that genAI works, including its data sources
Awareness of data privacy, safety, and intellectual property implications
Understanding of the limitations of the technology
Ways to consider genAI use through a social and emotional lens
Ways to use AI that are aligned with a person’s moral and ethical code
The newness of genAI, and the rapidity with which it is changing, means that adults and students alike are (and will be) in the process of learning about its uses, misuses, and best practices.
For teachers and paraprofessionals, this means that regular and ongoing professional development experiences are needed to help educators explore and understand educational uses of genAI, equity and justice considerations, data privacy concerns, and other relevant topics.
We understand genAI fluency as the constellation of skills, mindsets, and knowledge that a person needs in order to use genAI and engage critically with it.
Components of genAI fluency include:
Understanding of the basic underlying technical functions that make genAI work, and therefore what it can and cannot do well.
Recognition of a range of ethical domains that are relevant to genAI, including (but not limited to) the possibility of bias, data privacy and safety issues, and environmental concerns.
Awareness of information quality concerns such as unintentional AI “hallucinations” and the possibility of disinformation.
Ability to discern when genAI is an appropriate tool for a given task, and when it is not.
Ability to use genAI tools wisely, appropriately, and effectively through prompt engineering strategies.
These components are addressed in the GenAI: Critical Engagement Toolkit.
Learning to use genAI tools effectively and ethically is part of our school’s broader effort to ensure students develop the skills and mindsets necessary to prepare them for life after high school. As such, we are committed to providing learning opportunities where:
Students will be given regular opportunities to engage with core concepts and practices related to genAI in order to build genAI fluency.
Students will have opportunities to learn about ethical issues that arise from genAI.
Students will learn how to critically evaluate information generated by (or possibly generated by) genAI.
Our district recognizes that genAI, like other technological advances that have come before it, provides pathways for individuals to engage in behaviors like cheating. Our guidance thus reflects our commitment to academic integrity and honesty.
Option 1 - Because AI plagiarism detectors are unreliable and have an established pattern of falsely flagging the work of multilingual students and of students who use scaffolded supports in their writing because of specific learning differences, our district does not support the use of plagiarism detectors.
Option 2 - Teachers may use genAI plagiarism detectors that have been vetted and approved for use by the district. AI plagiarism detectors are notorious for falsely flagging the work of multilingual students and students who use scaffolded supports in their writing due to specific learning differences, so careful consideration is required before taking further steps.
If student work is flagged as plagiarism/cheating by a district-approved detector, teachers will:
Reflect on the student as a learner and consider how the submitted work reflects or deviates from patterns of past academic work.
Meet with the student to build understanding about how and why the student may have used genAI and identify if and how the student used AI tools when completing the assignment.
If, after these considerations, the teacher determines that the student used AI in a way that is inconsistent with our district’s academic honesty policy/policy on cheating, have a conversation with the student about appropriate and ethical use of genAI.
The teacher will then use their professional judgment to determine next steps, in alignment with school and district discipline codes, academic integrity codes, and other policies and guidelines.
Teachers are encouraged to develop or augment their own classroom Academic Integrity Statements to include genAI. These Statements should be in line with district policy and guidelines, and the school handbook.
Teachers can use the following questions to guide the development of their classroom guidelines/statement:
What is AI/genAI?
When are AI tools allowed or not allowed?
What TYPES of tools are allowed?
What are the guidelines around citing genAI tools? MLA citation guidance can be found here.
Why are they allowed (or not allowed)?
What are the consequences for using AI/genAI tools outside of these classroom guidelines?
What are teachers’ own commitments to using genAI ethically and transparently?
Teachers should take advantage of professional learning opportunities offered by the district throughout the school year.
There are many genAI tools available (and under development) that are designed for teachers to use in support of curriculum, planning, and instruction. These tools should be vetted for classroom use by District Tech staff, and teachers will be provided with professional development opportunities to support their use.
Teachers are expected to use their professional expertise in reviewing all curriculum suggestions provided by genAI tools and to ensure that instruction aligns with district values, learning objectives, and student needs.
As with all student data and personally identifiable information, teachers are responsible for protecting the privacy of their students when using genAI tools. Never share personally identifiable information (PII) with genAI tools.
Many publicly available genAI tools (e.g., ChatGPT) have age restrictions and/or do not meet our district’s requirements related to data privacy and safety. Use of genAI tools in the classroom should be restricted to those tools that the district has approved for school use. When teachers use genAI in the classroom, it will be to supplement (not supplant) instruction.
When educators use genAI for any reason, that use should be transparently disclosed to the relevant students, families, administrators, and colleagues. That disclosure should describe how and why the genAI tool was used.
Educators will follow all legal and data privacy guidelines to ensure that student data and privacy are not put at risk.
Students are expected to adhere to the district’s acceptable use policy (AUP) and all other relevant behavioral and academic integrity policies/guidelines when using genAI.
Large language models (LLMs) like ChatGPT are not search engines. They generate content by making predictions based on their training data and the user’s input or prompt; they do not search for and return content that already exists, as search engines do. Because of this, LLMs can generate (predict) content that is not factually correct but sounds very plausible, a phenomenon commonly referred to as ‘hallucination.’ Students and teachers using generative AI in the classroom will verify information generated by AI tools, such as facts, quotes, statistics, and other resources.
Students and teachers are encouraged to use genAI tools that provide clear citations and the ability to exclude sources that are not considered reliable.
Opportunities to develop critical media literacy skills should be a part of classroom instruction at all grade levels.
All district staff will adhere to the data privacy policies and guidance provided in the district’s Acceptable Use Policy (AUP). District cybersecurity policies, procedures, and guidelines, as laid out in the Employee Handbook and AUP, apply to the use of genAI tools and platforms.
Students will not be asked to create accounts on any AI platform that is not approved by the district.
In addition, note that:
All genAI tools endorsed by or used in the district will comply with relevant laws and guidelines related to user data, including student data. This includes, but is not limited to, FERPA, COPPA, IDEA, and PPRA.
Students and families will be informed of the extent and purpose of any data that is shared with genAI systems.
When you use a genAI platform or genAI-enhanced tool, each interaction can provide the tool with training data. How that information will be used is often not disclosed by the AI system, so we assume that information entered can be used and shared in unpredictable ways. Be cautious about entering any information about a student into an AI tool (even depersonalized data).
Responses to the following incidents are guided by existing emergency/incident response plans; such incidents should be immediately reported to school administrators:
Data breaches or misuse of genAI tools
GenAI-related harassment/bullying
Instances of genAI misuse (e.g., cheating) should be handled in line with existing codes of conduct. Because genAI is new and students may not have received clear guidance about how to use it ethically, teachers should:
Reflect on the student as a learner and consider how the submitted work reflects or deviates from patterns of past academic work.
Meet with the student to build understanding about how and why the student may have used genAI and identify if and how the student used AI tools when completing the assignment.
If, after these considerations, the teacher determines that the student used AI in a way that is inconsistent with our district’s academic honesty policy/policy on cheating, have a conversation with the student about appropriate and ethical use of genAI.
The teacher will then use their professional judgment to determine next steps, in alignment with school and district discipline codes, academic integrity codes, and other policies and guidelines.
Generative AI models are largely trained on information from the English-language, Western-focused internet. As such, these models may display inherent biases, including biases around race, gender, religion, culture, language, and politics.
Our goal is to ensure that genAI is used in a way that is equitable and accessible to all students and is aligned with our district’s values around equity, justice and inclusion. As such, it is critically important that our use of genAI (and AI tools more generally) does not cause harm to our students or teachers.
Analytical techniques that address and mitigate implicit bias should be included in all district AI literacy training materials for educators and students. These materials should include (but are not limited to):
how to write AI prompts that push beyond stereotypes;
when and how to report biased content to the genAI platform;
when and how to report biased content to the district.
As a district, we commit to proactively engaging in conversation around equity and bias, and holding ourselves accountable for addressing bias and harm that results from our use of these tools.
Option 1 - The district encourages the professional use of AI for formative and summative assessments to improve the efficiency of grading and feedback, with the following caveats:
Be aware that grading with generative AI tools can be unreliable due to inaccuracies (or ‘hallucinations’) and implicit bias in these tools.
Teachers are responsible for ensuring that AI-supported assessments align with curriculum standards and maintain the academic rigor and fairness of the grading process.
Students and caregivers must be informed when AI is used in the assessment of work and have the option to request human grading for any assignment or assessment.
Option 2 - At this time, the district does not allow the use of AI for formative or summative assessments. In the event this changes, these guidelines will be updated with new information.
This policy will be regularly revisited and evaluated to ensure that it remains relevant and up-to-date with the changing technology landscape.