This policy establishes clear expectations for the ethical and responsible use of Artificial Intelligence (AI) tools by students and staff at St Paul’s Collegiate. It is intended to uphold academic integrity, ensure authenticity in student work, and support meaningful learning in a rapidly changing digital landscape.
This policy applies to all students across all subjects and year levels at St Paul’s Collegiate School, covering the use of AI tools both during school hours and in assignments, homework, assessments, and other educational activities. In the senior school, we are bound by NZQA regulations regarding breaches of authenticity; the junior school is not subject to these regulations.
According to NZQA, “Authenticity is the assurance that evidence of achievement produced by a learner is their own.”
AI Tools: Software or platforms that generate content, provide analysis, or assist in academic tasks using artificial intelligence (e.g., ChatGPT, Grammarly, Quillbot, Google Gemini, Perplexity).
Authentic Work: Original work that reflects a student's own understanding, effort, and academic integrity.
Assisted Work: Work in which a student has used AI tools to brainstorm, check grammar, or receive feedback, with appropriate acknowledgement.
Transparency: Students must clearly disclose any use of AI tools in their work.
Integrity: AI-generated content must not be passed off as original student work in summative assessments.
Equity: Access to AI tools should not disadvantage or unfairly advantage any student.
Learning-Focused: AI should support, not replace, the learning process.
AI tools may be used, for example:
For grammar and spelling checks (e.g., Grammarly).
For brainstorming ideas or planning outlines.
For practice questions or revision aids.
When explicitly allowed by a teacher for formative tasks.
Students must:
Acknowledge any AI assistance in a footnote, bibliography, or teacher-provided declaration form.
Ensure submitted work reflects their own understanding and voice.
AI tools must not be used to:
Generate full responses or essays for summative assessments or coursework.
Bypass thinking or original analysis in assignments meant to assess comprehension.
Cheat or plagiarise in exams or tests.
Create misleading or fabricated data, citations, or content.
Teachers will:
Clearly state when AI use is permitted or prohibited in assignments (see the traffic light system).
Design tasks that prioritise critical thinking and originality.
Use AI detection tools, if needed, alongside professional judgment.
Educate students on ethical AI use and digital citizenship.
The use of AI technology in NCEA summative assessments depends on the nature of the assessment and will be clearly communicated in the guidelines for the activity at the start of the assessment. AI text-generated content may be permitted if it aligns with the assessment criteria and does not compromise the integrity of the assessment. Any AI-generated work must be original, and the student must be able to explain how it was generated and demonstrate their understanding of the underlying concepts.
There are also assessments where the use of AI is explicitly not permitted. These include assessments that test the student's personal knowledge and understanding. In such cases, the use of AI technology will not be allowed, and the process for a breach of authenticity will be followed.
Students are expected to:
Submit authentic work that reflects their learning.
Use AI tools responsibly and ethically.
Disclose any AI use clearly and honestly.
Violations of this policy may be treated as academic breaches and will be addressed according to the school’s existing disciplinary procedures. Where there is a proven breach of authenticity, the student will be given a Not Achieved result for the assessment concerned. Clear protocols are outlined in the student copy of the Summary of Assessment Policy on the Landing Page.
Sanctions may include:
Discussion with the teacher, HOD, LOC, Director of Digital Learning, or Deputy Headmaster.
Not Achieved grade awarded.
Parental notification by letter and a meeting.
Further disciplinary action in serious or repeated cases.
This policy will be reviewed annually to keep pace with developments in AI and education. Student and staff feedback will be considered in future revisions.