This AI Guideline is crafted to guide the AISJ community in the responsible integration of Artificial Intelligence (AI) into the educational landscape. In our context, AI refers to both generative AI tools (like ChatGPT) and other AI-enabled tools.
AISJ is committed to the responsible, legal, and ethical use of AI tools to enhance teaching and learning. It's essential for our students to acquire new literacies to navigate the evolving digital landscape effectively. By adopting AI technology thoughtfully and deliberately, we have the opportunity to accelerate educational processes, enhance feedback mechanisms, and more adequately meet the diverse needs of our students. We need to keep in mind our uniquely human competencies, grapple with the potential ethical and moral implications of AI, and keep our focus on a student-led, teacher-framed pedagogy. It's critical to explore how our curriculum, teaching methods, and assessments can adapt to leverage these potent tools in a way that translates into deeper levels of student engagement and intellectual growth.
1.1 Third-Party AI Tools: At AISJ, we are committed to promoting the secure and ethical application of AI while upholding our data privacy protocols. To ensure conformity with our policies, we ask that teachers consult with Tech to verify the safety of any AI tool before utilizing it in the classroom. It is important to note that not all AI tools may comply with our data privacy standards. For more information, please visit the AISJ AI Portal.
1.2 Personal Information: Many chatbots enhance their capabilities by learning from user input. This process allows the system to evolve based on the information provided by students and teachers. To safeguard privacy, it's crucial to refrain from including any personally identifiable information in the prompts submitted to an AI system. Reach out to Tech or your TiC for more information on how to proceed.
1.3 Safety & Respect: Users are strictly prohibited from using AI tools to create or spread content that is harmful, misleading, or inappropriate. This rule is also reflected in our student code of conduct. Additionally, users may not use AI tools to mimic or replicate another person's likeness in text, pictures, audio, or video without their consent.
1.4 Data Collection: Parents, guardians, and students will receive information on data collection initiatives, and consent will be sought when necessary. All AI-driven data collection will comply with local data protection regulations and best practices.
2.1 Access to AI: We believe in providing access to tools that support learning. Ongoing professional development for teachers will be prioritized to keep them updated on the latest AI tools and best practices.
2.2 Bias and Misinformation: AI chatbots such as ChatGPT are prone to what are known as "hallucinations." While much of their output is useful, they can make errors or fabricate very believable untruths, especially when citing sources or solving mathematical problems.
2.3 AI Output Review: Always review and critically assess outputs from AI tools before submission or dissemination. Staff and students should never rely solely on AI-generated content without review. Educators will ultimately make decisions about students to ensure fairness.
3.1 Surveys and Feedback: Periodic surveys and feedback mechanisms will be implemented to assess the impact of AI tools, informing data-driven decisions.
3.3 Learning and Upskilling: Workshops will be offered periodically, and the Tech Integration Coaches are available to provide assistance in the classroom as needed.
4.1 Proper Use: Students can use AI tools as personal tutors or learning assistants: they can receive rapid, useful feedback, get help with research tasks, or have complex information summarized at their personal comprehension level. We need to ensure that we aren't encouraging academic laziness or allowing algorithms to make decisions that are best made by students or teachers.
Misuse or malicious use of AI technologies will lead to disciplinary action.
4.2 Assignments: Teachers are responsible for clarifying appropriate and prohibited uses of AI tools. Teachers may permit limited use of generative AI on entire assignments or on specific parts of assignments, and will use the traffic-light framework to indicate the level of AI use permitted. Any use of AI to aid assignments, projects, or research must be declared or cited.
4.3 Assessments: AI tools may be used as a tutor or study assistant to prepare for assessments, such as exams or quizzes, but not in the context of completing exams or quizzes unless explicitly stated.
5.1 Generative AI Use: Academic staff are permitted to use generative AI tools (such as Gemini) for lesson planning, resource generation, and assessment creation. When using AI tools, all personal student data must be minimized and anonymized to ensure full compliance with POPIA guidelines. Teachers are also expected to critically review all AI-generated resources for algorithmic bias, ensuring all classroom materials remain inclusive, accurate, and culturally responsive. AI tools may be used for drafting report card comments, with the strict expectation that the final comments are reviewed and personalized by the lead teacher.
Example practical use cases for staff include:
Drafting differentiated reading materials for diverse reading levels.
Generating real-world data sets or coding simulations for STEM practice scenarios.
Brainstorming diverse essay prompts, project ideas, and discussion questions.
5.2 Professional Boundaries & High-Stakes Decisions: While AI tools are permitted for administrative productivity, they must not be used as the sole decision-maker for high-stakes professional tasks. This includes:
Performance Reviews: AI may assist in drafting, but the final evaluation must be based on direct human observation and professional judgment.
Confidential HR Matters: Personal staff grievances or sensitive disciplinary files must never be processed through third-party AI tools.
Admissions: AI should not be used to 'score' or filter applicants without human oversight and clear disclosure to the community.
6.1 Citations and Disclosure: Any AI-generated content or assistance used in assignments must be appropriately cited; its use must be fully disclosed and explained. Disclosure isn't just for chatbots like ChatGPT or Gemini; you must acknowledge the use of any AI-enabled tool that contributes to your final product, including writing assistants (like Grammarly) or transcription tools. When you declare your AI use, clearly explain how the tool assisted you in your learning process. Instead of just listing the name of the AI, describe its specific role. Always place your AI acknowledgment statement in a clearly visible location on your final assignment, such as at the end of your document, within your bibliography, or in the submission comments.
Some examples of this acknowledgment might be:
I used Grammarly to check spelling and grammar.
I had a conversation with ChatGPT to review my understanding of Antigone before beginning work on the assigned project.
I asked ChatGPT to generate many ideas for my project before choosing the one that I wanted to pursue.
I used the generative AI feature in Grammarly to suggest counterarguments to my opinion.
I used Google Voice Typing to transcribe a handwritten first draft.
I used Bing in Creative Mode as a personal writing tutor to provide feedback on my first draft.
6.2 Plagiarism: AI tools may be used for brainstorming or preliminary research, but using AI to generate answers or complete assignments without proper citation, or passing off AI-generated content as one's own, is considered plagiarism. Those who fail to cite properly will be subject to the consequences outlined in the school's student handbooks.
7.1 Reporting an AI Concern
AISJ encourages a culture of transparency. If a student or staff member encounters an AI output that is biased, harmful, or inappropriate, or if they suspect a data privacy breach (such as the unintentional input of personally identifiable information), they should report it immediately. Reports can be made by sending an email to AISJ-Technology@aisj-jhb.com.
7.2 Incident Response Roadmap
In the event of a reported AI policy violation (e.g., misuse of data or harmful content creation), the school will follow these steps:
Initial Review: The technology team will report the incident to the Tech Director, who will advise school administration where necessary.
Contextual Evaluation: The Tech Director will determine whether the violation was the result of an 'AI hallucination' (system error), a misunderstanding of the guidelines, or intentional malice.
Tool Audit: If the issue stems from the AI tool itself (e.g., systemic bias), AISJ will notify the vendor and provide guidance to the community on the tool's limitations.
7.3 Restorative Practices & Re-Teaching
Aligned with our mission to foster a growth mindset, AISJ prioritizes restorative over punitive measures for first-time AI policy infractions. Our response focuses on:
Reflective Conversation: Discussing the 'why' behind the policy and the ethical implications of the misuse.
Re-Teaching: Providing targeted AI Literacy training to ensure the user understands the boundaries (e.g., Traffic Light levels).
Opportunity for Correction: In cases of academic integrity concerns, students may be given the opportunity to reflect on their process and re-submit work that demonstrates their unique human voice.