AI carries the risk of algorithmic bias, which can influence any materials it generates.
Important administrative decisions, such as student program enrollment or discipline, should not rely on AI.
AI detectors often incorrectly identify non-native English speakers' texts as AI-generated, necessitating human oversight.
Academic Integrity
Educators must address the potential for increased cheating with AI by rethinking assignment types and explicitly setting guidelines for AI use to uphold academic integrity.
Plagiarism detectors should not be the sole basis for determining whether plagiarism has occurred. Teachers should continue to scrutinize student work, encourage AI use within learning processes rather than only final products, and supplement this with regular teacher-student check-ins.
Privacy & Security
When using an AI platform, ensure that its data and training settings are configured so that user data and information are not collected. If a platform's data-collection practices are unclear, defer to your technology department for review. See the instructions on how to protect your data in ChatGPT.
The intersection between AI and FERPA should be examined to ensure that student data is not compromised.
Transparency
Just as teachers will ask students to cite ChatGPT if it is used as a source or as a part of their working process, so, too, should teachers cite their own use of ChatGPT in the creation of class or administrative materials. Transparency in the use of AI should reach across all levels.
Inaccuracies
Validating AI’s output needs to be part of both teacher and student education.
Knowing how to fact-check AI-generated information against reliable sources should be part of every student's and teacher's toolkit, and fact-checking AI-generated materials should be an integral part of the materials-creation process.
Unintended Outcomes
Overuse of AI in education could diminish the focus on Social Emotional Learning and lead to reduced agency, accountability, and critical thinking.
Dependence on AI may compromise judgment: studies have shown people following flawed robot instructions, which points to a need for teacher training in exercising human judgment.
Image created by ChatGPT (DALL·E 3)
Impacts on Equity
To avoid increasing disparities between student groups, we must be attentive to the inherent biases within AI that can further undermine equity.
When the data sets that AI uses are not representative, patterns of inequity can be exacerbated.
A focus on asset-oriented (rather than deficit-minded) AI tools will be essential.
Pedagogical Impacts
Maintain a "human in the loop" approach where teachers are central to decisions on AI use, ensuring it complements rather than replaces pedagogy.
Evaluate AI in the classroom for its pedagogical effectiveness and alignment with educational goals like enhancing student learning and creativity, rather than using it for novelty.
Provide training for stakeholders on when to override AI for educational purposes, and adapt AI suggestions to fit the specific context and vision of the educational setting, while being mindful of the impact on Social Emotional Learning.
Considerations
Utilize AI for preparation, objective evaluation, and administrative tasks to increase teacher-student interaction time, emphasizing relationship building.
Apply Kingsway's existing ed-tech policies to the adoption and use of AI in educational settings.
Teachers should avoid relying solely on AI-based machine learning and instead triangulate AI data with learning theory and practical knowledge, as advised by the DOE.