Framework for AI in Education
General Guidelines and Considerations
Just as the smartphone and the affordable laptop (netbook, laptop, Chromebook) changed education from kindergarten to university, and just as we teach students to type, left-click and be good digital citizens, we are moving into a technological advancement where we will:
need to teach students how to be augmented creators and understand the potential role of prompt engineering.
create more thought-provoking learning that uses the information at our fingertips.
experience educational reform that takes into account the impact of AI.
A few items that must be considered:
Student use of AI must be legal. Minimum age requirements to use an application, for example, must be met.
Student privacy must not be compromised and be protected as outlined by Division policy and Provincial legislation. (Edmonton Public Schools, 2022)
See the FOIP and Personal Information section below for more details and your requirements as an educator.
The use of AI cannot be required if equitable access cannot be established.
Like any educational technology tool, accommodations must be available if the tool cannot be used for any reason.
Be explicit about how you invite AI into the classroom and its use during the assessment phase.
The use of AI should not be directly assessed unless a direct and clear link to the Program of Studies is established.
Rubrics and other methods of quantifying student achievement should not be impacted by AI.
Due to the rapid developments that are occurring in the area of AI development, having specific, concrete policies on AI may lead to a "web we cannot untangle" at a later date as AI technology changes. Policies and frameworks need to remain flexible and responsive (Klein, 2023).
What does it mean?
What it means to create in relation to tools that augment human abilities isn't a new frontier, but how we think about it is changing.
"Prompt engineering" means asking the right questions of large language models (LLMs) to get higher-quality responses.
A note for Alberta Educators
4.4.0.0 IMMEDIATE DIRECTIVES
4.4.0.1 Be it resolved that student safety and data privacy should be primary considerations in the use of artificial intelligence tools in the classroom. [2023]
4.4.0.2 Be it resolved that artificial intelligence tools used in schools should be evaluated before implementation for ownership of data, bias, discrimination, accuracy and potential for harm. [2023]
4.4.0.3 Be it resolved that understanding of artificial intelligence benefits and concerns, including algorithms and data collection/use, should be part of technology use in schools. [2023]
A note on First Steps
TeachAI.org provides a series of steps and resources that teachers, schools and divisions can walk through to make sound initial decisions on AI.
Stage 1, described in the image to the left, uses specific wording that should be unpacked carefully. Any framework or policy that is created needs to be flexible, as we are still in the early days of AI and its impact on education.
The risks discussed in Stage 1 are outlined in the second image. Note that these centre on concepts like:
plagiarism
accountability
privacy
overreliance
societal bias
The framework contained in this page is an evolving document that will attempt to address all of these points and more as we, as educational professionals, navigate AI in the educational space from the perspective of educators and students.
A note on Bias
Bias exists in all sources of information, and AI is no different. It can be traced using four basic assumptions:
Software and Hardware are made by humans.
Humans have several biases that will influence development decisions.
Those biases, therefore, transfer to the software and hardware being developed.
The software or hardware provides a consistent experience to the user that includes built-in biases.
For this reason and others, including the overall faults possible in the algorithms that drive AI and other software, a "trust but verify" approach is advisable, as with all other research conducted by staff and students.
Be aware that not all AI systems are created equal: do your homework on what each system uses as its basis of information, for example, the language model the tool is built on. As with all research methods, information should be verified against multiple sources to improve accuracy. This also needs to be taken into consideration when weighing the potential impacts of students accessing, using and participating in the use of AIs and LLMs.
A note on Indigenous Perspectives
Section under construction. There is currently not a significant body of research on Artificial Intelligence, Indigenous Perspectives and K-12 Education.
Guidelines for Indigenous-centred AI Design v.1 (Lewis et al., 2020, pp. 20-22)
Locality: Indigenous knowledge is often rooted in specific territories. It is also useful in considering issues of global importance. AI systems should be designed in partnership with specific Indigenous communities to ensure the systems are capable of responding to and helping care for that community (e.g., grounded in the local) as well as connecting to global contexts (e.g. connected to the universal).
Relationality and Reciprocity: Indigenous knowledge is often relational knowledge. AI systems should be designed to understand how humans and non-humans are related to and interdependent on each other. Understanding, supporting and encoding these relationships is a primary design goal. AI systems are also part of the circle of relationships. Their place and status in that circle will depend on specific communities and their protocols for understanding, acknowledging and incorporating new entities into that circle.
Responsibility, Relevance and Accountability: Indigenous people are often concerned primarily with their responsibilities to their communities. AI systems developed by, with, or for Indigenous communities should be responsible to those communities, provide relevant support, and be accountable to those communities first and foremost.
Develop Governance Guidelines from Indigenous Protocols: Protocol is a customary set of rules that govern behaviour. Protocol is developed out of ontological, epistemological and customary configurations of knowledge grounded in locality, relationality and responsibility. Indigenous protocol should provide the foundation for developing governance frameworks that guide the use, role and rights of AI entities in society. There is a need to adapt existing protocols and develop new protocols for designing, building and deploying AI systems. These protocols may be particular to specific communities, or they may be developed with a broader focus that may function across many Indigenous and non-Indigenous communities.
Recognize the Cultural Nature of all Computational Technology: All technical systems are cultural and social systems. Every piece of technology is an expression of cultural and social frameworks for understanding and engaging with the world. AI system designers need to be aware of their own cultural frameworks, socially dominant concepts and normative ideals; be wary of the biases that come with them; and develop strategies for accommodating other cultural and social frameworks. Computation is a cultural material. Computation is at the heart of our digital technologies, and, as increasing amounts of our communication is mediated by such technologies, it has become a core tool for expressing cultural values. Therefore, it is essential for cultural resilience and continuity for Indigenous communities to develop computational methods that reflect and enact our cultural practices and values.
Apply Ethical Design to the Extended Stack: Culture forms the foundation of the technology development ecosystem, or ‘stack.’ Every component of the AI system hardware and software stack should be considered in the ethical evaluation of the system. This starts with how the materials for building the hardware and for energizing the software are extracted from the earth, and ends with how they return there. The core ethic should be that of do-no-harm.
Respect and Support Data Sovereignty: Indigenous communities must control how their data is solicited, collected, analysed and operationalized. They decide when to protect it and when to share it, where the cultural and intellectual property rights reside and to whom those rights adhere, and how these rights are governed. All AI systems should be designed to respect and support data sovereignty. Open data principles need to be further developed to respect the rights of Indigenous peoples in all the areas mentioned above, and to strengthen equity of access and clarity of benefits. This should include a fundamental review of the concepts of ‘ownership’ and ‘property,’ which are the product of non-Indigenous legal orders and do not necessarily reflect the ways in which Indigenous communities wish to govern the use of their cultural knowledge.
Sources to be included:
Lewis, J. E., Abdilla, A., Arista, N., Baker, K., Benesiinaabandan, S., Brown, M., Cheung, M., Coleman, M., Cordes, A., Davison, J., Duncan, K., Garzon, S., Harrell, D. F., Jones, P.-L., Kealiikanakaoleohaililani, K., Kelleher, M., Kite, S., Lagon, O., Leigh, J., & Levesque, M. (2020). Indigenous Protocol and Artificial Intelligence Position Paper. Concordia.ca. https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf
A note on Ethics
Ethical considerations of AI use and inclusion are an ongoing topic. A recent study by Adams et al. (2023) identified some additional areas of ethical consideration to add to a previously existing study conducted in a pre-ChatGPT world (Jobin et al., 2019). Additional policy documents were included in this summarized analysis:
World Economic Forum (2019) Generation AI: Establishing Global Standards for Children and AI;
The Institute for Ethical AI in Education (IEAIED, 2021a) The Ethical Framework for AI in Education;
United Nations Educational, Scientific and Cultural Organization (UNESCO, 2021) AI and Education: Guidance for Policymakers; and,
United Nations Children's Fund (UNICEF, 2021a, 2021b) Policy Guidance on AI for Children.
Between Jobin et al. (2019) and Adams et al. (2023), the following areas were identified for ethical consideration involving AI in K-12 education:
(Jobin et al., 2019)
Transparency
Justice and fairness
Non-maleficence
Responsibility
Privacy
Beneficence
Freedom and autonomy
New items (Adams et al., 2023) were:
Pedagogical appropriateness
Children's rights
AI literacy
Teacher Well-being
Pedagogical appropriateness
AI use at different levels needs to be considered against what is developmentally appropriate for the student at each stage of their learning.
Children's rights
The rights of the student must be maintained. This includes items like privacy, data and FOIP concerns.
AI literacy
Students need to understand basic knowledge about the systems that they interact with and use. This includes items like bias.
Teacher Well-being
Being mindful of increased teacher workload is critical. Professional development is needed and time needs to be set aside to ensure this is able to occur.
FOIP and Personal Information
Most AI tools require some sort of sign-up process for users, whether they are students or teachers.
Teachers must be fully aware of the EULA that governs each application and the Privacy Impact that it has on student and Division information. For example, students must meet the minimum age required to use the application according to the EULA or Terms of Service/Use. Division policy regarding third-party applications, as seen to the right, must also be adhered to at all times.
Since AIs often take advantage of user data, this also must be taken into account when selecting and using these tools. In some cases, this will mean some systems will not be permitted for use.
FOIP Checklist for Third Party Apps
Do you have parental approval confirmed by a signed District FOIP form for all students who will be using the online tool or app?
The answer must be yes.
Have you notified parents about the use of the online tool or app?
The answer must be yes, and you should be able to show how and with what you notified parents (parent letter, class newsletter, posting on SchoolZone, parent-teacher information meeting, etc.).
What is your plan for those students who cannot use the app or online tool, as parents have not given consent?
You must have a reasonable plan.
How do you intend to clean up all the personal information stored on the online tool or app when the data is no longer needed?
You must have a reasonable plan and have tested that you can actually delete data that is uploaded to the online tool or app.
Does this online tool or app market directly to parents, other teachers or students?
If direct marketing is involved, the online tool or app must NOT be used.
What does this mean?
In EPSB, we already have policies for app use that we need to follow, and they apply to the use of AI tools with students.
Follow the existing directives in this area to ensure we remain FOIP compliant and keep parents informed of ed-tech use in our classrooms.
FOIP also needs to be considered by teachers when they are using an AI-powered tool in conjunction with student work. What is being submitted and what is being stored/processed?
Recommendations for Policy
In this domain, the recommendation is clear: follow Division and Provincial guidelines when dealing with third-party applications. Pay special attention to which tools use Single Sign-On (SSO), their Terms of Use and their Privacy Impact Assessments. You must have permission from the parents of your students to use these applications, as outlined in the Division policy above. Be prepared with an alternative method of instruction if permission is not acquired or is revoked.
Remember that third-party applications that require use of a Google Chrome extension will need to be manually approved by the Division's TIPS team to be activated.
Course Outline Policies
The University of Delaware (2023) has four prewritten policies on AI:
1. Use prohibited
2. Use only with prior permission
3. Use only with acknowledgement
4. Use is freely permitted with no acknowledgement
There are some challenges with Options 1 and 4, as academic journals are permitting AI as a citable, legitimate source. This issue is further compounded by style guides at the university level being unable to agree on whether a large language model like ChatGPT is a source or an author (Dobson, 2023).
Since AI is being integrated into search engines and other existing educational technologies that we expect students to use, prohibiting its use is simply not realistic.
Not acknowledging its use is equally problematic especially when it comes to evaluating student achievement.
The University of British Columbia requires a statement in the course syllabus on whether or not AI is permitted and how it may be used.
Recommendations for Policy
Regarding AI use in the classroom, it is advisable to be explicit and transparent about your expectations regarding how AI is to be used.
Be mindful of what and how you are assessing, and how much impact AI can potentially have on the outcome of that assessment.
It is not advisable to prohibit its use or rely on detection methods. It is also not advisable to freely permit its use without acknowledgement as this does not provide opportunities for training or learning.
As the educational profession continues to be moulded by these new developments, it is advised that a policy of "Use only with prior permission" be adopted to create a stance of "Use only with direction/invitation." Teachers will still need to be mindful of how they are assessing, and careful about how a tool that is used but not detected may impact their ability to accurately report student performance on the outcomes being evaluated.
The University of British Columbia (2023) has adopted the stance that the use of AI is a course-level decision and states in its FAQ on ChatGPT: "If using ChatGPT and/or generative AI tools has been permitted by the instructor, then instructors should make sure to convey the limitations of use and how it should be acknowledged and use should stay within those bounds."
The University of Alberta has now adopted an AI statement for all course outlines:
Use of AI on Assessment Tasks
AI is a tool that may aid learning. Any and all use of AI and AI tools in assessment tasks must be transparently and honestly referenced (see University of Waterloo AI-generated content and citation; How to Cite Chat GPT). In addition to the standard reference, include a note indicating what prompt or prompts were used. Failure to do so may be considered an act of cheating and a violation as outlined in the relevant sections of University of Alberta (November 2022) Code of Student Behaviour. When using AI, keep “hallucinations” in mind. Do not rely solely on AI as a source of information. (University of Alberta, 2023)
Course Outline Example Statement
Use of Artificial Intelligence (AI) and other related technologies
With the rise of more advanced forms of AI, especially generative chat models, it is important to acknowledge when these tools are being used and when their use may be detrimental to the learning process. Students are to check in with their teachers if and when AI use may be appropriate.
If AI use is identified during a summative assessment where it was not permitted or discussed with the teacher beforehand, it will be treated as an “Academic Integrity” issue as outlined in the School Assessment Plan.
What does this mean?
The acceptable use of AI needs to be made clear to students.
Teachers will be required to have a statement on their course outlines indicating how they intend AI and similar software to be used in their classroom.
It is recommended to state that it is "assignment specific" and up to the direction of the teacher.
It is similar to when a student is and is not allowed to use a calculator in math class.
Assessment, Cheating, Academic Honesty and "Detection"
AI is best at tasks that are lower down on Bloom's Taxonomy.
It retells what it has found based on human-trained patterns.
It can mimic higher levels of Bloom's, depending on how well it is prompted, but there are ways to add a "distinctive human skill" to the process to help create a more authentic assessment. This is demonstrated in the infographic on the right.
Miller, M. (2023, August 29). AI in the classroom: What's cheating? What's OK? Ditch That Textbook. https://ditchthattextbook.com/ai-cheating/
Miller, M. (2022, December 17). ChatGPT, Chatbots and Artificial Intelligence in Education. Ditch That Textbook. https://ditchthattextbook.com/ai/
Plagiarism ultimately has its roots in a failure to cite your sources and inspiration. If it is properly referenced, then what challenge remains?
We also need to consider the complexity of the task we are asking students to do and what specifically we are assessing against the Program of Studies.
Detection models are an arms race of "catch-up" and ultimately aren't reliable enough (Gorichanaz, 2023). There is a risk of false positives that could be detrimental to students and to the teacher-student relationship (Introna, 2016).
Originality Reports in Google Classroom remain a more reliable tool for catching plagiarism, especially between students.
Dr. Sarah Eaton, working with the University of Calgary, summarized what a "postplagiarism" academic world may look like in the infographic to the left.
This infographic was included in INT D 710: Ethics and Academic Citizenship at the University of Alberta.
The YouTube video "Cheating is a Skill" has a title meant to provoke conversation. It provides a short, roughly six-minute overview of some of the challenges of looking at AI in education as simply a shortcut or a tool used to cheat.
It asks questions about assessment practices, digital equity and best practices.
Recommendations for Policy
Continue to utilize tools like Originality Reports in Google Classroom, and continue having conversations and establishing relationships with students before large-scale assessments where AI use is encouraged or probable. Clear expectations on the use of AI in student work and assessment will also potentially reduce misuse if correct use is modelled and encouraged.
Administrative Task Automation
There are several administrative tasks that teachers are required to perform, and there may be ways, now and in the near future, to automate sections of a task or whole tasks (Mollick, 2023). While undergoing this work, educators must be mindful of ALL legal responsibilities of the teaching profession, as well as safeguarding student information and wellbeing. Some examples may include:
AI-assisted lesson plans and teaching resources
Assistive technologies for writing, like GrammarlyGO or ChatGPT
Digital assistants or planning software
Teachers should be aware that any generated material needs to be scrutinized for quality and accuracy. Also, just like students, teachers will need some skill in prompt engineering: asking the right questions and follow-up questions.
Another item to consider is what information we are feeding into the applications we are using. For example, we need to be mindful of how AI-powered tools and other EdTech tools use student data, and make sure we are making choices that protect student information. Teachers need to review any privacy documentation and Terms of Use/Service to ensure proper steps are being taken to safeguard student information. Student information should be completely redacted prior to uploading any student work, and even then only after a conversation with your principal or other decision-making authority.
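As one concrete illustration (a minimal sketch under stated assumptions, not a Division-endorsed tool), simple redaction of known student names might look like the following; the roster, placeholder and helper function are hypothetical:

# A minimal sketch of redacting known student names before sending work to an
# AI-powered tool. The roster and placeholder are hypothetical; real redaction
# should be reviewed with your principal and may need more than name removal.
import re

def redact(text: str, names: list[str], placeholder: str = "[STUDENT]") -> str:
    """Replace each known student name with a neutral placeholder."""
    for name in names:
        # Word boundaries (\b) avoid clobbering substrings inside other words.
        text = re.sub(rf"\b{re.escape(name)}\b", placeholder, text, flags=re.IGNORECASE)
    return text

roster = ["Alex Chen", "Alex", "Priya"]  # hypothetical class roster, longest names first
essay = "Alex wrote that Priya's hypothesis was supported by the data."
print(redact(essay, roster))
# -> "[STUDENT] wrote that [STUDENT]'s hypothesis was supported by the data."

Listing longer names before shorter ones prevents a partial match ("Alex") from breaking a full-name match ("Alex Chen").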
Based on Aleksandr's work above as well as the graduate work completed by the maintainer of this website, this infographic was expanded and tailored for educators and their specific context.
Recommendations for Policy
Automation, or use of an AI for part or all of a task that is the responsibility of a teacher, remains the responsibility of the teacher, regardless of the assistive technology that may be employed. Teachers remain accountable for the information they access, produce and share. In some instances, the use of these technologies may increase the responsibilities and duties an educator may be held accountable for; educators should weigh the benefits and drawbacks of the process at all times.
Please share your ideas with an Administrative Team member to get some feedback and guidance on the above.
What does it mean?
The work that teachers do is challenging.
Teachers remain bound by, and responsible to, the TQS and Codes of Conduct outlined by the ATA, the Government of Alberta and their employer.
While AI, automation and assistive technology can help us in our duties, it cannot be responsible or accountable for our work; that remains in the domain of the educator.