AI in Education

This Google Site was originally created to complete two tasks:

It has now expanded into an ongoing resource for educators on the subject of Artificial Intelligence and is constantly updated with new information. Please bookmark it and check back often.

This website is currently maintained by: Thomas Rogers 

Feel free to contact me with questions, comments and feedback.

Site Last Updated: 13 December 2024

Introduction

What is commonly referred to as Artificial Intelligence (AI) has made, and will continue to make, its way into the existing educational technologies (Ed Tech) that we use in our classrooms. Currently, there exists no realistic policy to completely exclude its use from the software that we have come to rely on in the profession of teaching. Teachers, schools and Divisions need to be ready to provide a direction or framework to help students and teachers navigate this new reality. These guidelines should focus on use by humans rather than trying to regulate every variation of the technology.

Educational technology companies continue to adopt AI at a rapid pace, and it is unlikely that we will withdraw from existing technologies that choose to integrate it, let alone shy away from potentially new and better ways of providing instruction or handling administrative tasks. The broader technology industry has also made significant changes to accommodate AI, such as the first change to the Windows keyboard in 30 years and the addition of the Neural Processing Unit (NPU) to computer architecture.

The other point of consideration is our role in the creation of AI and how it can help us challenge some deep-rooted issues of an antiquated education system. Consider the following:

Education stands to be changed significantly by the presence of AI, whether we want it in our classrooms or not. Many Ed Tech tools that we have already invited into our classrooms now have more robust AI built in, and more are adopting this technology each month. Some that directly relate to Edmonton Public Schools include:

A framework is required to navigate this landscape that is responsive to the changes still occurring in this space, while also maintaining the rights of the student and the integrity of the educational profession.

The "Problem"

The perception among many educators is that AI arrived in the sphere of education before we were "ready" for it, and its rapid pace of change makes it daunting to keep up. While AI has been around for longer than many educators may realize, the emergence of language models and more "human-like" interactions has shaped our impressions as a society. There is a need to create guidelines, policies or frameworks to help us make decisions that are ethical, informed and in the best interest of the student (Carvalho et al., 2022). In short, a starting point is needed to begin our relationship with AI as educators.

The other potential problem that could be addressed is the longstanding concern over the stagnation of some methods of instruction within the profession. The COVID-19 pandemic showed that the profession of education was susceptible to disruption when in-person direct instruction was not possible. The general availability of AI like ChatGPT has the potential to be more disruptive than COVID-19, smartphones or even Chromebooks. Since a step backward or a pause is neither likely nor possible, educators will be forced, much as the pandemic forced them, to examine how they teach and to change their methods of instruction, or face another battle to maintain the status quo.

“Did ChatGPT kill assessments? They were probably already dead, and they’ve been in zombie mode for a long time. What ChatGPT did was call us out on that.”

Richard Culatta, CEO of the International Society for Technology in Education (ISTE)

Lastly, as research is conducted, a growing body of evidence suggests that students and teachers can benefit from the use of chat-based AI in a variety of settings and applications (Wu & Yu, 2023).

Carvalho, L., Martinez-Maldonado, R., Tsai, Y.-S., Markauskaite, L., & De Laat, M. (2022). How can we design for learning in an AI world? Computers and Education: Artificial Intelligence, 3, 100053. https://doi.org/10.1016/j.caeai.2022.100053
Heaven, W. D. (2023, April 6). ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/

Wu, R., & Yu, Z. (2023). Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. British Journal of Educational Technology, 00, 1–24. https://doi.org/10.1111/bjet.13334

AI has been described using human characteristics more than any other technology, inviting comparison (Placani, 2024). This leads us to conversations around deeper and more complex topics surrounding how we as humans relate to technology. Some of these lenses include:

As Pickering et al. (2017) suggest, there are several factors that impact our trust in technology, as illustrated in their paper.

We tend to base our trust in technologies on the level of regulation or control we have over it and the perceived risks of its use.

Are we seeing an “unregulated human actor” in AI when we get concerned about this technology working its way into domains we traditionally see as exclusively human?

What we have is a technology that we describe with human-like qualities, but that is not human and is nothing like a human. We run the risk of "overattribution" when we assign human capabilities to the machine (Marcus & Luccioni, 2023).

Placani, A. (2024). Anthropomorphism in AI: Hype and fallacy. AI and Ethics. https://doi.org/10.1007/s43681-024-00419-4
Marcus, G., & Luccioni, S. (2023, April 17). Stop treating AI models like people. Marcus on AI. https://garymarcus.substack.com/p/stop-treating-ai-models-like-people

"Out of this [comparing computers to people] came the concept of artificial intelligence and the other anthropomorphic forms now commonly used to describe computer performance. We suggest that you must impose clearly unrealistic limits on your own abilities before you can make such comparisons. The machine does not understand. It cannot know or plan. It has no judgment that you can trust to look out for your best interests. The machine has no such interests. This is a problem in semantics carried to a dangerous extreme. Computer simulation (essentially a complex of labels) will never be the thing at which it points in our flesh-and-blood universe. No complex of computer circuits nor of our brain cells can ever be that thing which the symbols try to describe."

-Frank Herbert

Herbert, F., & Barnard, M. (1981). Without me you’re nothing: The essential guide to home computers. Pocket Books.
Pickering, J. B., Engen, V., & Walland, P. (2017). The Interplay Between Human and Machine Agency. In M. Kurosu (Ed.), Human-Computer Interaction. User Interface Design, Development and Multimodality (Vol. 10271, pp. 47–59). Springer International Publishing. https://doi.org/10.1007/978-3-319-58071-5_4

Proposed Solution

The Proposed Solution is to establish a framework that gives Artificial Intelligence a place to be included while promoting long-overdue reform of educational practices we know are not effective, as well as the exploration of new ways of learning. It is an opportunity to encourage a transition to more engaging and meaningful work for students without a significant increase in teacher workload.

"Traditional assessment methods need to evolve to account for the availability of AI language models. Developing assessment approaches that focus on critical thinking, problem-solving, synthesis of information, and direct citation of sources can help ensure that students are not solely relying on AI-generated content." (Dobson, 2023)

The Proposed Solution is outlined on the Suggested Framework page of this website.

Dobson, T. (2023). Lecture: Academic Integrity and Generative AI - Engaging the Complexities at an Institutional Level [PowerPoint Slides], University of British Columbia

Implementation Plan

The first phase of implementation is to roll out the framework beginning September 2023 in three key areas:

It will also involve the deployment of this website to assist teachers and administrators in implementing this framework and staying updated on key information in a centralized and curated location. Resources will be collected, reviewed and deployed based on the needs of the school, both students and staff.

Implementation will be reviewed each month or on a case-by-case basis as needs arise. The "Three Horizons Projection on AI in Education (Aug 2023)" will be used as a model to attempt to predict and scale when specific supports may be needed or become available at the school level.

Three Horizons on AI in Education (August 2023)
Adams, C., & Groten, S. (2022). A TechnoEthical Framework for Teachers. Faculty of Education, University of Alberta.
Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542–570. https://doi.org/10.1111/ejed.12533
McKinsey & Company. (2009, December). Enduring ideas: The three horizons of growth. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/enduring-ideas-the-three-horizons-of-growth
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
UNESCO. (2023). Artificial intelligence and the Futures of Learning. https://unesdoc.unesco.org/ark:/48223/pf0000376709

Risks and Mitigation Strategies

The primary risk, as far as the Division is concerned, centres on matters of data, privacy and consent. This set of "instrumental technoethics" has already been put in place by the Division (Adams & Groten, 2022). These documents are referenced on the Suggested Framework page as well as the Division Resources page.

The secondary risk is the possible impact on staff and student learning and well-being in the classroom. It is a very close second, and comes second only because of the existing legal policies that govern the work of educators. These risks are also somewhat difficult to predict, as adoption has not yet occurred at a scale large enough for the impact to be observed (UNESCO, 2023). Further exploration is needed to see the impact through a "sociomaterial technoethical" lens, that is, how it will change and modify our behaviours, consciously and subconsciously (Adams & Groten, 2022).

Another factor that will need to be explored and considered is parent communication and involvement. All of the frameworks will need to be shared with parents, with a mechanism for feedback on their perceptions, knowledge and attitudes towards the introduction of more advanced AI in their child's education.

Adams, C., & Groten, S. (2022). A TechnoEthical Framework for Teachers. Faculty of Education, University of Alberta.
UNESCO. (2023, July 10). Generative Artificial Intelligence in education: What are the opportunities and challenges? https://www.unesco.org/en/articles/generative-artificial-intelligence-education-what-are-opportunities-and-challenges

"To fully harness the potential of high quality and safe generative AI, schools will need to be supported in understanding and appropriately managing a range of privacy, security and ethical considerations. Risk management should also be appropriate for the potential consequences. These consequences include the potential for errors and algorithmic bias in generative AI content; the misuse of personal or confidential information; and the use of generative AI for inappropriate purposes, such as to discriminate against individuals or groups, or to undermine the integrity of student assessments."

Australian Government. (2023, November 17). Australian Framework for Generative Artificial Intelligence (AI) in Schools - Department of Education, Australian Government. Department of Education. https://www.education.gov.au/schooling/resources/australian-framework-generative-artificial-intelligence-ai-schools

Evaluation and Measurement

This framework aims to achieve a positive co-existence with AI in education, specifically in the areas of instruction and school policy. The current level of planning in this framework is best described as "Projected Transformation": "scenarios and projections of present and future developments" (Flyverbom & Garsten, 2021).

The intent is that this introductory framework will become a possible basis for a longer-standing framework or policy that others in the Division or profession may find valuable. It is designed as a logical starting point to be modified and adapted as the landscape of AI in education continues to develop.

A series of Google Forms surveys will be conducted with the key groups (students, staff and parents) to collect feedback on this process.

Through observations, conversations and survey data:

Other indicators may show themselves throughout the school year.

Flyverbom, M., & Garsten, C. (2021). Anticipation and organization: Seeing, knowing and governing futures. Organization Theory, 2(3), 1–25.

Conclusions

The purpose of this Suggested Framework is to give a starting point for a technology that has the potential to greatly transform the profession of education as well as the curriculum. While it is too soon to say for certain what will and what should become policy, this Framework is designed to give an informed starting place for September 2023.

Supporting Materials

Please see the Information for Teachers and Division Resources pages, as well as the Works Cited pages. Citations are included throughout this site.