A Primer on AI and Education
A primer on artificial intelligence to deepen your understanding of AI and its implications for learning, teaching, and education.
AI is rapidly transforming many aspects of our lives, and education is no exception. Educators, students in pre-K–12 and higher education, workers seeking to upskill or reskill, and informal learners of all ages increasingly engage with digital experiences. As a result, they generate enormous amounts of multimodal data, such as logfiles; audio, video, and text files; and eye tracking data. Analyzing those data through AI techniques such as machine learning, computer vision, and natural language processing can answer instructional and administrative questions, discover new and non-obvious relationships and patterns, predict learning outcomes, and automate low-level decisions.
Simultaneously, the worldwide AI in Education (AIED) market is growing rapidly and is projected to exceed $20 billion by 2027. Despite the growing interest in and market for AIED, there is also a great deal of confusion and misunderstanding about what AI is and how it can be used in education. A nationally representative sample of teachers, principals, and district leaders surveyed by the EdWeek Research Center in May and June of 2023 found that only 1 in 10 educators say they know enough basics about AI to teach it or use it to some degree in their work. And while most educators agree that AI is a priority, almost 9 in 10 say they have not received any professional development on how to incorporate AI into their work in K-12 education.
Many prominent scholars, organizations, and governments have summarized the potential benefits of AIED. For example, the U.S. Department of Education Office of Educational Technology identified the following examples of AI as enablers for education.
New forms of interaction: Students and teachers can speak, gesture, sketch, and use other natural human modes of communication to interact with computational resources and each other. This can provide support to students with disabilities and help educators address variability in student learning. For example, AI-enabled educational technology can adapt to each student's English language abilities with greater support for the range of skills and needs among English learners.
Human-like responses: AIED can generate human-like responses, which can be helpful for students who need extra support or who are learning at their own pace.
Powerful forms of adaptivity: AIED can adapt to a student's learning process as it unfolds step-by-step, not simply providing feedback on right or wrong answers. This can help students continue strong progress in a curriculum by working with their strengths and working around obstacles.
Enhanced feedback loops: AIED can increase the quality and quantity of feedback provided to students and teachers and suggest resources to advance their teaching and learning.
Support for educators: Educators can be involved in designing AI-enabled tools that help them do their jobs better and enable them to better engage with and support their students.
One of the 14 Grand Challenges for Engineering identified by the National Academy of Engineering is to advance personalized learning. The goal of this challenge is to create resources that develop self-directed, life-long learning. Personalized learning is learner-centered, shaped by the learner's needs and interests, and connected meaningfully to peers, mentors, and the community. It is an enabler of educational equity, ensuring that all learners have access to the resources and rigor they need at the right moment on their learning path to succeed in school, work, and life, regardless of their race, gender, ethnicity, language, disability, sexual orientation, family background, or family income.
Over the past 15 years, the field of personalized learning has undergone significant advancements. This includes the increasing use of AIED to tailor learning experiences to individual students' needs and interests. There has also been a shift towards mobile and ubiquitous learning environments that facilitate education anytime, anywhere. Researchers and educators have placed greater emphasis on learner motivation and engagement as key factors in effective personalized learning. Personalized systems have become more sophisticated at accounting for multiple dimensions of learner differences, whether in knowledge, skills, or goals.
Personalized learning requires four essential capabilities, which are made possible by AIED.
Multimodal experiences and a differentiated curriculum based on universal design for learning principles: This means providing students with a variety of ways to learn and engage with the material, and designing instruction that meets the needs of all learners.
Student agency in orchestrating their learning: This means empowering students to take ownership of their learning and make choices about what they learn, how they learn it, and how they demonstrate their understanding.
Community and collaboration: This means creating a learning environment where students can support and learn from each other.
Guiding student progress through the curriculum based on diagnostic assessments: This means using ongoing assessment to track student progress and identify areas where they need additional support.
Juxtaposed to the opportunities of AIED are challenges that can hinder its successful performance in complex real-world environments such as classrooms and schools. These challenges, based on work completed by the National Academies, include:
Brittleness: AIED systems are typically only capable of performing well in situations that are covered by their programming or training data. When presented with new or unexpected situations, they may fail to perform as intended.
Perceptual limitations: AIED algorithms can struggle with reliable and accurate natural language processing in noisy environments. This can make it difficult for them to interact effectively with the real world.
Hidden biases: AIED systems trained on a limited set of data, or on data that itself contains biases, may reflect those biases in their predictions and outputs. This can lead to unfair or discriminatory outcomes.
No model of causation: AIED systems typically do not have a causal model of the world. This means that they cannot predict future events, simulate the effects of potential actions, reflect on past actions, or learn when to generalize to new situations.
To help the field understand and organize the different ways in which AI is being used in education, scholars have developed frameworks and taxonomies of AIED. One such taxonomy divides AIED systems into three categories: student-focused, teacher-focused, and institution-focused.
Student-focused AIED systems include intelligent tutoring systems, AI-assisted apps, simulations, essay-writing tools, chatbots, and formative assessment tools. Most of these systems aim to provide personalized and adaptive learning.
Teacher-focused AIED systems include plagiarism detection, curriculum recommendation, classroom monitoring, assessment automation, and teaching assistants. These systems aim to augment human teachers.
Institution-focused AIED systems include tools for admissions, scheduling, identifying at-risk students, and exam proctoring. These systems support administration.
Still other scholars have defined what educators need to know about AI. One such list identifies eight important things educators need to know.
Generative AI is fundamentally different from prior technologies. It is very different from classical, rule-based applications in how it works and what it can do. Traditional AI systems are rules-based, meaning they only do what they are programmed to do. Generative AI systems, on the other hand, are built with machine learning algorithms that extract patterns from enormous data sets. This information is captured in digital neural networks and when a request is made to the AI model, algorithms use the information in the network to generate responses.
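The contrast can be made concrete with a toy sketch. Even a tiny bigram model, trained on a few invented sentences (illustrative only, not a real training corpus), extracts word-to-word patterns from its data and then generates new sequences from those patterns, loosely mirroring at miniature scale what generative models do with enormous data sets.

```python
import random

# Hypothetical toy "training data"; the words are illustrative only.
corpus = (
    "students learn best when teachers give timely feedback "
    "teachers give students practice and feedback "
    "students give teachers feedback on lessons"
).split()

# "Training": extract word-to-next-word patterns from the data.
patterns = {}
for current, nxt in zip(corpus, corpus[1:]):
    patterns.setdefault(current, []).append(nxt)

def generate(start, length=6, seed=0):
    """Generate new text by sampling from the learned patterns."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        choices = patterns.get(word)
        if not choices:
            break
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

print(generate("students"))
```

Unlike a rule-based program, nothing here was hand-coded to produce any particular sentence; the output emerges from statistical patterns in the data, which is also why such systems can produce fluent but unintended text.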
Generative AI can accomplish many tasks, often with surprising proficiency. The LLMs summarized in Table 1 are just the tip of the iceberg. Don't underestimate what generative AI can do now and what it will be able to do in the near future.
Educators have both high hopes and grave fears about AI. School communities—teachers, students, parents, administrators, and staff—and local and state policymakers need to work together to address these hopes and fears.
Generative AI is very different from human intelligence. It is important to understand these differences in order to not perpetuate misconceptions. Avoid attributing human characteristics to AI tools and look for ways AI can augment human capacities.
Generative AI has many limitations, and its use involves risks. Don't trust AI to provide accurate, unbiased, and appropriate information, or to protect the privacy and security of its users. Additional limitations include: (a) outdated information; (b) fabricated facts; (c) bias; (d) lack of cultural and linguistic sensitivity; (e) inappropriate responses to social and emotional issues; (f) deception and misinformation; (g) lack of transparency and explainability; (h) privacy and security concerns; and (i) providing students with answers to questions without requiring them to think critically. Schools and governments need to put safeguards in place.
Students can use generative AI in ways that enhance learning and in ways that hinder learning. Design lessons that use AI to engage students in meaningful learning experiences; avoid having AI do students' work for them.
AI can enable teachers to do more of what only teachers can do. Generative AI can provide powerful tools to enable teachers to be more effective with their students. Educators need opportunities to learn to use AI effectively.
Educators need to embrace AI to prepare students for their futures. AI is becoming increasingly important in the workplace, and students need to be prepared to use AI in their future careers. As a sentiment widely attributed to John Dewey puts it, "If we teach today's students as we taught yesterday's, we rob them of tomorrow."
A different way of understanding AIED is through comparisons, analogies, and metaphors. For example, scholars have compared AI to moonlight and human accomplishments to sunlight.
As the moon reflects the radiance of the sun, AI reflects what humans are capable of—both truthful insights and biased misinformation. AI is trained using existing data from the World Wide Web, which can lead to the problem of "garbage in, garbage out." Additionally, AI can generate responses that sound plausible but are factually incorrect, such as fabricating citations of research articles that do not exist.
Other scholars have compared AIED to vitamins, candies, antibiotics, and painkillers.
Vitamins: AIEDs are like vitamins in that they provide essential components to learning, such as motivation, focus, practice, and feedback. Just as a healthy lifestyle requires a variety of nutrient sources, effective learning demands a mix of AIED and real-world, hands-on activities. Relying solely on AIED without real-world experiences would be like consuming only vitamin supplements without a balanced diet.
Candies: AIED can also be like candies in that they can be fun and engaging, making the learning process more enjoyable. However, just as excessive candy consumption can lead to health issues, overindulgence in AIED without substance can hinder cognitive development. The key is to strike a balance between the allure of AIED and the depth of meaningful content.
Antibiotics: AIED can also be like antibiotics in that they target specific learning challenges and provide tailored solutions. Similar to how antibiotics combat infections, AIED can act as an educational antibiotic, addressing learning difficulties directly and offering tailored guidance. Just as antibiotics are prescribed with precision, so too must AIED solutions.
Painkillers: AIED can also be like painkillers by alleviating the pain of learning. When students encounter complex topics, AIED can provide relief by simplifying concepts, offering hints, and recommending alternative examples and experiences. However, just as painkillers provide temporary relief and not a permanent solution, AIED should be used as aids and scaffolds alongside the development of problem-solving skills and persistence.
In the January 25, 2024, episode of "5 Minutes With," the Educating All Learners Alliance and I discussed the pressing questions about algorithm builders, diversity, and the necessity for educators to responsibly navigate AI, emphasizing the importance of skepticism to ensure fairness and equity in the education community.
When I began writing on ethics and AI in the late 2010s, most of the articles I read focused on algorithmic bias, noting that the data used to train algorithms did not include a full and balanced representation of subgroups. For example, when Joy Buolamwini and Timnit Gebru investigated the racial, skin type, and gender disparities embedded in commercially available facial recognition technologies, they revealed how those systems largely failed to differentiate and classify darker female faces while successfully differentiating and classifying white male faces. The poor classification for darker female faces stemmed from the data sets used to develop the algorithms, which included a disproportionately large number of white males and few Black females. When they used a more balanced data set to develop the algorithm, it produced more accurate results across races and genders.
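The mechanism behind such disparities can be sketched with a deliberately synthetic example (all numbers below are invented for illustration, not drawn from any real study): a simple threshold classifier fit on data dominated by one group performs well on that group and poorly on the under-represented one.

```python
# Synthetic one-feature data: each example is (feature value, label).
def make_group(pos_x, neg_x, n_per_class):
    return [(pos_x, 1)] * n_per_class + [(neg_x, 0)] * n_per_class

group_a = make_group(pos_x=2.0, neg_x=0.0, n_per_class=45)  # well represented
group_b = make_group(pos_x=0.5, neg_x=-1.5, n_per_class=5)  # under-represented

train = group_a + group_b

# "Training": place a decision threshold midway between the class
# means -- a choice dominated by the majority group's examples.
pos_mean = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
neg_mean = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
threshold = (pos_mean + neg_mean) / 2

def accuracy(group):
    return sum((x >= threshold) == bool(y) for x, y in group) / len(group)

print(f"group A accuracy: {accuracy(group_a):.2f}")  # perfect for the majority
print(f"group B accuracy: {accuracy(group_b):.2f}")  # half wrong for the minority
```

The same classifier trained only on group B's data would classify group B perfectly; the disparity comes entirely from the unbalanced composition of the training set, mirroring the pattern Buolamwini and Gebru documented.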
I agree that algorithmic bias is important, but it is not the whole story. What makes the work that Chris Dede, Beth Holland, Michael Walker, and I have done a significant contribution to the field is our holistic approach. In our most recent publication, "The Cyclical Ethical Effects of Using Artificial Intelligence in Education," we present a synthetic review of the literature on the ethics and effects of using artificial intelligence in education, which reveals five qualitatively distinct and interrelated factors associated with access, representation, algorithms, agency and interpretations, and citizenship, as depicted in Figure 1.
Figure 1. Five qualitatively distinct and interrelated factors associated with access, representation, algorithms, agency and interpretations, and citizenship
These factors create divides and ultimately cycles in which some categories of people benefit unduly, and others lose out, as depicted in Figure 2. For those who lose out due to discrimination, unequal access to power and opportunity, and other unfair or unjust practices, the divides can create a vicious cycle that perpetuates and potentially amplifies structural biases in teaching and learning, with significant impacts on life outcomes. However, increasing human responsibility and control over these divides can create a virtuous cycle that improves diversity, equity, and inclusion in education and increases the likelihood of positive impacts on life outcomes.
Figure 2. The cyclical effect of using artificial intelligence in education
We open our analysis by probing the ethical effects of algorithm divides and how teams of humans can plan for and mitigate bias when using AI tools and techniques to model and inform instructional decisions and predict learning outcomes.
We then analyze the upstream divides that feed into and fuel the algorithmic divide:
Access: Who does and does not have access to the hardware, software, and connectivity necessary to engage with AI-enhanced digital learning tools and platforms?
Representation: What factors make data either representative of the total population or over-representative of a subpopulation's preferences, thereby preventing objectivity and biasing understandings and outcomes?
After that, we analyze the divides that are downstream of the algorithmic divide:
Agency and Interpretation: How do learners, educators, and others understand the outputs of algorithms and use them to make decisions?
Citizenship: How do the other divides accumulate to impact interpretations of data by learners, educators, and others, in turn influencing behaviors and, over time, skills, culture, economic, health, and civic outcomes?
We conclude the article by discussing ways to increase educational opportunity and effectiveness for all by mitigating bias through a cycle of progressive improvement.
Of the many definitions of artificial intelligence (AI), the one provided by UNICEF captures the technical and legal dimensions of AI in plain and actionable language.
AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context.
Traditional AI is a subset of artificial intelligence that focuses on performing specific tasks using predefined algorithms and rules. It is trained on large datasets of labeled (i.e., structured) data to learn the patterns in the data and use them to make predictions or generate outputs. Traditional AIs are experts in a single activity or a restricted set of tasks; examples include voice assistants like Siri and Alexa, recommendation engines on Netflix and Amazon, and Google's search algorithm. These systems are trained to follow specific rules and do a particular job well, but they cannot create anything new.
Generative AI is the next phase in the evolution of artificial intelligence, focusing on creating models that generate new content, such as text, images, music, code, and video. The key difference between traditional AI and generative AI is that generative AI can create novel things.
A recent report by McKinsey & Company estimates that generative AI could add $2.6 to $4.4 trillion annually to the global economy, increasing the impact of all AI by 15 to 40 percent. About 75% of this value is expected to come from four key areas: customer operations, marketing and sales, software engineering, and R&D. Generative AI is also expected to have a significant impact across all industry sectors, with the greatest potential to transform knowledge work by automating tasks, augmenting employee performance, and creating new opportunities. Generative AI is likely to have the biggest impact on knowledge work activities that involve decision-making and collaboration. The capacity to automate activities that require applying expertise has increased from 25% in 2017 to almost 60% in 2023, while the ability to automate managing and developing talent has jumped from 15% to almost 50% over the same period.
One category of generative AI models is the foundation model. Foundation models are trained on massive datasets to acquire a broad knowledge base that can be adapted to specific purposes. This self-supervised learning method involves the model recognizing patterns and relationships within the training data.
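A toy sketch of the self-supervised idea, using invented sentences: the model's training targets are drawn from the data itself (here, a word masked out between its two neighbors), so no human-labeled examples are required.

```python
from collections import Counter, defaultdict

# Illustrative sentences; stand-ins for a massive unlabeled corpus.
corpus = [
    "the model learns patterns from data",
    "the model predicts the next word",
    "the teacher reviews the model output",
]

# Self-supervision: build a table of which words appear between
# which pairs of neighbors -- the "labels" come from the text itself.
context = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for left, mid, right in zip(words, words[1:], words[2:]):
        context[(left, right)][mid] += 1

def fill_blank(left: str, right: str) -> str:
    """Predict the masked word between two context words."""
    return context[(left, right)].most_common(1)[0][0]

print(fill_blank("the", "learns"))  # predicts "model"
```

Real foundation models apply the same principle with neural networks over trillions of tokens, which is what lets them be adapted to many downstream tasks.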
Large Language Models (LLMs) are a category of foundation models that are extensively trained on text data. LLMs are versatile and can undertake a wide array of tasks, such as article writing, question answering, and unstructured data analysis. Table 1 summarizes some of the best-known LLMs.