Pedagogical insights and strategies to navigate AI's impact in education
The AI Risk Measurement Scale (ARMS) was designed by Katharina De Vita (Manchester Metropolitan University, UK) and Gary Brown (University of Greenwich, UK). The ARMS tool was created to assist faculty in assessing the risk of students misusing generative AI (genAI) in their assignments.
ARMS classifies assessments into five categories, ranging from 'very low' to 'very high' risk, according to the probability and impact of genAI misuse on academic integrity. Each risk level is accompanied by descriptions and examples, enabling educators to better gauge the potential for genAI use in various academic tasks.
We encourage you to use the ARMS tool when designing assessments and other academic activities to evaluate the risk of generative AI misuse. This tool provides clear risk levels and examples to support integrity in academic work.
The Swiss Cheese model, adapted from James Reason's theory in risk management and subsequently applied to education by *Rundle et al., illustrates how academic integrity violations become more likely to occur when vulnerabilities (the holes in the cheese) align.
An example would be a course in which all evaluative milestones, such as midterm and final exams, consistently use identical formats and structures. Additionally, the fewer the evaluative components in a course (i.e., "cheese slices"—exams, assignments, etc.), the easier it becomes for gaps to align, as fewer layers stand between opportunity and an integrity violation.
A recommended practice is to diversify assessment formats across the course by incorporating a wider range and greater number of assignments and exercises.
This approach limits predictability, introduces more checkpoints, and makes academic misconduct less likely. Some tasks can be automatically graded, left ungraded altogether, or integrated into a larger assignment (such as deliverables or sprints) where general feedback assesses the overall task.
*Rundle, Kiata, Guy J. Curtis, and Joseph Clare. "Why Students Do Not Engage in Contract Cheating: A Closer Look." International Journal for Educational Integrity 19 (2023). https://doi.org/10.1007/s40979-023-00132-5.
Image created by Midjourney
Artificial Intelligence (AI) has brought about a sudden revolution in the world of education. In some cases, this transformation has rendered traditional evaluation tools less effective, and perhaps, obsolete.
Even if you haven't incorporated AI tools into your teaching methodologies, your students are already tackling coursework differently.
In response to these changes, it is recommended that you:
Assume that your students are proficient in using AI tools.
Provide explicit instructions regarding the appropriate use of AI tools.
This shift is most evident in take-home assignments. Assignments well-suited to the new AI context typically fall into one of these two categories:
1. AI-PROOF ASSIGNMENTS: The design and structure of these assignments are such that they inherently restrict the effectiveness of AI tools.
2. AI-INTEGRATED ASSIGNMENTS: These assignments are structured to include the use of AI as an integral part of their completion process.
However, there exists a third category of assignments, which we may call AI-INCOMPATIBLE ASSIGNMENTS. These are traditional assignments in which the use of AI tools is prohibited by the professor, as it would undermine the intended purpose of the exercise.
It's important to recognize that no reliable proctoring tools currently exist that can prevent or detect students' use of AI, and it remains uncertain whether there ever will be. This entails that accurate detection of AI use in AI-Incompatible assignments is not possible at this time, especially if they are completed at home, where in-person proctoring is not an option.
Consequently, faculty members are advised to reconsider and possibly re-structure these AI-Incompatible assignments into AI-proof or AI-integrated formats.
We are here to assist you in reorienting your assignments to align with your learning objectives! Please contact us at learning.innovation@ie.edu to schedule an office hours meeting on Zoom.
Designing assignments with AI is pretty simple. Remember, as always, that the richer the information you give the system, the better the results will be. The prompt on the left, for instance, exhaustively details all the parameters ChatGPT-4 needs to consider in order to produce an assignment (right image).
AI is also great for introducing interactive activities in your courses! You only need to request the system to adopt a specific persona, depending on the activity you design. The following examples, provided by IE Professor Daniel Fernández-Kranz, show two complementary scenarios that you can incorporate into your teaching.
The first one is a ROLE-PLAY experiment, where AI is used to facilitate a stimulating debate showcasing the contrasting viewpoints on economics of Argentinian politicians Milei and Massa.
The prompts below, on the other hand, illustrate AI's capacity to create SIMULATION GAMES with the participation of invented personae:
Prompt 1: Simulate the impact of your monetary policy decisions
Prompt 2: Debate with a workers’ union leader about the minimum wage
Traditionally, students have been required to complete class readings before attending sessions, where professors would then elaborate on key points and provide deeper insights. However, with the growing availability of AI tools that can help students synthesize and explain texts, a new approach is emerging, one that flips this conventional sequence.
In this revised strategy, the professor first introduces and explains the contents of the readings during class, setting a clear interpretative framework from the outset. This approach allows students to engage with the material with a better understanding of key themes and concepts before they begin their own reading.
Following the in-class discussion, students undertake the actual reading, potentially with the assistance of AI tools such as ChatGPT or Copilot. These tools can provide summaries, explanations, and contextual insights, helping students process the text more effectively.
To ensure comprehension and accountability, short quizzes can be introduced at the beginning of the following class to assess students' grasp of the material. Additionally, these quizzes may serve as a useful complement to traditional roll calls while also encouraging active study.
AI does not need to replace the act of reading but rather can supplement and enhance it, offering students additional support and expanding their engagement in meaningful ways.
As we explained in our Useful AI tools section, Blackboard Ultra has introduced an AI system that facilitates generating rubrics (link here).
With AI making rubric creation more accessible, their role can now extend beyond outlining assessment criteria for students. One practical use is streamlining the feedback process. In Blackboard Ultra, once you have generated a rubric table, you can start by selecting the text from one of the cells and use it as a template. Then, with the support of your favorite AI platform, you can rephrase it in multiple ways in no time.
You can utilize these multiple versions as a departure point to create feedback for each student within the same performance range. The final step would be to refine each message to reflect individual strengths or areas for improvement.
AI is an effective instrument for developing grading rubrics tailored to the specific topic and characteristics of your course.
In this case, we provided ChatGPT-4 with a list of Learning Objectives for a course and asked the system to create a rubric based on them to assess a hypothetical class presentation.
Here is ChatGPT-4's response (shown in full in the video at the bottom):
It not only generated the rubric but also suggested a weighted grading scale for the presentation:
AI tools enable you to create lesson plans that distribute the contents of your session evenly. This will also ensure you allocate time for students' participation in a more controlled way.
For the example shown in the video below, we used the following prompt:
You are teaching an undergraduate course on International Relations divided into 15 sessions, each of them about 80 minutes long; session 4 is devoted to the European Union, its origins, development, structure and legal and institutional instruments;
Students have only an introductory knowledge of International Relations, mainly its history: World War I, World War II, and the early phases of the Cold War.
Create a lesson plan for Session 4; break down all the issues we should be discussing during that 80-minute session, including a few breaks for students' participation and Q&A.
No sentences, only bullet points.
Image created by Dall·E
The rise of AI technologies has posed significant challenges to academic integrity.
As time has passed since the explosion of AI in late 2022, it has become increasingly evident that the development of foolproof AI detectors is unlikely in the near future. Even OpenAI had to shut down its own detector due to "low accuracy." Detection becomes even more challenging when human intervention further refines AI-generated content, rendering traditional detection methods almost obsolete.
However, our primary goal should not be to passively wait for new detection technologies to catch up with AI advancements. Instead, we should focus on two key objectives:
→ Cultivating educational values that highlight and promote the significance of academic honesty.
→ Designing assignments, projects, and exams that prioritize sustained effort throughout the learning process, rather than solely concentrating on the delivery of a final product (such as an essay, a paper, a project, and so on.)
While one straightforward approach is to restrict access to AI technology with the tools available at IE, such as SMOWL and RESPONDUS, a more ambitious aim is to design assignments that inherently defy AI usage. Once accomplished, this would render the use of AI either inconsequential to the assignment's objectives or, if AI is permitted, ensure that its utilization enriches the educational experience of students rather than impoverishing it.
Here are some valuable recommendations:
AI doesn't have to be utilized all the time, much like a delicious ingredient doesn't need to be present in every recipe. It's perfectly acceptable to limit its use in selected sessions and switch to 'offline mode' when advised.
Stay vigilant for inconsistencies in content, as chatbots may not be aware of the materials covered in your course. Additionally, AI responses often tend to sound somewhat mechanical and verbose, although this is gradually improving as technology advances.
Use exam review sessions to your advantage: as a professor, you can make use of them too! In case of doubt, don't hesitate to call in a student to assess their ability to support the arguments presented in their work. It's good practice to inform students in advance that this possibility exists.
If AI is to be incorporated proactively in your course, design cumulative work that allows you to track students' progress under your guidance. Additionally, consider formal presentations of findings to ensure that students truly absorb the knowledge, regardless of the sources they utilized along the way.
The 2025 AI Index Report, published by Stanford University's Institute for Human-Centered AI, has been released. The report offers a comprehensive, data-driven look at the state of artificial intelligence across education, policy, research, and public sentiment worldwide. With a focus on the availability of Computer Science programs and other forms of training, Chapter 7 (pp. 365-394), which is devoted to education, analyzes how AI education is growing, and where it still falls short, in a global context.
The survey concludes that building an equitable AI pipeline demands not only more courses, but clear standards, teacher training, infrastructure investments and ethics-plus-technical curricula from elementary schools through university: "Academic institutions around the world must continue to progress (and monitor their progress) on creating AI pathways, adopt policies to expand access to relevant courses, and implement strategies to upskill the educator workforce and engage students to participate and build competencies equitably."
Image created by Dall·E
As faculty members, it's paramount to recognize the potential risks associated with Artificial Intelligence when it comes to data privacy. This is important specifically due to one particularity of most AI systems: whenever you submit data, the information can be stored and resurface at a later point in a transformed or aggregated form, thus influencing the system's interactions with other unknown users.
For this reason, always be aware of the ethical risks posed by AI and adhere to privacy laws such as the GDPR and FERPA. Ensure that any AI application you use is transparent in its data usage policies and complies with legal standards. And, crucially, refrain from including personal data about yourself, your students, or any confidential details about IE University's structure and operations in your prompts.
Our IE Policies section provides you with IE University's policies about the use of AI and sensitive data. Make sure you review these carefully.
Promoting a culture of data responsibility and staying vigilant about potential data misuse or breaches is essential at IE, and the only way to balance AI's benefits with protecting the privacy rights of the institution, students, and staff.
Want to collaborate?
Please fill in the form and share your experience using AI in class: