Innovating assessment with GenAI, What should we focus on now?
Generative AI has been around for over two years. We've had the chance to play with it, use it in our own work, and experiment with integrating it into teaching and learning in higher education. Frankly, we're now in the messy stage, which is completely fine. But how should we plan ahead? What does the future hold: total immersion? Total avoidance? Or maybe a bit of both, depending on our goals?
Once again, insights from the science of learning can guide us toward better decisions.
Two years in, it’s clear that adapting academic assessment to the GenAI era is a major challenge. Innovation is needed, but where should we focus our efforts right now?
To think it through, I’ll draw on principles from the learning sciences, which always help clarify and guide decision-making:
According to the book How Learning Works (1), developing mastery in any field requires three elements:
1. Acquire component skills
2. Practice integrating skills
3. Know when and how to apply skills
Although the model was originally meant to describe students' learning, let's borrow its logic to think about our own development as academic educators, specifically in building the skill of designing GenAI-integrated assessments.
So, where are we now, and what’s the next step in our mission to ensure quality learning? Let’s use the model to follow the stages:
1. The component skills are (or should be):
Designing effective, outcomes-aligned academic assessment
Using AI tools effectively within one’s discipline
2. Practicing the integration:
This is largely where we are now: many instructors are experimenting with GenAI in assessment in ways that range from ignoring it entirely to sophisticated integration. It's a messy but necessary period, driven by each instructor's knowledge, ability, motivation, and classroom needs.
We’re learning through trial and error, but to move forward, we need to ask:
3. When and where will these skills be applied?
As long as each instructor decides how to use AI within broad institutional guidelines, it's difficult to align and coordinate goals and the means to achieve them. Views differ on what matters most: basic skills? AI fluency? Content? Application? In addition, implementation remains largely ad hoc and uncoordinated.
This reflects where we are in the change process, but it’s time to move on:
The next step in AI-era assessment should take place at the program level, not just in individual courses. This may include:
Designing new assessment strategies that combine:
Course-level assessments without AI to ensure mastery of essential knowledge and skills.
Cross-course assessments with AI as needed, to evaluate higher-level abilities, ideally at the end of the year. These could include collaborative projects, practical exams, or presentations to panels. Our innovation efforts should focus on meaningful, valid, integrative assessment while also keeping instructor focus and workload in mind.
The emergence of AI brings significant challenges and new opportunities. Our foremost goal should be the quality of student preparation, and our efforts in assessment innovation should aim to ensure capable graduates. While today’s scattered efforts are valuable, we must keep our eyes on well-defined and meaningful goals.
(1) Lovett, M. C., Bridges, M. W., DiPietro, M., Ambrose, S. A., & Norman, M. K. (2023). How learning works: Eight research-based principles for smart teaching. John Wiley & Sons.
Published: April 2025