4. Student-Staff Co-Creation of Politics of Artificial Intelligence Module Content, Seminar Activities and Assessment 

Authors:  Kerem Öge and Leonardo Pignatari Aboutboul

Institution: University of Warwick, UK



Situation 

In March 2023, the first author proposed a new module, "Politics of Artificial Intelligence", to be taught in the Department of Politics and International Studies (PAIS) at the University of Warwick. Following its approval, the first author and the student co-author initiated a co-creation project to develop accessible learning materials and to integrate AI into the module's seminar activities and assessments.



Task 

We were keen to explore how AI can enhance inclusivity and innovation in assessments and seminar delivery in the social sciences (Rudolph et al., 2023). Our aim was to survey the literature to identify key aspects of the relationship between AI and politics across various issue areas and then translate this knowledge into AI-based learning activities. Co-creation is an excellent method for achieving this, as it actively involves students in the research and development process and fosters ownership of their learning journey (Bovill, 2020; Doyle et al., 2021; Mercer-Mapstone et al., 2017; Mercer-Mapstone and Bovill, 2020).



Action 

To achieve our aims, we first identified key issue areas where politics and AI intersect, such as ethics, democracy, security, and automation. Students in the project contributed to crafting learning outcomes, lecture notes, and accessible seminar questions on each topic. This collaborative approach allowed us to create a module structure that emphasises interdisciplinarity and inclusivity (Mercer-Mapstone and Bovill, 2020).

Next, we concentrated on transforming this knowledge into innovative seminar activities and assessments. For instance, for the democracy theme, we designed a seminar activity where students would participate in a simulated political campaign using ChatGPT to interact with constituents. For the security theme, we planned interviews with various stakeholders (impersonated by ChatGPT) on the use of AI in security forces (see Figure 1). 

Figure 1 - Co-creation with ChatGPT in practice: the lecturer and student author prepare the task

Designing an AI-based assessment posed greater challenges than the seminar activities, particularly because of the higher stakes involved in marking and evaluation. In response, the student co-author suggested a scaled-down assessment, worth 20% of the total mark, in the form of a reflective presentation. In this assessment, students engage in a debate with ChatGPT on the role of AI in society before presenting their reflections in class. The assessment evaluates critical thinking, communication, analytical skills, and persuasion, as well as proficiency in using AI effectively. Importantly, mandating the use of the free version of ChatGPT for this assignment ensures equal opportunity for all students.



Results 

The module will run in the 2024-2025 academic year, but we have already trialled our teaching innovation at a workshop and observed several potential positive outcomes for students and the wider higher education sector.

During the workshop, students found great value in the hands-on experience of running a simulated political campaign, an opportunity that would have been impossible in a real-world setting. Embracing diverse political ideologies, they created political posters and crafted speeches, slogans, and social media posts utilising ChatGPT and DALL·E. 

For the next task, students in the workshop conducted brief interviews with stakeholders on the impact of AI on security issues. These stakeholders included soldiers, politicians, tech companies, and civilians, impersonated by ChatGPT. This exercise not only improved students’ qualitative research skills, but also prompted them to critically assess the ethical, legal, and societal implications of AI within security practices. 
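To give a sense of how these role-play interviews can be set up (the wording here is illustrative rather than a verbatim record of the prompts used in the workshop), a student might instruct ChatGPT as follows: "You are a senior army officer whose unit has recently adopted AI-assisted surveillance and targeting tools. Stay in character and answer my interview questions about how these tools have changed your work, what risks concern you, and who should be accountable when they fail." Students can then probe the persona's answers and compare responses across the different stakeholders.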

For the assessment, we instructed ChatGPT to adopt a technological pessimist perspective, initiating a 10-minute debate with students, who reported on their discussions afterwards. Students were deeply engaged and highlighted the innovative nature of this assessment approach. Their reports included critical evaluations of ChatGPT's ability to process and respond to counter-arguments.
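An instruction along the following lines (again illustrative, not the exact wording used) is enough to set up the exchange: "For the next ten minutes, argue as a technological pessimist that AI will do more harm than good to society. Respond directly to my counter-arguments, do not concede easily, and do not break character." Keeping the set-up this simple means the debate works within the free version of ChatGPT and requires no prior prompt-engineering experience.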

We expect these outcomes to extend beyond the immediate workshop participants, potentially shaping broader teaching practices at PAIS following the module's launch in 2024-2025. Integrating AI into teaching methodologies in this way could have a significant impact across the social sciences, transforming teaching practices, enhancing student satisfaction, and enabling a more dynamic and effective educational experience.



Stakeholder Commentary 

Students and staff in PAIS provided valuable feedback on our teaching innovation. Students found the activities engaging, fostering not only a deeper comprehension of specific themes but also an increased awareness of GenAI and its biases. Students also recommended that they be allowed to generate their own prompts when interacting with ChatGPT, to become better acquainted with GenAI tools. For the assessment, students requested greater clarity regarding the scope of the debate and the marking criteria, which underscores the significance of assessment literacy in AI-based activities. We also presented our findings to the AI Working Group at PAIS, which welcomed the innovation and recommended refining the marking criteria for AI usage.

Overall, by incorporating GenAI into seminar activities and assessments through student-staff co-creation, we were able to expose students to the capabilities and limitations of AI in real-world contexts. This experience increases student engagement, enhances their understanding of AI's potential applications, and facilitates critical thinking about its role in society.

References

Bovill, C. (2020) Co-creation in learning and teaching: the case for a whole-class approach in higher education. Higher Education, 79, 1023-1037.

Doyle, E., Buckley, P. & McCarthy, B. (2021) The impact of content co-creation on academic achievement. Assessment & Evaluation in Higher Education, 46, 494-507.

Mercer-Mapstone, L. & Bovill, C. (2020) Equity and diversity in institutional approaches to student–staff partnership schemes in higher education. Studies in Higher Education, 45, 2541-2557.

Mercer-Mapstone, L., Dvorakova, S. L., Matthews, K. E., Abbot, S., Cheng, B., Felten, P., Knorr, K., Marquis, E., Shammas, R. & Swaim, K. (2017) A systematic literature review of students as partners in higher education. International Journal for Students as Partners, 1, 1-23.

Rudolph, J., Tan, S. & Tan, S. (2023) ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6, 342-363.


Author biographies


Kerem Öge

Kerem Öge is a Teaching Fellow in Politics and International Studies at the University of Warwick, teaching the politics of AI. His research focuses on political debates surrounding the social legitimacy of facial recognition technology.


Leonardo Pignatari Aboutboul 

Leonardo Pignatari Aboutboul is a 3rd year Philosophy, Politics & Economics student at the University of Warwick. He believes AI has far-reaching transformative potential for higher education, and should be embraced.