AIED 2025 workshop | Palermo (Italy), Hybrid | July 26 (Full day)
Dr. Guanliang Chen is a Senior Lecturer at the Centre for Learning Analytics, Monash University. He earned his Ph.D. in Computer Science from Delft University of Technology in the Netherlands. His research interests include Artificial Intelligence in Education, Natural Language Processing, and Learning Analytics. Dr. Chen specialises in developing and applying responsible AI technologies to support assessment and feedback at scale in educational contexts. He has published over 90 peer-reviewed papers and is widely recognised for his pioneering work on identifying and mitigating algorithmic bias in education. His contributions have earned him several accolades, including the Outstanding Paper Award at the 29th International Conference on Computational Linguistics and the Dean’s Award for Equity, Diversity, and Inclusion (Research) from Monash University’s Faculty of Information Technology in both 2021 and 2023. He serves on the editorial boards of two leading Q1 journals in technology-enhanced education—Computers & Education: Artificial Intelligence and the Journal of Learning Analytics. He has also contributed to the organisation of major international conferences, including AIED 2022, 2023, and 2024.
Dr. Simon Woodhead is Chief Data Scientist and co-founder of Eedi, an educational technology company focused on improving student outcomes by diagnosing misconceptions in mathematics. He holds a Ph.D. in Bayesian statistics from the University of Bristol and has over 20 years of experience at the intersection of education, data, and technology. Simon has led the development of several open datasets and education-focused data science competitions, including on platforms such as Kaggle. He co-leads a Learning Engineering Virtual Institute (LEVI) team, is part of a winning team from the Tools Competition, and is a research partner on the National Tutoring Observatory project. He co-organises the iRAISE workshop on Innovation and Responsibility in AI-Supported Education, hosted at AAAI in 2024 and 2025. Simon also hosts the Data Science in Education Meetup.
(Please note that the schedule is subject to change.)
(08:00 - 09:00) Welcome and Registration
(09:00 - 09:30) Introduction
(09:30 - 10:30) Invited Speaker: Dr. Guanliang Chen | Slides |
(10:30 - 11:00) Coffee Break
(11:00 - 13:00) Hands-on Session
This year the workshop will also include a hands-on interactive session, giving participants the opportunity to experiment with modern AI techniques in the context of content evaluation. The session will consist of two parts, focusing on i) common metrics used in Natural Language Generation (NLG), and ii) traditional and AI-based techniques for the evaluation of exam items.
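To give a flavour of the first part, below is a minimal sketch of one simple NLG evaluation metric, token-overlap F1, which compares a generated text against a reference. This is an illustrative example only, not the session's actual materials; the function name and whitespace tokenisation are our own assumptions.

```python
from collections import Counter

def token_f1(candidate: str, reference: str) -> float:
    """Token-overlap F1 between a generated text and a reference text."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # Count tokens shared by both texts (multiset intersection).
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Identical texts score 1.0; texts with no shared tokens score 0.0.
print(token_f1("the cat sat on the mat", "the cat is on the mat"))  # ≈ 0.83
```

Production metrics such as BLEU or ROUGE refine this idea with n-gram matching and length penalties, which is part of what the session will cover.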
(13:00 - 14:00) Lunch Break
(14:00 - 15:00) Invited Speaker: Dr. Simon Woodhead | Slides |
(15:00 - 15:30) Online Session
(15:30 - 16:00) Coffee Break
(16:00 - 17:30) Poster Session
Leveraging AI Graders for Missing Score Imputation to Achieve Accurate Ability Estimation in Constructed-Response Tests. Masaki Uto and Yuma Ito. | ID:1 | PDF | Poster |
Domain-Adaptive Automated Essay Scoring with Topic Relevance Learning. Sungjin Nam. | ID:3 | PDF | Poster |
Automating pedagogical evaluation of LLM-based conversational agents. Zaki Pauzi, Michael Dodman and Manolis Mavrikis. | ID:4 | PDF | Poster |
Ordinality in Discrete-level Question Difficulty Estimation: Introducing Balanced DRPS and OrderedLogitNN. Arthur Thuy, Ekaterina Loginova and Dries Benoit. | ID:6 | PDF | Poster |
Open-Ended Questions Need Personalized Feedback: Analyzing LLM-Enabled Features with Student Data. Rachel Van Campenhout, Jeff Dittel, Bill Jerome, Michelle Clark and Benny Johnson. | ID:7 | PDF | Poster |
Enhancing Neural Automated Essay Scoring Accuracy by Removing Noisy Data Through Data Valuation. Takumi Shibata, Yuto Tomikawa, Yuki Ito and Masaki Uto. | ID:9 | PDF | Poster |
Fine-tuning for Better Few Shot Prompting: An Empirical Comparison for Short Answer Grading. Joel Walsh, Siddarth Mamidanna, Benjamin Nye, Mark G. Core and Daniel Auerbach. | ID:19 | PDF | Poster |
Comparing Human and LLM Evaluations on AI-Generated Critical Thinking Items: Implications for Valid Applications of Automatic Item Generation. Euigyum Kim, Salah Khalil and Hyo Jeong Shin. | ID:22 | PDF | Poster |
Leveraging the Intuitions of Lay People on Linguistic Complexity for Automatic Sentence Readability Assessment. Ignatios Charalampidis and Xiaobin Chen. | ID:23 | PDF | Poster |
Assessing learning materials: hybrid vs Large Language Model-based generation of grammar exercises. Lucas Poirot and Yannick Parmentier. | ID:8 | PDF | Poster |
More Brains: When Multi-Agent Systems Outperform Single-Agent Evaluation of Collaborative Math Tasks. Yu Wang, Madhumitha Gopalakrishnan, Ella Anghel and Yoav Bergner. | ID:15 | PDF | Poster |
(17:30 - 18:00) Closing Remarks
The workshop will be live-streamed via a Teams webinar. To join, follow the instructions below.
Go to: https://www.microsoft.com/microsoft-teams/join-a-meeting
Meeting ID: 364 056 530 668 0
Meeting password will be shared via the Whova app (in the workshop chat).