Suggested Readings
Explainable AI
Bradley, Cassidy, Dezhi Wu, Hengtao Tang, Ishu Singh, Katelyn Wydant, Brittany Capps, Karen Wong, Forest Agostinelli, Matthew Irvin, and Biplav Srivastava. "Explainable Artificial Intelligence (XAI) User Interface Design for Solving a Rubik’s Cube." In International Conference on Human-Computer Interaction, pp. 605-612. Springer, Cham, 2022. (link) (Cristina)
Riedl, René. "Is trust in artificial intelligence systems related to user personality? Review of empirical evidence and future research directions." Electronic Markets (2022): 1-31. (link) (Cristina)
Förster, M., Hühn, P., Klier, M., & Kluge, K. (2022). User-centric explainable AI: design and evaluation of an approach to generate coherent counterfactual explanations for structured data. Journal of Decision Systems, 1-32. (link) (Cristina)
Shulner-Tal, A., Kuflik, T., & Kliger, D. (2022). Enhancing Fairness Perception–Towards Human-Centred AI and Personalized Explanations Understanding the Factors Influencing Laypeople’s Fairness Perceptions of Algorithmic Decisions. International Journal of Human–Computer Interaction, 1-28. (link) (Cristina)
Ferguson, C., van den Broek, E. L., & van Oostendorp, H. (2022). AI-Induced guidance: Preserving the optimal Zone of Proximal Development. Computers and Education: Artificial Intelligence, 100089. (link) (Cristina)
Wu, M., Parbhoo, S., Hughes, M., Kindle, R., Celi, L., Zazzi, M., Roth, V., & Doshi-Velez, F. (2020). Regional Tree Regularization for Interpretability in Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6413–6421. (link) (Matteo)
Ross, A., & Doshi-Velez, F. (2018). Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). (link) (Matteo)
Mai, T., Khanna, R., Dodge, J., Irvine, J., Lam, K. H., Lin, Z., ... & Fern, A. (2020, March). Keeping it "organized and logical" after-action review for AI (AAR/AI). In Proceedings of the 25th International Conference on Intelligent User Interfaces (pp. 465-476). (link) (Cristina)
Atkinson, K., Bench-Capon, T., & Bollegala, D. (2020). Explanation in AI and law: Past, present and future. Artificial Intelligence, 103387. (link) (Cristina)
Tsai, C. H., & Brusilovsky, P. (2020). The effects of controllability and explainability in a social recommender system. User Modeling and User-Adapted Interaction, 1-37. (link) (Cristina)
Ausin, M. S., Maniktala, M., Barnes, T., & Chi, M. (2020, July). Exploring the Impact of Simple Explanations and Agency on Batch Deep Reinforcement Learning Induced Pedagogical Policies. In International Conference on Artificial Intelligence in Education (pp. 472-485). Springer, Cham. (link) (Cristina)
Yang, F., Huang, Z., Scholtz, J., & Arendt, D. L. (2020, March). How do visual explanations foster end users' appropriate trust in machine learning?. In Proceedings of the 25th International Conference on Intelligent User Interfaces - IUI'2020 (pp. 189-201). (link) (Ben)
Carnell, S., Lok, B., James, M. T., & Su, J. K. (2019, March). Predicting Student Success in Communication Skills Learning Scenarios with Virtual Humans. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 436-440). ACM. (doi) (Cristina)
Langley, P. (2019). Explainable, Normative, and Justified Agency. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. Honolulu, HI: AAAI Press. (link) (+Presentation here) (Cristina)
Lim, B. Y., Yang, Q., Abdul, A., & Wang, D. (2019). Why these Explanations? Selecting Intelligibility Types for Explanation Goals. (link) (Cristina)
Lim, B. Y., Dey, A. K., & Avrahami, D. (2009, April). Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2119-2128). ACM. (doi) (Cristina)
Nagrecha, S., Dillon, J. Z., & Chawla, N. V. (2017, April). MOOC dropout prediction: lessons learned from making pipelines interpretable. In Proceedings of the 26th International Conference on World Wide Web Companion (pp. 351-359). International World Wide Web Conferences Steering Committee. (doi) (Cristina)
Schaffer, J., O’Donovan, J., Michaelis, J., Raglin, A., & Höllerer, T. (2019, March). I can do better than your AI: expertise and explanations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 240-251). ACM. (doi) (Oswald)
Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint. (Shane)
Wang, Danding, et al. "Designing Theory-Driven User-Centric Explainable AI." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). 2019. (Vanessa)
Visualization
Xin Qian, Ryan A. Rossi, Fan Du, Sungchul Kim, Eunyee Koh, Sana Malik, Tak Yeon Lee, and Joel Chan. 2021. Learning to Recommend Visualizations from Data. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21). Association for Computing Machinery, New York, NY, USA, 1359–1369. https://doi.org/10.1145/3447548.3467224 (Cristina)
Monadjemi, S., Garnett, R., & Ottley, A. (2020). Competing Models: Inferring Exploration Patterns and Information Relevance via Bayesian Model Selection. IEEE VIS 2020. (link) (Cristina)
Knittel, J., Lalama, A., Koch, S., & Ertl, T. (2020). Visual Neural Decomposition to Explain Multivariate Data Sets. IEEE VIS 2020. (link) (Oswald)
Choi, I. K., Childers, T., Raveendranath, N. K., Mishra, S., Harris, K., & Reda, K. (2019, May). Concept-Driven Visual Analytics: An Exploratory Study of Model- and Hypothesis-Based Reasoning with Visualizations. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM. (link) (Cristina)
Lim, L., Dawson, S., Joksimovic, S., & Gašević, D. (2019, March). Exploring students' sensemaking of learning analytics dashboards: Does frame of reference make a difference?. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 250-259). ACM. (doi) (Cristina)
Silva, N., Blascheck, T., Jianu, R., Rodrigues, N., Weiskopf, D., Raubal, M., & Schreck, T. (2019, June). Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges. In ETRA 2019-Symposium on Eye Tracking Research and Applications. (link) (Cristina)
Yao Wang, Maurice Koch, Mihai Bâce, Daniel Weiskopf, and Andreas Bulling. 2022. Impact of Gaze Uncertainty on AOIs in Information Visualisations. In 2022 Symposium on Eye Tracking Research and Applications (ETRA '22). Association for Computing Machinery, New York, NY, USA, Article 60, 1–6. https://doi.org/10.1145/3517031.3531166 (Cristina)
Intelligent Tutoring Systems
Phung, Tung, Victor-Alexandru Pădurean, Anjali Singh, Christopher Brooks, José Cambronero, Sumit Gulwani, Adish Singla, and Gustavo Soares. "Automating human tutor-style programming feedback: Leveraging gpt-4 tutor model for hint generation and gpt-3.5 student model for hint validation." In Proceedings of the 14th Learning Analytics and Knowledge Conference, pp. 12-23. 2024. (link)
Szymanski, Maxwell, Jeroen Ooge, Robin De Croon, Vero Vanden Abeele, and Katrien Verbert. "Feedback, Control, or Explanations? Supporting Teachers With Steerable Distractor-Generating AI." In Proceedings of the 14th Learning Analytics and Knowledge Conference, pp. 690-700. 2024. (link)
Mejeh, Mathias, and Martin Rehm. "Taking adaptive learning in educational settings to the next level: leveraging natural language processing for improved personalization." Educational Technology Research and Development (2024): 1-25. (link)
Jensen, E., Dale, M., Donnelly, P. J., Stone, C., Kelly, S., Godley, A., & D'Mello, S. K. (2020, April). Toward automated feedback on teacher discourse to enhance teacher learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). (link) (Nilay)
Okoye, K. (2019). A Systematic Review of Process Modelling Methods and its Application for Personalised Adaptive Learning Systems. Journal of International Technology and Information Management, 27(3), 23-46. (link) (Cristina)
Ruan et al. QuizBot: A Dialogue-based Adaptive Learning System for Factual Knowledge. CHI 2019 (link) (Sébastien)
Hutt, Grafsgaard and D'Mello. Time to Scale: Generalizable Affect Detection for Tens of Thousands of Students across an Entire School Year. CHI 2019 (link) (Sébastien)
Predicting Cognitive Information with Eye-Tracking Data
Arslanyilmaz, A., & Sullins, J. (2021). Eye-gaze data to measure students’ attention to and comprehension of computational thinking concepts. International Journal of Child-Computer Interaction, 100414. (Cristina)
Caitlin Mills, Julie Gregg, Robert Bixler & Sidney K. D’Mello (2021) Eye-Mind reader: an intelligent reading interface that promotes long-term comprehension by detecting and responding to mind wandering, Human–Computer Interaction, 36:4, 306-332, DOI: 10.1080/07370024.2020.1716762 (Nilay)
Pusiol, G., Esteva, A., Hall, S. S., Frank, M., Milstein, A., & Fei-Fei, L. (2016, October). Vision-based classification of developmental disorders using eye-movements. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 317-325). (link) (Shane)
Fridman, L., Reimer, B., Mehler, B. and Freeman, W.T., 2018, April. Cognitive load estimation in the wild. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 652). ACM. (link) (Shane)
Somnath Arjun, Archana Hebbar, Sanjana, and Pradipta Biswas. 2022. VR Cognitive Load Dashboard for Flight Simulator. In 2022 Symposium on Eye Tracking Research and Applications (ETRA '22). Association for Computing Machinery, New York, NY, USA, Article 4, 1–4. https://doi.org/10.1145/3517031.3529777 (Cristina)
User-adaptive systems
Alghofaili et al. Lost in Style: Gaze-driven Adaptive Aid for VR Navigation. CHI 2019 (link) (Sébastien)
Arakawa. REsCUE: A framework for REal-time feedback on behavioral CUEs using multimodal anomaly detection. CHI 2019 (link) (Sébastien)
Ma et al. SmartEye: Assisting Instant Photo Taking via Integrating User Preference with Deep View Proposal Network. CHI 2019. (link) (Sébastien)
User Modelling
Tan, Zhaoxuan, and Meng Jiang. "User Modeling in the Era of Large Language Models: Current Research and Future Directions." arXiv preprint arXiv:2312.11518 (2023). (link) (Cristina)
AI & Society
Vinuesa, R., Sirmacek, B. Interpretable deep-learning models to help achieve the Sustainable Development Goals. Nat Mach Intell 3, 926 (2021). https://doi.org/10.1038/s42256-021-00414-y (Matteo)
Ferrari, F., Paladino, M. P., & Jetten, J. (2016). Blurring human–machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. International Journal of Social Robotics, 8(2), 287-302. (link) (Ben)
Meta Cognition and Learning
Naujoks, N., Harder, B. & Händel, M. Testing pays off twice: Potentials of practice tests and feedback regarding exam performance and judgment accuracy. Metacognition Learning (2022). https://doi.org/10.1007/s11409-022-09295-x (Cristina)