The FATE of AI in Education: Fairness, Accountability, Transparency, and Ethics

A Special Issue of the International Journal of Artificial Intelligence in Education (IJAIED)

This IJAIED special issue seeks to move conversations about the FATE of AI in Education (Fairness, Accountability, Transparency, and Ethics) forward and to situate the field of AIEd and EdTech within broader conversations around the ethical and societal implications of AI.

Background – Why a Special Issue on the FATE of AIEd?

Ethical considerations have received growing attention in both popular and academic spheres as AI-based systems increasingly influence every facet of our lives – including who receives a job, who is subjected to increased policing, and whose livelihoods are automated away (Holstein et al., 2019; Selbst et al., 2019; Veale et al., 2018). It is now commonplace to see news stories reporting the harmful impacts of AI systems deployed in real-world, high-stakes contexts (e.g., Giang, 2018; Lohr, 2018).

Educational applications of AI are not immune to the risks observed in other domains, and recent years have seen a rise in attention to potential pitfalls of data-driven, AI-supported educational interventions. For example, teachers, students, and parents have protested the use of educational AI systems in classrooms across the US (e.g., Bowles, 2019; Herold, 2017; National Education Policy Center, 2018). Critics outside of the AI in Education (AIEd) community worry that the design of commercial systems may often be driven more by profit than by actual educational impact, and that potential risks (e.g., harmful algorithmic biases, violations of student data privacy, or negative social and developmental impacts) may outweigh any benefits (Buckingham Shum, 2018; Bulger, 2016; Herold, 2017; Ito, 2017; Williamson, 2017).

Within this context, it is urgent that educational technology (EdTech) researchers engage with these discussions and provide positive exemplars of research, systems, and applications that illustrate best practices in AI-supported education and EdTech more broadly (cf. Buckingham Shum, 2018). Indeed, our responses to these concerns as a research community may be critical in determining the fate of AI in education. Furthermore, our responses may help shape how the ethics of AI for human learning and development is defined and regulated more broadly.

A surge of recent research has addressed the ethical dimensions of AI systems' design, use, and regulation in other high-stakes contexts, such as automated hiring, healthcare, and recidivism prediction. For example, much work in the nascent FAT* community (ACM, 2018) has focused on understanding and mitigating harmful biases in data-driven AI systems and in socio-technical systems more broadly. Research institutes and industry consortia such as AI Now and the Partnership on AI (Hern, 2016; Reisman et al., 2018) have been established to study the social implications of real-world AI systems and to formulate best practices for their development and use.

Yet despite this widespread attention, issues of Fairness, Accountability, Transparency, and Ethics (FATE) are rarely discussed within the AIEd and neighboring EdTech research communities (Holmes et al., 2018; Holstein & Doroudi, 2019; Mayfield et al., 2019; Porayska-Pomsta & Rajendran, 2019). Recent work has emphasized that design requirements and methods for addressing FATE-related challenges can be highly domain-dependent (Green & Hu, 2018; Holstein et al., 2019; Mayfield et al., 2019; Selbst et al., 2019). At the same time – despite mounting concerns about AI's growing influence over human decision-making and functioning – most debates and guidelines related to the FATE of AI do not deeply engage with issues of human cognition, learning, and development. Thus, to complement the domain-general investigations common in the broader FATE research literature, it is critical that our education research community explores what fairness, accountability, transparency, and ethics look like in technology-supported education specifically. Such critical appraisal is key to enhancing our understanding of the FATE of AI more broadly, building on knowledge gleaned from years of AIEd and educational technology research.

Aims and Scope

As outlined above, this special issue aims to move conversations about the FATE (Fairness, Accountability, Transparency, and Ethics) of AI in Education forward and to situate the fields of AIEd and EdTech within broader conversations around the ethical and societal implications of AI.

In this context, we invite researchers working on AI-supported education and EdTech to engage directly with concerns about the present and future roles of AI in education.

The aim of the special issue is to develop a better understanding of how past and present AIEd and EdTech efforts can contribute to rigorous, forward-looking thinking and practices related to the FATE dimensions of AI and data science more broadly. Educational technologies that authors may not conceptualize as "AI" per se are still within scope.

In this context, we seek contributions that:

  1. Situate AIEd and EdTech research to date within the context of the FATE of AI, and explicate why knowledge generated through AIEd and EdTech research might affect how we interpret and operationalise FATE for education-oriented applications;
  2. Address how the definitions and concrete operationalisation of FATE dimensions can be informed by AIEd and EdTech research, and explore and explicate the relationships between AI interventions, human cognition, and FATE considerations and practices;
  3. Provide evidence-based commentary on the future outlook and practices needed in relation to the FATE of AIEd and EdTech specifically, and/or on the role of AIEd and EdTech research in informing the creation of AI applications in contexts other than education.

Call for Submissions

Given the above, we seek the following types of papers:

  • Historical reflections on AIEd or EdTech research, or on a subarea thereof, that provide concrete, evidence-based examples of how the AIEd and EdTech perspective can and should be reflected in the FATE of AI more broadly;
  • Theoretical frameworks explicating (i) the nature of FATE as dimensions of relevance to AIEd or EdTech, and/or (ii) how FATE dimensions are or can be instantiated in different AIEd or EdTech contexts and applications;
  • Empirical work demonstrating how FATE is achieved in AIEd or EdTech applications;
  • Methodological contributions showing how FATE considerations are taken into account during the design, implementation, and deployment of AIEd or EdTech applications;
  • Position papers that identify interdisciplinary research and practices of critical importance to the FATE of AIEd or EdTech, and that discuss how this research can or ought to be operationalised in AIEd or EdTech approaches.

The list of “types of papers” above is not intended to be exhaustive. If you have an idea for a paper that does not fall within one of the above categories, we strongly encourage you to contact us to discuss the potential fit (email:

Topics of interest include but are not strictly limited to:

Bridging between AIEd / EdTech and FATE

  • Understanding the nature of FATE (fairness, accountability, transparency and ethics) within AIEd or EdTech, as distinguished from FATE in AI more broadly
  • FATE of educational algorithms and modeling, including explainable systems
  • AIEd / EdTech, inclusion and equity (e.g., United Nations Sustainable Development Goal 4)
  • AIEd / EdTech data ethics (including privacy and data governance)
  • Technical robustness and safety in AIEd / EdTech systems
  • AIEd / EdTech and the law
  • Existing FATE principles in the context of AIEd / EdTech, and the development of new principles

The FATE of AIEd & EdTech systems in real world contexts

  • Ethics and trust in AIEd / EdTech development and deployment
  • Human autonomy and agency in the context of AIEd / EdTech
  • Human oversight and social impact in AIEd / EdTech
  • AIEd / EdTech and human diversity, including identity, socio-economic, cultural, and neurodiversity, with consideration of different forms of potential harm (e.g., harms of allocation and representation)
  • Work at the intersection of accessibility, AIEd / EdTech, and fairness

Deepening our understanding of the FATE of AIEd and EdTech

  • FATE of particular pedagogical features (e.g., the use of chatbots and agents)
  • FATE of learning and teaching interventions, including specific assumptions embedded in different types of AIEd / EdTech approaches
  • AIEd / EdTech and the ethics of education

The FATE of research in AIEd

  • Ethics of AIEd and EdTech research and methods (e.g., balancing research objectives against informed consent, such as the use of Wizard-of-Oz (WOZ) methods for ecological validity versus the informedness/deception of WOZ participants)

Deadline for submission of special issue papers:

Monday, April 20, 2020

Submissions to this special issue should follow the general guidelines for IJAIED submissions:

Special Issue Editors

Kaśka Porayska-Pomsta

Kaśka focuses on AIEd and social and educational inclusion through both theoretical and applied research. She works at the front line of education with children and adults with and without special needs in a variety of domains and educational settings. She leads and delivers courses on AIEd and data analytics to educational practitioners. She is co-organiser of the upcoming CIFAR-funded international workshop on Trust and AI, and is presently leading an effort at UCL Knowledge Lab to contextualise FATE of AI in a framework for AIEd.

Beverly Woolf

Beverly has more than 20 years' experience in educational computer science research, production of intelligent tutoring systems, and development of multimedia systems. She is the author of the 2009 book Building Intelligent Interactive Tutors and is the (co)author of over 150 technical papers. She has delivered tutorials, training programs, and keynote addresses, and has served on panels in more than twenty countries.

Wayne Holmes

Wayne leads on AIEd for The Open University (UK). He has co-authored three books on AIEd, including Artificial Intelligence in Education: Promises and Implications for Teaching and Learning (Holmes et al., 2019). He led the first “Ethics of AIEd” workshop at AIED 2018, and is also lead author of UNESCO’s forthcoming Policy Guidelines for Artificial Intelligence in Education.

Ken Holstein

Ken works with K-12 teachers and students to co-design more effective and desirable human–AI partnerships for the classroom. In a related line of research, he has investigated what industry practitioners need in order to improve fairness in AI systems used at scale (Holstein et al., 2019). He co-led the first "Fairness and Equity in Learning Analytics Systems" workshop at LAK '19 (Holstein & Doroudi, 2019), as well as a tutorial on incorporating algorithmic fairness into industry practice at FAT* '19 (Cramer et al., 2019).


References

ACM. (2018). ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*). Retrieved June 8, 2019, from

Aiken, R. M., & Epstein, R. G. (2000). Ethical Guidelines for AI in Education: Starting a Conversation. International Journal of Artificial Intelligence in Education (IJAIED), 11, 163–176.

Bowles, N. (2019). Silicon Valley Came to Kansas Schools. That Started a Rebellion. The New York Times. Retrieved June 8, 2019, from

Buckingham Shum, S. (2018). Transitioning Education's Knowledge Infrastructure. Keynote at the International Conference of the Learning Sciences (ICLS 2018).

Bulger, M. (2016). Personalized Learning: The Conversations We’re Not Having. Data and Society, 22.

Cramer, H., Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudík, M., Wallach, H., Reddy, S., & Garcia-Gathright, J. (2019). Challenges of Incorporating Algorithmic Fairness into Industry Practice. Tutorial at the ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019). ACM.

Giang, V. (2018). The Potential Hidden Bias In Automated Hiring Systems. Fast Company. Retrieved June 8, 2019, from

Green, B. & Hu, L. (2018). The Myth in the Methodology: Towards a Recontextualization of Fairness in Machine Learning. In Proceedings of the International Conference on Machine Learning: The Debates Workshop.

Hern, A. (2016). “Partnership on AI” formed by Google, Facebook, Amazon, IBM and Microsoft. The Guardian. Retrieved June 8, 2019, from

Herold, B. (2017). The Case(s) Against Personalized Learning. Retrieved June 8, 2019, from

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education. Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign.

Holmes, W., Bektik, D., Whitelock, D., & Woolf, B. P. (2018). Ethics in AIED: Who Cares? In C. Penstein Rosé, R. Martínez-Maldonado, H. U. Hoppe, R. Luckin, M. Mavrikis, K. Porayska-Pomsta, … B. du Boulay (Eds.), International Conference on Artificial Intelligence in Education (AIED 2018) (pp. 551–553).

Holstein, K., & Doroudi, S. (2019). Fairness and Equity in Learning Analytics Systems (FairLAK). In Companion Proceedings of the Ninth International Learning Analytics & Knowledge Conference (LAK 2019).

Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudík, M., & Wallach, H. (2019). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI '19). ACM.

Ito, M. (2017). From Good Intentions to Real Shortcomings: An Edtech Reckoning. EdSurge. Retrieved June 8, 2019, from

Lohr, S. (2018). Facial Recognition Is Accurate, if You're a White Guy. The New York Times. Retrieved June 8, 2019, from

Mayfield, E., Madaio, M., Prabhumoye, S., Gerritsen, D., McLaughlin, B., Dixon-Román, E., & Black, A. W. (2019). Equity Beyond Bias in Language Technologies for Education. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 444-460).

National Education Policy Center (2018). The Backlash Against Personalized Learning. Retrieved June 8, 2019 from

Partnership on AI. (2018). The Partnership on AI. Retrieved June 8, 2019, from

Porayska-Pomsta, K., & Rajendran, G. (2019). Accountability in Human and Artificial Intelligence Decision-Making as the Basis for Diversity and Educational Inclusion. In Artificial Intelligence and Inclusive Education (pp. 39-59). Springer, Singapore.

Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now Institute.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68). ACM.

Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and Accountability Design Needs for Algorithmic Support in High-stakes Public Sector Decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM.

Williamson, B. (2017). Decoding ClassDojo: Psycho-policy, Social-emotional Learning and Persuasive Educational Technologies. Learning, Media and Technology, 42(4), 440-453.