In this project, we argue that the dangers from AI require more than regulations by governments. Every sector of the economy and society is or will be affected by AI. Social institutions must harness the opportunities AI offers for human progress while mitigating or preventing the dangers the technology poses. We envision a set of actions across the public and private spheres of this nation in coordination with initiatives from other countries. Specifically, we discuss the unique position of higher education institutions in the effort to abate potential harms.
We propose several postsecondary educational strategies, which we subsume under the label HEART AI (Human-centric, Ethically Aligned, Reliable, and Trustworthy AI). These strategies will provide technical education, socialization, and ethical AI literacy to undergraduate and graduate students, who will be the next generation of AI users and developers.
Higher education is a critical public sector institution whose contributions to addressing and mitigating the current and future challenges of AI can be instrumental to the struggle against resurgent authoritarianism. We propose that universities develop educational programs and curricula that increase students’ capacity to become AI literate across the life course, with different objectives and strategies for different postsecondary student populations. Common to all university responses must be educational curricula that increase both the diversity and the HEART AI capacities of the population that engages with AI, including users, developers, and disseminators (Maher and Tadimalla, 2022).
The essential goal of a comprehensive AI literacy curriculum is that all students develop the critical thinking skills to assess the benefits and limitations of using AI, grounded in technical AI literacy and contextualized by the tenets of HEART AI. The table (Right) elaborates the components of HEART AI. Such a curriculum ensures that all students acquire the intellectual and technical tools to critically analyze AI’s potential benefits as well as its limitations, including its potential use by neo-authoritarians and other bad actors.
Sources: Bricout et al., 2022; Evans et al., 2021; Hacker, 2018; Houser, 2019; Mantelero, 2018; Sigfrids et al., 2023; Taeihagh, 2021.
The development of AI systems necessitates a focus on safety: combating data poisoning, algorithmic unfairness, unexplainable machine learning, and gender, racial/ethnic, and other biases (ETO, 2024). These efforts are consistent with HEART AI.
Trustworthiness refers to the embedding of ethical principles and human-centered values in the development and dissemination of AI systems, ensuring transparency, fairness, and accountability throughout the system’s life cycle.
Reliability emphasizes the AI tool’s consistent and accurate performance without significant errors or biases.
Human-centric AI goes beyond reliability and trustworthiness. It prioritizes the augmentation of human capabilities and the enhancement of human well-being.
This approach considers human values; cultural variability; diversity among lived experiences, identities, and interests; and individuals’ complex, intersecting identities. It aims to create interdisciplinary, collaborative, and responsible interaction between humans and machines.
Achieving HEART AI systems requires a comprehensive understanding of the ethical, technical, and societal dimensions of AI development and deployment.
The proposed graduate program provides interdisciplinary training to a demographically diverse cohort of computer science, engineering, humanities, and social science students involved in the development and use of AI systems. The model focuses on three intertwined core issues in the development of future AI systems: Human-Centric, Ethical, and Reliable. The themes are shown in Figure (above).
Graduate education has a potentially pivotal role in any response to the challenges posed by AI, since graduate education is a gateway to leadership positions. If AI is to be a trustworthy, reliable, human-centric technology used for societal good rather than a means to nefarious or authoritarian ends, those who create, promote, and implement AI technologies require an education that prepares them for, and launches them into, the service of human-centered values and goals.
The authors and the CHAIS Center colleagues have created a model for training a demographically diverse interdisciplinary cadre of graduate students in HEART AI development, implementation, and dissemination (Maher, et al., 2022; Mickelson et al., 2023).
Our short-term objective is to prepare the next generation of computer scientists, engineers, social scientists, and humanities scholars to lead in creating and promoting human-centered AI that is reliable and trustworthy.
Our long-term objectives are to contribute to the research in the alignment of AI with human-centered values and to mitigate resurgent authoritarianism. Among its many strengths, the HEART AI model gives previously marginalized individuals a voice in the development and use of the tools of AI.
Our proposed model of a postgraduate AI training program has two levels: a PhD program and a graduate certificate. At both levels, the program’s structural elements consist of a set of multidisciplinary courses, a HEART AI systems studio course, a seminar series, and internship programs. Because AI can become trustworthy only through the convergence of technical and social scientific/humanistic research programs, the interdisciplinary backgrounds of students are a strength. The inclusion of demographically diverse students across the disciplines is another core strength of the HEART AI model of graduate training.
The faculty and students who participate in the proposed program will be associated with existing graduate programs in engineering, computing, and liberal arts and sciences. Students will complete core courses in multiple departments and an innovative interdisciplinary HEART AI systems studio, and will participate in a seminar series each year of the program to enhance their cross-disciplinary and professional training. Topics to be integrated into existing engineering, computer science, and liberal arts and sciences graduate curricula include human-centered AI, ethical AI, and robust/reliable AI. This training extends existing curricula in engineering and computing, which lack the integration of social scientific, ethical, and humanistic content with technical skills. The extended curriculum will include philosophical and social scientific perspectives on human-AI systems and will significantly strengthen the interdisciplinary and professional training of graduate students, with a focus on technological partnerships based on models of trust and ethics.
All students will be expected to develop an understanding of central concepts associated with HEART AI, such as “fairness”, and how they are deployed in different technical contexts (Mulligan et al., 2019). Rather than requiring all students to have deep technical knowledge of AI models, the studio course and consortium approach allows students to bring their disciplinary expertise in areas such as policy formation, AI, data science, or structural inequities to their research projects. This will enable students to incorporate central HEART AI concepts into their work. The curriculum’s interdisciplinary and collaborative nature also prepares students for the workplace, where people from technical disciplines will work alongside those with backgrounds in law, ethics, and the social sciences.
Cite this work: Mickelson, R. A., Maher, M. L., Hull, G., & Tadimalla, S. Y. (2025). Human-centric, Ethically Aligned, Reliable, and Trustworthy Artificial Intelligence Education for Twenty-First Century Economy and Society. In The Oxford Handbook of the 21st Century Economy and Society, Kevin T. Leicht, Editor. Oxford University Press, forthcoming.
Maher, M. L., & Tadimalla, S. Y. (2024, May). Increasing Diversity in Lifelong AI Education: Workshop Report. In Proceedings of the AAAI Symposium Series (Vol. 3, No. 1, pp. 493-500).
Emerging Technology Observatory (ETO). (2024, April 3). The state of global AI safety research. Center for Security and Emerging Technology, Georgetown University. https://eto.tech/blog/state-of-global-ai-safety-research/
Mickelson, R. A., Maher, M. L., Hull, G., Dou, W., Fan, L., & Tadimalla, S. Y. (2023, June 29). Resisting Resurgent Authoritarianism With Trustworthy and Human-Centric AI. Paper presented at the International Sociological Association, Melbourne, Australia.
Tadimalla, S. Y., & Maher, M. L. (2024). AI Literacy for All: Adjustable Interdisciplinary Socio-technical Curriculum. 2024 IEEE Frontiers in Education Conference (FIE), Washington, DC, USA, pp. 1-9. https://doi.org/10.1109/FIE61694.2024.10893159
Mulligan, D. K., Kroll, J. A., Kohli, N., & Wong, R. Y. (2019). This thing called fairness: Disciplinary confusion realizing a value in technology. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-36.
Bricout, J., Greer, J., Fields, N., Xu, L., Tamplain, P., Doelling, K., & Sharma, B. (2022). The “humane in the loop”: Inclusive research design and policy approaches to foster capacity building assistive technologies in the COVID-19 era. Assistive Technology, 34(6), 644-652.
Evans, O., Cotton-Barratt, O., Finnveden, L., Bales, A., Balwit, A., Wills, P., ... & Saunders, W. (2021). Truthful AI: Developing and governing AI that does not lie. arXiv preprint arXiv:2110.06674.
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4).
Houser, K. A. (2019). Can AI solve the diversity problem in the tech industry: Mitigating noise and bias in employment decision-making. Stan. Tech. L. Rev., 22, 290.
Mantelero, A. (2018). AI and Big Data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754-772.
Sigfrids, A., Leikas, J., Salo-Pöntinen, H., & Koskimies, E. (2023). Human-centricity in AI governance: A systemic approach. Frontiers in Artificial Intelligence, 6, 976887.
Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137-157.