Epistemic Injustice in and through AI
Deadline for Expression of Interest: 15th November 2025, 23:59 (Anywhere on Earth)
Notification of Acceptance: 25th November 2025
Workshop Date: 30th November 2025, 09:30 am – 3:30 pm (AEST)
This AI-generated image illustrates how machine learning models often produce distorted, artefact-ridden outputs that lack intentionality and draw from the labour of human creators without credit (Adobe Firefly prompt, 24.09.2025: ‘epistemic injustice in and through ai’)
AI is rapidly transforming knowledge production and practices across a range of domains, yet AI technologies often embed and perpetuate epistemic injustices—privileging dominant perspectives while marginalising others.
We recognise that the rise of AI systems has reignited long-standing questions about who gets to author knowledge, whose voices are recognised, and how technology design contributes to the reproduction of global power hierarchies. Critical scholars across philosophy and postcolonial studies argue that knowledge is not neutral. This can be framed through the lens of epistemic injustice, where marginalised groups are denied credibility or interpretive tools.
This workshop invites contributions that explore epistemic injustice in and through AI, examining how AI systems can marginalise knowledge, culture, and access. Participants will engage in critical discussions about how these technologies contribute to the inequitable distribution of knowledge and authority, not only in their outputs but also through their design, development, deployment and governance.
This workshop is organised as part of the OzCHI 2025 conference in Sydney, Australia, and will be held in person.
The workshop will move between hands-on experimentation, reflective discussion, and collaborative mapping. Conversations will continually return to the question of equitable knowledge infrastructures in technical systems, innovation, and research practices.
09:30 – 09:45 Welcome and introduction
09:45 – 10:30 Presentations
10:30 – 11:30 Mapping and group discussion
11:30 – 12:30 Activity 1: Creative experimentation with generative AI
12:30 – 13:30 Lunch
13:30 – 14:15 Activity 2: Critical examination of AI
14:15 – 15:15 Discussion and co-creating a call to action
15:15 – 15:30 Closing
This workshop invites participants to explore the entangled dynamics of epistemic injustice, computing, and AI through collaborative and experiential inquiry and discussion. We propose six themes from contemporary work on AI applications to prompt discussion of scholarship with significant societal, and therefore justice, implications:
Generative AI’s outputs often reproduce epistemic injustices — both testimonial, by silencing or misrepresenting marginalised voices, and hermeneutical, by rendering certain lived experiences unintelligible.
AI systems have the potential to support and transform creative practice across writing, design, music, visual arts, and other artistic fields, yet they also raise pressing concerns around epistemic injustice.
Proponents of AI claim it can eliminate human biases in decision making across healthcare services. However, many AI systems instead amplify existing human biases, mirroring their designers’ worldviews and the limitations of the datasets used to build and train them, leading to societal consequences and injustices.
AI systems rely upon behind-the-scenes work that is rendered invisible as such labour is ignored, marginalised, taken for granted, or kept out of sight. The epistemic injustice of labour hierarchies extends into AI products themselves, where hegemonic meaning-making and representation become embedded in the technological design of AI “workers.”
In classrooms around the world, AI is no longer just a subject of speculation; it is becoming a participant. From personalised tutoring bots to AI-generated student feedback, large language models (LLMs) are reshaping higher education. Although LLMs can support the goals of Education 4.0, they also risk epistemic injustice, from undermining educator roles to reinforcing structural bias.
Much of the literature on what we are calling AI and Automated Decision-Making (ADM) foregrounds technical objects. Yet, in framing the issue through a certain technology or technical problem, these accounts tend to make the effects of such technologies appear inevitable, as though they were hardwired into the technical assemblage itself.
Want to read more? Download the full workshop proposal here
To participate in the workshop, please submit an expression of interest. Once your expression of interest has been assessed and accepted by the workshop organisers, you will need to register through the OzCHI conference website. We seek submissions reflecting on experiences, research, pedagogy, or design work that address how AI perpetuates epistemic injustice, or that offer strategies and practices for resisting and mitigating these harms. Submissions may take the form of a 500-word abstract and can include case studies or reflections from fields such as HCI, AI ethics, education, design, creative practice, digital culture, justice, and policy. We welcome diverse perspectives from researchers, practitioners, educators, and artists to critically engage with how AI is reshaping knowledge systems and how we might contribute to creating more inclusive, equitable, and just AI technologies.
Please submit your expression of interest through the form below:
For all queries and information please contact Diana Chamma (diana.chamma@sydney.edu.au)
Diana Chamma
University of Sydney
Naseem Ahmadpour
University of Sydney
Syed Ishtiaque Ahmed
University of Toronto
Nusrat Jahan Mim
University of Toronto
Wendy Qi Zhang
University of Sydney
Katherine di Bona
University of Sydney
Thida Sachathep
University of Sydney
Heather Horst
University of Sydney
Jenna Imad Harb
The Australian National University
Citation: Chamma, D., Ahmadpour, N., Ahmed, S. I., Mim, N. J., Zhang, W. Q., di Bona, K., Sachathep, T., Horst, H., & Harb, J. I. (2025). Epistemic injustice in and through AI. In Proceedings of the 2025 ACM Conference on Human-Computer Interaction (OzCHI '25). Association for Computing Machinery. https://doi.org/10.1145/3764687.3767279