International Workshop on Fairness and Equity in Learning Analytics Systems (FairLAK)

In conjunction with LAK 2019 at Arizona State University

A workshop for researchers and practitioners to learn about and share:

    • Ways learning analytics systems might amplify, reduce, or produce social inequities
    • Strategies for monitoring and mitigating undesirable biases in learning analytics systems
    • Opportunities to improve fairness in real-world learning analytics systems through researcher-practitioner partnerships

Workshop information and objectives

Type of event: Half-day workshop

Date/Time: Tuesday, March 5, 2019 (9:00am – 12:30pm)

Mixed participation: both attendees with an accepted paper and other members of the LAK community are encouraged to participate!

The primary goals of this workshop are as follows:

  • Cross-disciplinary translation: Introduce LAK researchers and practitioners to the state-of-the-art in fairness and bias in data-driven algorithmic systems.
  • A venue to share work related to fairness in learning analytics: Share in-progress research/design work or on-the-ground experiences related to fairness and equity in learning analytics systems (e.g., algorithmic bias).
  • Developing a research agenda: Collaboratively develop a research agenda for more equitable learning analytics, based on open problems and directions identified by workshop participants.
  • Researcher and practitioner matchmaking: Identify opportunities for fruitful researcher-practitioner and researcher-researcher collaborations.

Tentative workshop schedule

A. Introductions and background (30 minutes):

Participants will share their personal objectives for the workshop. The organizers will then provide a rapid overview of existing work on fairness in data-driven algorithmic systems, introducing state-of-the-art methods for auditing real-world learning analytics systems for potentially harmful biases, along with strategies for mitigating such biases.

B. Presentations of accepted and invited talks (80 minutes total: 8 minutes per talk, plus 5 minutes for questions and discussion)

1. Josh Gardner, Christopher Brooks and Ryan Baker: Evaluating the Fairness of Predictive Student Models through Slicing Analysis.

2. Bodong Chen and Haiyi Zhu: Towards Value-sensitive Learning Analytics Design.

3. Kyle Jones and Chase McCoy: Ethics in Praxis: Socio-Technical Integration Research in Learning Analytics.

4. Michael Meaney and Tom Fikes: Early-adopter Iteration Bias and Research-praxis Bias in the Learning Analytics Ecosystem.

5. David Lang: Strategic Omission and Risk Aversion: A Bias-Reliability Tradeoff.

6. Shayan Doroudi and Emma Brunskill: Fairer but Not Fair Enough: On the Equitability of Knowledge Tracing.

C. Collaborative group work (60 minutes)

    • Problem-finding (20 minutes, small-group discussions): Participants will identify pressing open issues around fairness and equity in learning analytics systems, collecting issues on sticky notes in small-group discussions
    • Sharing open problems and envisioning possible solutions (20 minutes, whole-group discussion): Groups will share the issues they have identified, synthesizing issues through affinity diagramming
    • Turning ‘possible solutions’ into directions for future research (20 minutes, small-group discussions): Groups will gather around particular areas of the growing affinity diagram (based on areas of interest), to discuss specific issues that interest them in greater detail – this time generating ideas for possible projects

D. Synthesis, speed dating, and next steps (40 minutes)

    • Developing a shared research agenda for Fair Learning Analytics (20 minutes, whole-group discussion): Based on the activities above, the organizers will work with participants to synthesize their ideas into a shared research agenda (i.e., a call to action for the LAK community, consisting of several concrete research and design directions)
    • Speed Dating and Closing Notes (20 minutes): Researchers and practitioners will circulate throughout the room, engaging in brief (timed) conversations with others to begin exploring concrete opportunities for collaboration


Workshop organizers

Kenneth is a doctoral researcher in Human-Computer Interaction at Carnegie Mellon University. His research focuses on the co-design and evaluation of AI systems that augment and amplify K-12 teachers' abilities, rather than attempting to replace them. He has also investigated industry practitioners' current practices, challenges, and needs for support in improving fairness in AI products used at scale. He has previously presented tutorials and symposia at SIGCSE and ICLS.

Shayan is a doctoral researcher in Computer Science at Carnegie Mellon University and a Visiting Student Researcher at Stanford University. His research spans educational data science, educational technology, and the learning sciences, with a particular focus on the prospects and limitations of computational models of student learning. One of his interests is studying the equitability of student models learned from data and designing more equitable student models.