International Workshop on Fairness and Equity in Learning Analytics Systems (FairLAK)

In conjunction with LAK 2019 at Arizona State University

A workshop for researchers and practitioners to learn about and share:

    • Ways learning analytics systems might amplify, reduce, or produce social inequities
    • Strategies for monitoring and mitigating undesirable biases in learning analytics systems
    • Opportunities to improve fairness in real-world learning analytics systems through researcher-practitioner partnerships

Members of the broader LAK community are also encouraged to participate in this workshop, with or without a paper submission. There will be many opportunities for you to make valuable contributions during the workshop itself!

Important dates

December 3 - Original deadline for submissions

December 18 - Extended deadline for submissions!

January 4 - Notifications sent out

February 5 - Camera-ready deadline

March 5 - FairLAK Workshop

Workshop information and objectives

Type of event: Half-day workshop

Date/Time: Tuesday, March 5 (9am - 12:30pm), in conjunction with LAK 2019 at Arizona State University

Mixed participation: both authors of accepted submissions and other members of the LAK community are encouraged to participate!

The primary goals of this workshop are as follows:

  • Cross-disciplinary translation: Introduce LAK researchers and practitioners to the state-of-the-art in fairness and bias in data-driven algorithmic systems.
  • A venue to share work related to fairness in learning analytics: Share in-progress research/design work or on-the-ground experiences related to fairness and equity in learning analytics systems (e.g., algorithmic bias).
  • Developing a research agenda: Collaboratively develop a research agenda for more equitable learning analytics, based on open problems and directions identified by workshop participants.
  • Researcher and practitioner matchmaking: Identify opportunities for fruitful researcher-practitioner and researcher-researcher collaborations.

Tentative workshop schedule

A. Introductions and background (45 minutes):

Participants will share their personal objectives for the workshop. The organizers will then provide a rapid overview of existing work on fairness in data-driven algorithmic systems. Researchers and practitioners will learn about state-of-the-art methods for auditing real-world learning analytics systems for potentially harmful biases, along with strategies and methods for mitigating such biases.

B. Presentations of accepted contributions (60 minutes)

C. Collaborative group work (60 minutes)

    • Problem-finding (20 minutes, small-group discussions): Participants will identify pressing open issues around fairness and equity in learning analytics systems, collecting them on sticky notes
    • Sharing open problems and envisioning possible solutions (20 minutes, whole-group discussion): Groups will share the issues they have identified, synthesizing issues through affinity diagramming
    • Turning ‘possible solutions’ into directions for future research (20 minutes, small-group discussions): Groups will gather around particular areas of the growing affinity diagram (based on areas of interest), to discuss specific issues that interest them in greater detail – this time generating ideas for possible projects

D. Synthesis, speed dating, and next steps (45 minutes)

    • Developing a shared research agenda for Fair Learning Analytics (25 minutes, whole-group discussion): Based on the activities above, the organizers will work with participants to synthesize their ideas into a shared research agenda (i.e., a call to action for the LAK community, consisting of several concrete research and design directions)
    • Speed dating and closing notes (20 minutes): Researchers and practitioners will circulate throughout the room, engaging in brief (timed) conversations with others to begin exploring concrete opportunities for collaboration

Organizers

Kenneth is currently a doctoral researcher in Human-Computer Interaction at Carnegie Mellon University. His research focuses on the co-design and evaluation of AI systems that augment and amplify K-12 teachers' abilities, instead of trying to replace them. He has also done research investigating industry practitioners' current practices, challenges, and needs for support in improving fairness in AI products used at scale. He has previously presented tutorials and symposia at SIGCSE and ICLS.

Shayan is currently a doctoral researcher in Computer Science at Carnegie Mellon University and a Visiting Student Researcher at Stanford University. His research is in the educational data sciences, educational technology, and learning sciences, with a particular focus on the prospects and limitations of computational models of student learning. One of his interests is studying the equitability of student models learned from data and finding ways to design more equitable student models.