International Workshop on Fairness and Equity in Learning Analytics Systems (FairLAK)
In conjunction with LAK 2019 at Arizona State University
The FairLAK workshop proceedings are now available here!
Slide decks from workshop presentations:
0. Ken Holstein and Shayan Doroudi: Workshop intro slides
2. Bodong Chen and Haiyi Zhu: Towards Value-sensitive Learning Analytics Design.
5. David Lang: Strategic Omission and Risk Aversion: A Bias-Reliability Tradeoff.
6. Shayan Doroudi and Emma Brunskill: Fairer but Not Fair Enough: On the Equitability of Knowledge Tracing.
Type of event: Half-day workshop
Date/Time: Tuesday, March 5 (9am - 12:30pm), in conjunction with LAK 2019 at Arizona State University
Mixed participation: both authors of paper submissions and other members of the LAK community are encouraged to participate!
Participants will begin by sharing their personal objectives for the workshop. The organizers will then provide a brief overview of existing work on fairness in data-driven algorithmic systems. Researchers and practitioners will learn about state-of-the-art methods for auditing real-world learning analytics systems for potentially harmful biases, as well as strategies and methods for mitigating such biases.
1. Josh Gardner, Christopher Brooks and Ryan Baker: Evaluating the Fairness of Predictive Student Models through Slicing Analysis.
2. Bodong Chen and Haiyi Zhu: Towards Value-sensitive Learning Analytics Design.
3. Kyle Jones and Chase McCoy: Ethics in Praxis: Socio-Technical Integration Research in Learning Analytics.
4. Michael Meaney and Tom Fikes: Early-adopter Iteration Bias and Research-praxis Bias in the Learning Analytics Ecosystem.
5. David Lang: Strategic Omission and Risk Aversion: A Bias-Reliability Tradeoff.
6. Shayan Doroudi and Emma Brunskill: Fairer but Not Fair Enough: On the Equitability of Knowledge Tracing.
Kenneth is a doctoral researcher in Human-Computer Interaction at Carnegie Mellon University. His research focuses on the co-design and evaluation of AI systems that augment and amplify K-12 teachers' abilities, rather than trying to replace them. He has also studied industry practitioners' current practices, challenges, and needs for support in improving fairness in AI products used at scale. He has previously presented tutorials and symposia at SIGCSE and ICLS.
Shayan is a doctoral researcher in Computer Science at Carnegie Mellon University and a Visiting Student Researcher at Stanford University. His research spans educational data science, educational technology, and the learning sciences, with a particular focus on the prospects and limitations of computational models of student learning. One of his interests is studying the equitability of student models learned from data and finding ways to design more equitable student models.