Goal of the unit: The second unit explores the sources of bias in an algorithmic system and the possible solutions for mitigating bias proposed in the literature.
Learning objectives:
To explore the different sources of algorithmic bias in a system (data, algorithm/developer, user).
To learn about the different types of bias found in specific sources, e.g., Web data and social data.
To understand the basic methods for mitigating bias in a system and promoting fairness.
To understand how diversity may affect the fairness of an algorithmic system.
Summary
Mitigating bias in algorithmic systems is a crucial research area that has driven the rapid development of methods and automatic bias assessment tools for building fairness-aware models. Given the complexity of the problem and the fact that it involves multiple stakeholders, there is a need to distinguish among the sources of bias and the solutions proposed to address them. In this unit, we will explore the sources of bias in an algorithmic system and how diversity can affect the introduction of bias into a system.
The first video lecture by Prof. Fausto Giunchiglia (University of Trento) is entitled “Diversity, Bias and Related Issues”. It presents the sources of diversity and methods for handling diversity so as to prevent the introduction of bias into a system. Prof. Giunchiglia also explains that the problem of bias arises because data generators, application developers, and application users all live in different contexts and, as such, bring diverse perspectives. This diversity of perspectives is an unavoidable source of bias.
The second video lecture by Prof. Jahna Otterbacher (Open University of Cyprus, CYENS Centre of Excellence) presents the landscape of the sources of bias, as well as the solutions being proposed to address them, focusing on the involvement of various stakeholders. In the second part of the video lecture, Prof. Otterbacher describes examples of her previous work on auditing proprietary computer vision systems for social biases, positioning this work within the framework for mitigating bias as well as the emerging science of machine behavior.
The third video lecture by Prof. Evaggelia Pitoura (University of Ioannina) presents the various definitions of fairness, the connection between fairness and diversity, and how fairness-aware data management can be achieved in social networks. In the lecture, Prof. Pitoura addresses the issues of fairness, bias, and diversity and discusses the ways in which they manifest in social networks.
Six suggested readings accompany the above video lectures. The article by Giunchiglia et al. (2021) on diversity and algorithmic transparency relates to the first video lecture; it highlights the connection between diversity, bias, and transparency, and the significance of diversity for algorithmic transparency. The article by Orphanou et al. (2021), which relates to the second video lecture, provides a high-level, “fish-eye” survey of problems and solutions for mitigating bias in algorithmic systems, along with the stakeholders involved in these processes. Two articles by Pitoura and colleagues accompany the third video lecture. The first, “Social-minded Measures of Data Quality: Fairness, Diversity, and Lack of Bias” (2020), introduces three socially minded measures (lack of bias, fairness, and diversity) for evaluating data-driven decision-making systems. The second, “Fairness in Rankings and Recommendations: An Overview” (2021), presents a solid framework of definitions, models, and methods for ensuring fairness in rankings and recommendations. Two additional articles accompany the video lectures: Baeza-Yates’ “Bias on the Web” (2018) and “Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries” by Olteanu et al. (2019) give readers an overview of the complex issues of fairness and bias on the Web and in social media, as well as suggested solutions to address these issues.
Orphanou, K., Otterbacher, J., Kleanthous, S., Batsuren, K., Giunchiglia, F., Bogina, V., ... & Kuflik, T. (2021). Mitigating Bias in Algorithmic Systems: A Fish-Eye View of Problems and Solutions Across Domains. arXiv preprint arXiv:2103.16953. Link: https://arxiv.org/abs/2103.16953
Giunchiglia, F., Otterbacher, J., Kleanthous, S., Batsuren, K., Bogina, V., Kuflik, T., & Tal, A. S. (2021). Towards Algorithmic Transparency: A Diversity Perspective. arXiv preprint arXiv:2104.05658. Link: https://arxiv.org/abs/2104.05658
Pitoura, E. (2020). Social-minded Measures of Data Quality: Fairness, Diversity, and Lack of Bias. Journal of Data and Information Quality (JDIQ), 12(3), 1-8. Link: https://dl.acm.org/doi/abs/10.1145/3404193
Pitoura, E., Stefanidis, K., & Koutrika, G. (2021). Fairness in Rankings and Recommendations: An Overview. arXiv preprint arXiv:2104.05994. Link: https://arxiv.org/abs/2104.05994
Baeza-Yates, R. (2018). Bias on the web. Communications of the ACM, 61(6), 54-61. Link: https://dl.acm.org/doi/10.1145/3209581
Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2, 13. Link: https://doi.org/10.3389/fdata.2019.00013
For this unit, two activities are available, which will enable you to explore the sources of bias in an algorithmic system and the possible solutions for mitigating bias.
You can find the description of the activities and a submission form here.
By taking this Quiz, you will be able to assess the knowledge you have gained from this Unit.
You will receive feedback immediately via Google Forms once your responses are submitted.