Goal of the Unit: The goal of this unit is to understand the different application areas and domains where algorithmic FATE might impact our everyday interactions. Its main purpose is to put learners in a position to understand that most, if not all, of the systems with which they interact on a daily basis use some form of personalization to meet their unique needs: the systems adapt their functionality or content to the specific user. To do this, however, a system must gather information about the user. What are the advantages and disadvantages of these systems? How can learners prepare themselves to live with them?
Learning Objectives:
To understand bias in Search Engines
To become familiar with personalization in Search Engines
To understand the phenomenon of the Filter Bubble in Search Engines and Social Media
To appreciate how recommender systems work
To understand the pros and cons of recommender systems
To understand what Fake News is, why it is considered problematic and how it spreads
To appreciate the advantages, disadvantages and social impact of algorithmically mediated systems
Summary
As we have seen in the previous units, algorithms are already a big part of our everyday interaction with different systems, such as Search Engines, Social Media platforms, Recommendation Systems (e.g., Netflix) and systems that use Image Analysis (e.g., Facebook image tagging). We have also seen that different individuals understand algorithmic FATE differently, depending on their perceptions, experiences, cultural background, personality, etc. There have been various attempts to explain the decisions an algorithmic system takes on the user’s behalf; however, much work is still needed in this direction. Hence, it is very important to understand how these systems work and to be aware of the biases they may perpetuate.
In the first video, Dr Frank Hopfgartner and Dr Jo Bates will discuss bias in web search engines and how the results we receive in a search task might shape society. Specifically, this talk will help you understand the basic operations of search engines, identify examples of information bias in search results, recognize different methods for analyzing this bias, and describe how various stakeholders address it.
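To make the ranking step concrete, below is a minimal sketch (not taken from the talk) of a toy TF-IDF scorer in Python; the documents and query are invented. The point it illustrates is that whatever scoring function a search engine uses decides which results surface first, and thus which information users actually see.

```python
import math
from collections import Counter

# Toy document collection (invented for illustration only).
documents = {
    "d1": "nurses care for patients in hospitals",
    "d2": "engineers design bridges and machines",
    "d3": "nurses and doctors work in hospitals",
}

def tf_idf_rank(query, docs):
    """Rank documents by summed TF-IDF over the query terms."""
    n_docs = len(docs)
    tokenized = {doc_id: text.split() for doc_id, text in docs.items()}
    scores = {}
    for doc_id, tokens in tokenized.items():
        counts = Counter(tokens)
        score = 0.0
        for term in query.split():
            tf = counts[term] / len(tokens)               # term frequency
            df = sum(1 for t in tokenized.values() if term in t)
            idf = math.log((1 + n_docs) / (1 + df)) + 1   # smoothed inverse document frequency
            score += tf * idf
        scores[doc_id] = score
    # The sort order below is what a user experiences as "the results".
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(tf_idf_rank("nurses hospitals", documents))
```

Even in this toy setup, the choice of scoring function and of the underlying document collection determines which pages are visible at the top, which is where questions of information bias begin.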
The second video complements the first one by introducing the phenomenon of the ‘Filter Bubble’. Research has shown that many users are either largely unaware of the algorithmic filtering/reordering on the platforms they frequently use or hold very different beliefs about how it works. Personalization processes within information access systems, which automatically filter information based on user characteristics, can limit exposure to diverse content and create “filter bubbles”. In this talk, Eli Pariser will explain how this phenomenon occurs, what impact it has on users and how we can ‘burst’ the bubble.
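As a toy illustration of the mechanism (not Pariser’s own example), the following sketch, with an invented article list and click history, shows how ranking a feed by a user’s past engagement gradually pushes unfamiliar viewpoints out of sight:

```python
from collections import Counter

# Invented candidate articles, each labelled with a topic.
articles = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-right"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "politics-left"},
    {"id": 5, "topic": "science"},
]

# Invented record of what this user engaged with previously.
click_history = ["politics-left", "politics-left", "science"]

def personalized_feed(articles, history):
    """Rank articles by how often the user clicked that topic before."""
    interest = Counter(history)
    return sorted(articles, key=lambda a: interest[a["topic"]], reverse=True)

for article in personalized_feed(articles, click_history):
    print(article)
# Topics the user never clicked ("politics-right", "sports") sink to the
# bottom of the feed -- and with each new round of clicks the bubble tightens.
```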
Moving on, in the third thematic video, Dr Frank Hopfgartner will introduce Recommender Systems. He will specifically talk about what recommender systems do and the impact they have on our world. He takes a less academic lens on the topic, providing real-world examples and explaining why recommender systems can be problematic from society’s perspective. A basic introduction to recommender systems is provided, explaining the basic algorithms used to produce recommendations and how the user’s profile, i.e., the information the system keeps about a user, influences these algorithms.
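To give a flavour of one such basic algorithm, here is a minimal sketch of user-based collaborative filtering in Python, with an invented ratings matrix; it is illustrative only, not the specific method covered in the video. The user’s profile here is simply their past ratings, and it directly shapes what gets recommended:

```python
import math

# Invented user profiles: user -> {item: rating}.
ratings = {
    "alice": {"film_a": 5, "film_b": 4, "film_c": 1},
    "bob":   {"film_a": 4, "film_b": 5, "film_d": 4},
    "carol": {"film_c": 5, "film_d": 2},
}

def cosine(u, v):
    """Cosine similarity between two rating profiles."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(target, ratings):
    """Score unseen items by the ratings of similar users, weighted by similarity."""
    scores = {}
    for other, other_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], other_ratings)
        for item, rating in other_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(recommend("alice", ratings))  # film_d surfaces, mainly via Bob's similar taste
```

Notice that the output depends entirely on the profile the system has accumulated about “alice”: a richer or differently skewed profile would yield different recommendations, which is exactly why profiling raises the societal questions discussed in the video.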
Dr Anastasia Giachanou provides an introduction to Fake News in the next video, explaining the problem of Fake News, which dates back to the 1960s. These days, however, with anonymity and the vast numbers of users on social media, fake news propagates much faster and poses a serious threat to democracy, justice, public trust, freedom of expression, journalism and the economy. We have seen the consequences of propagating fake news in the political, economic, medical and social spheres of society. Dr Giachanou will go through real examples and explain how fake news is detected on information access platforms.
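As a rough illustration of a common detection pipeline (not Dr Giachanou’s exact method), the sketch below assumes scikit-learn is installed and uses an invented four-document dataset: article text is turned into features and a supervised classifier is trained on labelled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, invented training examples -- real systems use large labelled corpora.
texts = [
    "miracle cure discovered doctors hate this one trick",
    "you will not believe what this celebrity secretly did",
    "central bank announces quarter point interest rate rise",
    "city council approves budget for new public library",
]
labels = ["fake", "fake", "real", "real"]

# Text features (TF-IDF) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["shocking secret trick doctors will not tell you"]))
# Research systems add propagation patterns and psycho-linguistic cues
# (as in Giachanou et al.'s work below), but the basic pipeline shape is similar.
```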
Finally, computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use “cognitive services.” Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. In this video, Pinar Barlas presents a study examining the extent to which image tagging algorithms mimic this phenomenon, with surprising results.
In addition to the videos, there are suggested readings from the scientific literature with which learners can deepen their knowledge of the above-mentioned areas. These publications act as supporting material: learners can explore further any topic of interest, also using the bibliographic references provided in each article.
Otterbacher, J., Bates, J., & Clough, P. (2017, May). Competent men and warm women: Gender stereotypes and backlash in image search results. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 6620-6631). Link: https://dl.acm.org/doi/10.1145/3025453.3025727
Giachanou, A., & Rosso, P. (2020, October). The battle against online harmful information: The cases of fake news and hate speech. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 3503-3504). Link: https://dl.acm.org/doi/10.1145/3340531.3412169
Giachanou, A., Ghanem, B., & Rosso, P. (2021). Detection of conspiracy propagators using psycho-linguistic characteristics. Journal of Information Science, 0165551520985486. Link: https://journals.sagepub.com/doi/full/10.1177/0165551520985486
Barlas, P., Kyriakou, K., Guest, O., Kleanthous, S., & Otterbacher, J. (2021). To "See" is to Stereotype: Image Tagging Algorithms, Gender Recognition, and the Accuracy-Fairness Trade-off. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3), 1-31. Link: https://dl.acm.org/doi/10.1145/3432931
Barlas, P., Kleanthous, S., Kyriakou, K., & Otterbacher, J. (2019, June). What makes an image tagger fair? In Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization (pp. 95-103). Link: https://dl.acm.org/doi/10.1145/3320435.3320442
For this unit, two activities are available, which will enable you to understand application areas and domains, such as Search Engines and Image Tagging Algorithms, where algorithmic FATE has impacted our everyday interactions.
You can find the activities description and a submission form here.
By taking this Quiz you will be able to assess the knowledge you gained from this Unit.
You will get feedback immediately via Google Forms, once your responses are submitted.