Algorithm Bias and Discrimination in Digital Media 

Master's in Digital Media and Global Communications 

Navigating Ethical and Legal Landscapes in Digital Media 

Dr. Raquel Benitez Rojas 

Annabel Aniebiet Obot 

NF1010863 

20 October 2024

Table of Contents

Abstract
Introduction
Ethical Considerations
Legal Frameworks
Privacy and Data Security
Policy Recommendations
Conclusion
References

Abstract 

This paper explores the issue of algorithmic bias and discrimination in digital media, analyzing the ethical, legal, and privacy-related aspects that contribute to systemic inequalities. The study also addresses the implications of data collection and surveillance practices, emphasizing the need for robust privacy protections through measures like data anonymization. Drawing from real-world cases such as Amazon’s recruitment tool and Meta’s ad-targeting systems, the paper offers practical policy recommendations for governments, technology firms, and consumers to combat algorithmic bias and foster equitable digital ecosystems. 

Keywords: Algorithmic Bias, Discrimination, Ethics, Legal Frameworks, Privacy  

Introduction 

In the digital era, concerns surrounding algorithmic bias have become increasingly important. Algorithmic bias refers to systematically unfair outcomes produced by algorithms that discriminate against certain groups, often as a result of flawed or unrepresentative training data. This issue is particularly concerning in digital media, where algorithmic decisions influence a wide range of societal outcomes. It is therefore crucial to examine the ethical implications, legal frameworks, and privacy and data security issues associated with algorithmic systems. This paper explores these dimensions and offers policy recommendations to mitigate algorithmic bias and promote fairness in digital media. 

Ethical Considerations  

In today’s world, algorithms are embedded in nearly every aspect of media consumption, shaping the information users see and influencing their opinions. However, reliance on these automated systems has created numerous ethical challenges. Algorithmic bias is an ethical dilemma in digital media because biased algorithms produce unfair outcomes that favour some groups while discriminating against others. This bias often originates in the data used to train these systems, which can encode historical prejudices. 

A notable instance of algorithmic bias is Amazon’s experimental tool for screening applicants for software development and other technical positions. The algorithm was trained on ten years of resumes, most of which came from men. As a result, it learned to prefer male candidates, penalizing resumes that mentioned phrases such as "women's chess club captain" or all-women colleges (Yang, 2021). This illustrates how biased training data can lead to discriminatory practices that disadvantage already underrepresented groups. Such ethical failures undermine trust and credibility in digital systems and harm society by institutionalizing discrimination. 
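
To make this mechanism concrete, the following minimal Python sketch trains a simple text classifier on invented resumes and hiring labels (not Amazon’s actual data or system) and shows how a historical skew in outcomes becomes a negative weight on a gendered term:

# A minimal sketch, using invented data, of how a classifier trained on
# historically skewed hiring outcomes absorbs that skew.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past decisions (1 = advanced, 0 = rejected), skewed against
# resumes mentioning "women's" -- mirroring the bias described above.
resumes = [
    "captain of chess club, python developer",
    "women's chess club captain, python developer",
    "java developer, hackathon winner",
    "women's coding society member, java developer",
    "python developer, open source contributor",
    "women's college graduate, software engineer",
]
past_decisions = [1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, past_decisions)

# The learned weight for the token "women" comes out strongly negative:
# the model has encoded the historical skew, not candidate ability.
idx = vectorizer.vocabulary_["women"]
print("coefficient for 'women':", model.coef_[0][idx])

Even this toy model reproduces the reported pattern: the term is penalized purely because of its correlation with past rejections, not because of anything about candidate ability. 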

Amazon’s case also highlights the importance of transparency and accountability in algorithmic systems. By failing to disclose how the algorithm ranked candidates and what patterns it had identified, the company limited the outside scrutiny that might have detected and corrected the bias earlier. Furthermore, Amazon took a reactive approach, scrapping the tool while providing minimal transparency about the extent of the problem. Without transparency and accountability, stakeholders cannot fully understand what went wrong or trust that the same mistakes will not be repeated (Dastin, 2018). 

Legal Frameworks 

As digital media becomes more reliant on algorithms to deliver content and make decisions, addressing algorithmic bias through legal frameworks has become crucial. Traditional anti-discrimination laws, such as the U.S. Civil Rights Act, have long provided the foundation for protecting individuals from discrimination based on characteristics like race, gender, and ethnicity in various settings. However, these laws often exhibit significant inadequacies when applied to the complexities of algorithmic bias. 

One significant challenge in applying traditional anti-discrimination laws to AI systems is the “black box” nature of algorithms. The term “black box” describes how algorithmic models often operate in ways that are difficult to understand, even for their creators (Marsoof et al., 2023). This lack of transparency creates a barrier to identifying whether discrimination has occurred, as it becomes unclear how particular decisions are made. 
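
To illustrate why this frustrates legal scrutiny, the sketch below (using synthetic data) probes an opaque model from the outside with permutation importance, a common post-hoc auditing technique: an observer can estimate which inputs drive decisions in aggregate, yet still cannot reconstruct why any individual was treated as they were:

# A minimal sketch, using synthetic data, of probing a "black box" model
# from the outside rather than reading its internal logic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # hypothetical input features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # outcome driven mostly by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance reveals *which* features matter in aggregate, but
# not *why* a given person received a given decision -- the gap the
# "black box" critique points to.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(name, round(float(score), 3))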

Traditional anti-discrimination laws are fundamentally built on the premise that decisions can be scrutinized and understood, enabling individuals to determine whether they were treated unfairly. When algorithms make decisions, however, the absence of transparency and accessible evidence often makes it difficult to trace how an outcome was reached, and therefore harder to assess whether bias played a role. 

A key example of the legal response to algorithmic bias is the 2022 settlement between Meta and the U.S. Department of Justice (DOJ). Meta’s advertising platform used algorithms that determined which users were shown housing ads based in part on protected characteristics like race, gender, and national origin, violating the Fair Housing Act (FHA). As part of the settlement, Meta agreed to discontinue its Special Ad Audience tool and reform its ad delivery systems to address discriminatory outcomes. The settlement highlights how legal interventions can push technology companies to improve algorithmic transparency and ensure compliance with civil rights laws (Office of Public Affairs at the U.S. Department of Justice, 2022). 

This case aligns with broader legal efforts, such as the European Union’s General Data Protection Regulation (GDPR). Under Article 22 of the GDPR, individuals have the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects. By allowing individuals to obtain human intervention, express their views, and contest automated decisions, the GDPR promotes a more balanced and fair approach to automated decision-making (GDPR.eu, 2018). 

Privacy and Data Security 

Data collection practices are a key driver of algorithmic bias, particularly against marginalized communities. AI systems depend on large amounts of personal data gathered from interactions on digital platforms, which is then used to train algorithms and shape decision-making. Because data collection often concentrates disproportionately on specific communities, these systems tend to reinforce societal biases and disproportionately affect marginalized groups (Sarkar & Schultz, 2023). 

Vulnerable communities often face excessive surveillance, which increases the risk of data misuse. A key example is the Clearview AI controversy, in which the company built facial recognition software by scraping biometric data from social media and other public websites without consent. Privacy commissioners flagged serious risks, as the database held billions of images, including those of Canadians and children. Although Clearview AI no longer operates in Canada, it has resisted fully deleting the images. This case shows the ethical challenges of mass data collection and the importance of privacy protection in preventing further harm to vulnerable groups (Thompson, 2021). 

Anonymization offers one way to balance the benefits of AI development with the need for privacy protection. It involves removing or masking personal identifiers so that individuals can no longer be identified from the data. Privacy and data protection regulations should facilitate the use of personal data for AI training while guaranteeing that safeguards such as anonymization are in place. Anonymization also helps organizations comply with privacy regulations like the GDPR by minimizing the exposure of personal data (Marsoof et al., 2023). 
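
A minimal sketch of what such masking might look like in practice is shown below, using invented records. Note that salted hashing of direct identifiers is, strictly speaking, pseudonymization, so quasi-identifiers such as age and postcode are also coarsened to make re-identification harder:

# A minimal sketch, using an invented record layout, of masking identifiers
# before data is used for AI training.
import hashlib

SECRET_SALT = b"replace-with-a-secret-key"  # hypothetical; store securely, not in source

def mask_identifier(value: str) -> str:
    """Replace a direct identifier with an irreversible salted hash."""
    return hashlib.sha256(SECRET_SALT + value.encode()).hexdigest()[:12]

def generalize(record: dict) -> dict:
    """Coarsen quasi-identifiers so individuals are harder to single out."""
    decade = (record["age"] // 10) * 10
    return {
        "user": mask_identifier(record["email"]),  # no longer a direct identifier
        "age_band": f"{decade}-{decade + 9}",      # e.g. 34 -> "30-39"
        "region": record["postcode"][:2],          # keep only a coarse area code
        "clicks": record["clicks"],                # non-identifying training signal
    }

raw = {"email": "jane@example.com", "age": 34, "postcode": "M4B1X2", "clicks": 17}
print(generalize(raw))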

Policy Recommendations 

Algorithmic bias in digital media raises critical ethical, legal, and privacy concerns, requiring coordinated efforts from policymakers, industry leaders, and consumers. This section presents actionable recommendations to ensure that AI systems benefit all communities equitably and do not perpetuate existing biases. 

First, policymakers should focus on strengthening legal frameworks to promote algorithmic transparency and accountability. Current regulations, such as the GDPR, should be adapted to address algorithmic bias explicitly, including provisions for mandatory third-party audits of AI systems to assess fairness and detect bias. Additionally, establishing guidelines that align AI development with ethical principles and human rights will help ensure the responsible use of technology. 
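
The sketch below illustrates, with hypothetical audit data, one simple check such a third-party audit might include: comparing a system’s selection rates across demographic groups against the “four-fifths” threshold commonly used in U.S. employment contexts:

# A minimal sketch, using hypothetical audit data, of a basic fairness check:
# comparing selection rates across groups (the "four-fifths rule").
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Invented outcomes from an automated screening system.
audit_log = ([("group_a", True)] * 40 + [("group_a", False)] * 60
             + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(audit_log)
ratio = min(rates.values()) / max(rates.values())
print(rates)                              # {'group_a': 0.4, 'group_b': 0.2}
print("disparate impact ratio:", ratio)   # 0.5 < 0.8 -> flag for review

Real audits would go much further, examining error rates, data provenance, and individual explanations, but even this basic disparity check makes bias measurable and contestable. 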

Secondly, industry leaders must prioritize both transparency and education. Companies should disclose the inner workings of their algorithms, including the datasets used and the logic behind decision-making processes. However, as Shin et al. (2022) highlight, transparency alone is insufficient; it must be accompanied by educational initiatives that empower users to understand and challenge algorithmic decisions. This approach fosters greater public awareness and encourages the development of more inclusive algorithms that do not reinforce societal inequalities. 

Lastly, consumers also play an essential role in contesting algorithmic bias. By staying informed about how algorithms influence decision-making, users can engage more critically with digital platforms. They should take advantage of educational tools, advocate for transparency in AI, and if needed, report instances of biased outcomes to ensure continuous improvement in algorithmic systems. 

Conclusion 

Addressing algorithmic bias is crucial for building a fair and equitable digital landscape. By scrutinizing the ethical, legal, and privacy implications of biased AI systems, society can work toward reducing discrimination and ensuring that technological advancements benefit everyone equally. Only through such concerted action can we mitigate the negative impacts of algorithmic bias and create more just digital media ecosystems. 

 

References 

Dastin, J. (2018, October 11). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/ 

GDPR.eu. (2018, November 14). Art. 22 GDPR - Automated individual decision-making, including profiling. GDPR.eu. https://gdpr.eu/article-22-automated-individual-decision-making/ 

Marsoof, A., Luco, A., Tan, H., & Joty, S. (2023). Content-filtering AI systems–limitations, challenges and regulatory approaches. Information & Communications Technology Law, 32(1), 64–101. https://doi.org/10.1080/13600834.2022.2078395 

Office of Public Affairs at the U.S. Department of Justice (DOJ). (2022, June 21). Justice Department secures groundbreaking settlement agreement with Meta Platforms, formerly known as Facebook, to resolve allegations of discriminatory advertising. https://www.justice.gov/opa/pr/justice-department-secures-groundbreaking-settlement-agreement-meta-platforms-formerly-known 

Sarkar, R., & Schultz, B. (2023). Developments in Advertising and Consumer Protection. Business Lawyer, 79(1), 197–207. 

Shin, D., Hameleers, M., Park, Y. J., Kim, J. N., Trielli, D., Diakopoulos, N., Helberger, N., Lewis, S. C., Westlund, O., & Baumann, S. (2022). Countering Algorithmic Bias and Disinformation and Effectively Harnessing the Power of AI in Media. Journalism & Mass Communication Quarterly, 99(4), 887–907. https://doi.org/10.1177/10776990221129245 

Thompson, E. (2021, February 4). U.S. technology company Clearview AI violated Canadian privacy law: report. CBC. https://www.cbc.ca/news/politics/technology-clearview-facial-recognition-1.5899008 

Yang, J. R. (2021). Adapting Our Anti-Discrimination Laws to Protect Workers’ Rights in the Age of Algorithmic Employment Assessments and Evolving Workplace Technology. ABA Journal of Labor & Employment Law, 35(2), 207–240.