"Artificial Intelligence & AI & Machine Learning" by Mike MacKenzie is licensed under CC BY 2.0
Artificial intelligence is more prevalent in our lives today than ever before. Artificial intelligence (AI) is the emulation of human intellect by machines that are able to solve problems. It appears in areas such as smartphones, web searches, and advertisements. The goal of AI, as a subfield of computer science, is to build intelligent machines that can carry out human activities such as comprehending language, making judgments, and reaching decisions. AI does its job by using algorithms to learn and perform tasks, but it is also prone to errors such as bias, which arises from the human input that shapes those algorithms.
This research focuses specifically on how bias in artificial intelligence is created and how human input plays a central role in the different types of bias prevalent within AI.
"Algorithm" by Markus Spiske is licensed under the CC0
Although Hollywood movies portray AI as human-like robots, this is not quite the case in the real world, as AI has not reached that level. Instead, AI plays a role in many other kinds of tools, such as smartphones, the internet, navigation systems, and even social media. Alan Turing, a pioneering computer scientist, is known for his early concept of AI as "thinking machines" that could reason like a human being.
AI technology is man-made to replicate human behavior and make human lives easier on a day-to-day basis. Overall, artificial intelligence (AI) is the development of smart machines that are capable of carrying out operations that require human intellect, such as perception, problem-solving, reasoning, learning, and natural language processing. AI is achieved by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically as it encounters different patterns or features in the data (Artificial Intelligence: What It Is and Why It Matters, 2022). AI is a very broad subject but can be categorized into three main fields: natural language processing, machine learning, and deep learning. Natural language processing is teaching computers how to understand human language. Machine learning uses algorithms and data so that computers can learn patterns and relationships; by doing so, AI is able to predict human behavior and actions. Lastly, deep learning entails building neural networks that learn and make decisions independently. Although AI has the ability to simplify work and improve in the future, it also raises societal and ethical issues, such as bias in algorithms, that need to be addressed.
"Discrimination Forever" by Diane Yap is licensed under CC BY 4.0
In technology, bias arises when a machine learning algorithm generates findings that are systematically biased as a result of false assumptions (Pratt, 2020). Bias in technology is a growing concern, as more and more technologies rely on machine learning algorithms to make decisions that affect people's lives. When algorithms are biased, they can generate findings that are systematically discriminatory against certain groups of people. This is a significant problem because it could result in unfair treatment or exclusion of those groups.
There are several ways that bias can arise in technology. One common way is through skewed or prejudiced data used to train algorithms. If the data used to train an algorithm is not representative of the entire population, then the algorithm may not accurately represent everyone. For example, if facial recognition technology is trained on predominantly white faces, it may not be as accurate in identifying people with darker skin tones.
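To make this concrete, a per-group accuracy check is one simple way to surface the kind of skew described above. The sketch below is purely illustrative: the function name, groups, and toy face-matching data are assumptions, not drawn from any real system.

```python
# Hypothetical sketch: compare a model's accuracy across demographic
# groups. A large gap suggests the training data may have
# under-represented one group. All names and data are illustrative.

def accuracy_by_group(y_true, y_pred, groups):
    """Return the model's accuracy computed separately per group."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        stats[g] = correct / len(idx)
    return stats

# Toy face-matching results: the model is right for every member of
# group A but wrong for the under-represented group B.
y_true = ["match", "match", "match", "match", "match", "match"]
y_pred = ["match", "match", "match", "match", "no_match", "no_match"]
groups = ["A", "A", "A", "A", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
```

A gap like the one this toy data produces (perfect accuracy for one group, none for the other) is exactly the signal that would prompt a closer look at how representative the training set was.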
Another way bias can arise is through developer subjectivity. If the developers who create a technology have unconscious biases, they can be reflected in the technology they create. This is why it is crucial for developers to be aware of their biases and take steps to mitigate them.
Additionally, bias can arise through the unintended effects of automated decision-making methods. For example, an algorithm designed to screen job applicants may inadvertently discriminate against certain groups of people, such as women or people of color, because of how the algorithm is designed.
The consequences of technology bias can be significant. It can perpetuate social inequalities, exclude certain groups of people from using certain technology, and result in discrimination. To ensure that their products are fair and open to all users, developers must proactively identify and address bias in their technology. This includes using representative data to train algorithms, being aware of their biases, and designing transparent and accountable algorithms. By taking these steps, developers can help to create a more just and equitable society through AI.
"How to Reduce Bias and Become More Objective" by Darya Sinusoid is licensed under CC BY 4.0
AI and bias meet at the crossroads of programs and algorithms. When AI is created, bias is rarely far behind. Artificial intelligence can inherit human biases because humans are the ones who create its algorithms. AI is simply doing its job: sorting data and finding patterns. In other words, if bias is part of the input, bias will be part of the outcome. Thus, AI is effectively programmed to be biased whenever developers supply data that contains bias. This is the downside of these algorithms being so capable: they simply learn, and they have a hard time telling right from wrong. Although AI can exhibit biases, this is not always intentional or meant to harm. For now, bias and artificial intelligence appear tightly connected, and that connection presents a troubling problem. But as the technology matures, there may be new ways to separate bias from AI.
"Keyboard, Hands..." by PXFUEL is licensed under CC BY 4.0
AI bias is created and formed largely through the human input behind artificial intelligence algorithms. AI algorithms are created by humans. These algorithms are built to learn, and they are very good at it, but they learn based on human input. Humans choose which data is fed into an algorithm, effectively choosing what the AI learns from. Depending on the data chosen, this can cause a slew of problems, which will be discussed later on. Human input can also determine how the algorithm applies its results. Even though AI seems fully automated from an outside perspective, many parts of it involve human input. When unconscious bias sneaks into AI, it can snowball and continue to grow. This is the trade-off of these algorithms being so smart: they are built to learn and to keep building on the data they are given, a feature that comes with both pros and cons.
Training data is a common source of problems when AI algorithms are being built and are in their early stages of learning. Training data is the data used to build an algorithm before it is deployed in the real world. Often there are flaws in this training data, giving bias an easy pathway in. Problems occur when the training data lacks different perspectives or a wide range of examples. If this bias is not detected early on, it can be baked into an algorithm and become very hard to detect and eliminate later. To summarize, AI bias is created by human input and early technical flaws. Humans choose which data goes into the algorithm, and humans also input their own data into it. Because of AI's advanced learning abilities, it can mistake bias and stereotypes for fact. Dr. Sanjiv Narayan, a professor at the Stanford University School of Medicine, has recently been studying AI and the role bias plays in it. He said, "All data is biased. That is not paranoia. This is fact" (Narayan, 2021). It is clear that artificial intelligence and its algorithms do in fact contain bias.
"Facial Recognition" by Mike MacKenzie is licensed under CC BY 4.0
As we now know, artificial intelligence algorithms contain bias. But how may these biases and stereotypes affect people, and what do they look like? There are a multitude of forms of bias in AI, for example:
Racial Bias - Racial bias in AI is very common in programs such as facial recognition. These systems have been found to be less accurate for people with darker skin tones because the databases they are trained on contain fewer images of those people. This leads to a higher likelihood of misidentification and false arrests for people of color. A study by the American Civil Liberties Union supported this, finding that the AI systems failed to tell Black people apart from one another (Burton-Harris & Mayor, 2020). This can have major consequences, such as wrongful arrests of misidentified people (Brodkin, 2023). Another example was found at one of the biggest companies in the world: Amazon built an employee-screening AI that aimed to increase diversity in the workplace. However, it was created by a majority-male group, and its training data reflected those men. This created problems, as the AI began to prefer male candidates. Female candidates were penalized for gaps in their resumes caused by leaves such as childbirth, which the AI could not recognize or take into account (Narayan, 2021).
Gender Bias - Many AI systems aim to create equal-opportunity systems for hiring or job searches. However, these systems often do the exact opposite. Hiring and decision-making algorithms tend to be biased against women: they either prioritize male candidates or penalize women for taking breaks in their careers (such as for childbirth or afterwards). The Amazon example also falls into this category, as the system was built by men and its data therefore failed to reflect a diversity of genders.
Socioeconomic Bias - Many AI systems are used for credit scoring, mortgaging, and lending. These systems are often biased against people from low-income backgrounds, who may not have access to the same financial resources and opportunities as others and can be viewed by the algorithm as poor candidates. Many of these systems perpetuate racial bias as well: a study at UC Berkeley found that mortgage algorithms charged Latino and Black borrowers higher interest rates (UC Berkeley Public Affairs, 2018).
As you can see, bias appears in many different forms of AI across different areas of technology. Bias can easily sneak into any type of algorithm. Despite the goal of AI being to remove discrimination, AI instead tends to perpetuate it in much the same ways humans do, due to the nature of how AI is created.
"Security One Cyborg" by Geralt is in the Public Domain, CC0
AI bias can clearly be harmful, especially in a world where the technology is rapidly becoming more accessible, available, and user friendly. This means we must be aware that AI bias exists. The best response is to detect the bias and, where possible, eliminate it. This sounds easy but can actually be very complicated. Data scientist Maarten Grootendorst said it best: "In order to detect bias, one has to be aware of its existence" (Grootendorst, 2020). We must acknowledge that bias exists before we can attempt to tackle the problem.
There are multiple ways to attack the problem and attempt to detect bias in AI. One of them is human review, which can take different forms: AI company employees testing their products and flagging bias they see, or users proofreading responses, noticing when bias appears, and choosing not to use that information. Another popular approach to reducing bias in algorithms is regulation. A study by the company DataRobot found that 81% of businesses wanted government regulation to define and prevent AI bias (DataRobot, 2022). This shows that businesses see this as a major problem and want government intervention to keep it from spreading. The government could also get involved by fining companies that continue to release biased algorithms without attempting to fix them. A company that fails to take action also risks its own reputation.
With artificial intelligence, we can see glimpses of bias in different ways based on the way in which we ask questions, the types of responses we ask for, and in many other instances in which we interact with different types of AI programs. These biases present in artificial intelligence can completely alter responses and can be harmful in many ways.
The first example of how the biases of artificial intelligence can negatively affect people is the way they reinforce stereotypes. Stereotypes have been present throughout history and have always been a lingering problem in society, as most people find them unfair and discriminatory towards certain groups. Because artificial intelligence is trained on human-generated data, it takes in the good from human inputs, but also the bad. During these human interactions, artificial intelligence picks up the biases it is fed. As a result, AI systems trained on this biased data can reinforce harmful stereotypes and perpetuate discrimination against certain groups of people.
Building on that example, the biases of AI can harm people through the discrimination that can be present in AI against certain groups. Based on the human data they learn from, groups such as people of color, women, and other marginalized communities could be negatively affected in areas like hiring, lending, and healthcare. This becomes especially harmful as we begin to rely on the efficiency of artificial intelligence: people outside these marginalized groups reap the benefits, while those within them get fewer opportunities to thrive.
Another example of how the biases of artificial intelligence can negatively affect people is inaccuracy in AI's responses. Artificial intelligence systems with bias can produce inaccurate or incorrect results, leading to poor decision-making and potentially harmful outcomes. As we ask AI for answers, the biases it has absorbed can cause it to give incorrect answers, which can lead to harmful decisions depending on the situation. For example, recall the Amazon case, where the company tried to use AI to find and select job candidates. The system, developed by a predominantly male team, was found to discriminate against women, as it selected mostly men as eligible for the job.
These harms also raise the legal and ethical issues that come with biased AI. Some forms of bias may be illegal or violate ethical principles, exposing organizations to potential lawsuits and reputational damage. This in turn reduces trust in artificial intelligence: AI bias erodes trust in AI systems and can make people less willing to use or rely on them, limiting the potential future benefits of the technology.
"Artificial Intelligence & AI & Machine Learning" by mikemacmarketing is licensed under CC BY 2.0
Looking to solve these problems of biases within artificial intelligence, there are a couple of ways to prevent and eliminate biases that have been observed in AI systems. It is important to note that preventing and eliminating biases in AI systems is an ongoing process that requires continual attention and adaptation. By staying vigilant and proactive, we can work to create AI systems that are fair, inclusive, and ethical.
Three ways we can prevent the biases we see in artificial intelligence are diversifying human inputs, establishing clear guidelines and ethical principles, and using adversarial training techniques or other methods. Diversifying your dataset helps ensure that it represents a variety of individuals and groups, including underrepresented populations. Establishing clear guidelines and ethical principles for data collection, data processing, and model development helps ensure that AI systems are designed to respect the rights and dignity of all individuals. Furthermore, adversarial training techniques and related methods intentionally introduce counterfactual examples that challenge the model's assumptions and reduce the potential for bias.
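One simple way to generate the counterfactual examples just mentioned is to swap demographic terms in training text so the model sees both versions of every sentence. The Python sketch below is a hedged illustration under that assumption; the word list and helper name are made up for this example, not a standard API.

```python
# Illustrative counterfactual data augmentation: swap gendered words
# so a model trains on both versions of each sentence. The word pairs
# below are a tiny assumed sample, not a complete list.

SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def counterfactual(sentence):
    """Return the sentence with gendered terms swapped."""
    return " ".join(SWAPS.get(w, w) for w in sentence.lower().split())

examples = ["he took a leave from his job", "she led the team"]
# Train on originals plus their counterfactuals, so gendered terms
# no longer correlate with only one kind of context.
augmented = examples + [counterfactual(s) for s in examples]
print(augmented)
```

The design idea is that if "took a leave" appears equally often with "he" and "she" in the augmented data, the model has less opportunity to learn the biased association in the first place.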
We can work to eliminate biases within artificial intelligence by auditing the AI system at hand, implementing changes to its datasets, and regularly monitoring and updating it. Conducting an audit of your AI system identifies and evaluates any biases or inaccuracies in the results it produces. Implementing changes to your dataset, model, or algorithm addresses the biases identified in the audit. Regularly monitoring and updating your AI system ensures that it remains unbiased and accurate over time.
There are many reasons why these biases within artificial intelligence matter for our future as humans. As we move towards using AI in everyday necessities, where it can make life easier and more efficient, we need to understand it better and fix its defective parts so that it does not make mistakes. As of now, AI could have many positive effects on the future, helping people be better off; but alongside them are serious harms today, and those are what we must fix in order to use AI the way we intend.
One reason these biases within AI matter for our future is social progress: AI bias can slow or hinder social progress, perpetuating existing inequalities and preventing people from achieving their full potential. Another is technological advancement, as bias in AI can limit the technology's potential to solve complex problems and make breakthroughs in fields such as healthcare, climate change, and space exploration. In the future job market, as AI becomes more prevalent in our society, bias in AI can result in job discrimination, perpetuating inequalities. These biases also matter for safety and security: AI bias can have real-world consequences, such as biased decision-making in autonomous vehicles or biased profiling in law enforcement. When thinking about ethics and morals, AI bias raises essential questions about the role of AI in our society and how we ensure it is used fairly and justly. Lastly, there is the matter of education and awareness: we will all need to understand AI systems' potential biases and limitations, and how to mitigate them, to ensure a fair and just future for all.
"Artificial Intelligence, AI" by Mike MacKenzie is licensed under CC BY 4.0
In summary, artificial intelligence has fundamentally changed how humans interact with technology and has the potential to drastically alter many facets of our future. However, we must recognize that AI is not perfect and that it can reinforce societal biases. These biases can have negative consequences for both individuals and society as a whole, promoting injustice and inequality. They must be addressed to ensure that AI is created and used ethically and responsibly. Increasing diversity in AI development teams, implementing fairness and accountability mechanisms, and routinely auditing and testing AI systems for bias can all help. Ultimately, our future will be shaped by how we respond to AI's biases, their effects on society, and how well we can take advantage of this tremendous technological advancement.
Max Patenaude, Communications and Marketing Major, Class of 2026
Ronia Clarence, Architecture Major, Class of 2025
Ty Higgins, Finance and Economics Major, Class of 2026
Agarwal, R., Bjarnadottir, M., Rhue, L., Dugas, M., Crowley, K., Clark, J., & Gao, G. (2022). Addressing algorithmic bias and the perpetuation of health inequities: An AI bias aware framework. Health Policy and Technology, 100702. doi:10.1016/j.hlpt.2022.100702
Al-Khulaidy Stine, A., & Kavak, H. (2023). 4 - bias, fairness, and assurance in AI: Overview and synthesis. In F. A. Batarseh, & L. J. Freeman (Eds.), AI assurance (pp. 125-151) Academic Press. doi:10.1016/B978-0-32-391919-7.00016-0 Retrieved from https://www.sciencedirect.com/science/article/pii/B9780323919197000160
Artificial intelligence (AI) — top 3 pros and cons. (2018). In ProCon.org (Ed.), ProCon headlines. Santa Monica, CA, USA: ProCon. Retrieved from https://rwulib.idm.oclc.org/login?url=https://search.credoreference.com/content/entry/proconph/artificial_intelligence_ai_top_3_pros_and_cons/0
Artificial Intelligence (AI) – what it is and why it matters. (n.d.). Retrieved March 28, 2023, from https://www.sas.com/en_in/insights/analytics/what-is-artificial-intelligence.html#:~:text=AI%20works%20by%20combining%20large,or%20features%20in%20the%20data
Brodkin, J. (2023, January 4). Black man wrongfully jailed for a week after face recognition error, report says. Ars Technica. Retrieved April 25, 2023, from https://arstechnica.com/tech-policy/2023/01/facial-recognition-error-led-to-wrongful-arrest-of-black-man-report-says/
Chiarella, S. G., Torromino, G., Gagliardi, D. M., Rossi, D., Babiloni, F., & Cartocci, G. (2022). Investigating the negative bias towards artificial intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Computers in Human Behavior, 137, 107406. doi:10.1016/j.chb.2022.107406
Grootendorst, M. (2020, January 31). How to detect bias in AI. Retrieved March 28, 2023, from https://towardsdatascience.com/how-to-detect-bias-in-ai-872d04ce4efd
Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. Retrieved March 28, 2023, from https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency
Marr, B. (2022, November 08). The problem with biased AIs (and how to make AI better). Retrieved March 28, 2023, from https://www.forbes.com/sites/bernardmarr/2022/09/30/the-problem-with-biased-ais-and-how-to-make-ai-better/?sh=1a5608fa4770
Nicol, T. L. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication & Ethics in Society, 16(3), 252-260. doi:https://doi.org/10.1108/JICES-06-2018-0056
Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10), 100347. doi:10.1016/j.patter.2021.100347
Pratt, M. (2020, July 01). What is machine learning bias (AI bias)? Retrieved March 28, 2023, from https://www.techtarget.com/searchenterpriseai/definition/machine-learning-bias-algorithm-bias-or-AI-bias#:~:text=Machine%20learning%20bias%2C%20also%20sometimes,in%20the%20machine%20learning%20process
Shin, D., Hameleers, M., Park, Y. J., Kim, J. N., Trielli, D., Diakopoulos, N., . . . Baumann, S. (2022). Countering algorithmic bias and disinformation and effectively harnessing the power of AI in media. Journalism & Mass Communication Quarterly, 99(4), 887-907. doi:10.1177/10776990221129245
Siwick, B. (2021, November 30). How AI bias happens – and how to eliminate it. Retrieved March 28, 2023, from https://www.healthcareitnews.com/news/how-ai-bias-happens-and-how-eliminate-it
Tubadji, A., Huang, H., & Webber, D. J. (2021). Cultural proximity bias in AI-acceptability: The importance of being human. Technological Forecasting and Social Change, 173, 121100. doi:10.1016/j.techfore.2021.121100
Cover Photo is "Artificial Intelligence" by sujins is in the Public Domain, CC0