With AI now publicly accessible, it is becoming clear that AI is inherently transphobic. This poses a risk to the progress already made, and the progress still to come, in the acceptance of the trans community.
AI pulls from sources that are already on the internet. In a society where, according to the Pew Research Center, only 47% support using a person's chosen name, a large percentage of those sources are transphobic. This is extremely harmful because AI is used to produce ideas, images, videos, and more. If the basis of these results is biased against LGBTQIA+ individuals, the oppression of these communities is further perpetuated.
Not only is AI used to produce writings, videos, and concepts for individuals and groups, but it is also used to talk to humans through phone calls and online chats. AI's harmful effects are not exclusive to the trans community: NPR reported that the National Eating Disorders Association had to take down one of its chatbots after it offered an individual dieting advice. These same AI-based technologies run the phone lines at medical offices, banks, and other businesses. Open Global Rights writer Ilia Savelev describes calling his bank only for the AI voice recognition system to fail to recognize his voice because it was not "male enough," an experience he calls an average one for transgender people.
A misgendering – possibly due to voice recognition – even occurred during our group's interviews. Our group used Otter.ai to record and transcribe our interviews, with Alyssa Zuesi speaking as the interviewer. She uses she/her pronouns but introduced herself before recording began, so this was not on the audio file. In the second interview, she spoke with Esme Miranda, who goes by they/them pronouns and said so during the interview. Despite this, the AI referred to the speakers as "Women" in its summary, misgendering Esme, who does not identify as a woman. Occurrences like this can be deeply hurtful to a person's self-esteem. Had our group relied on Otter.ai's summary, we would have misgendered Esme as well. Luckily that was not the case for us, but it would be easy for others to rely on AI in similar situations and end up misgendering someone.
In addition to voice recognition, facial recognition is also an issue. As LGBT Tech notes, "one study found that in over 30 years of facial recognition research, a binary model of gender was followed more than 90% of the time and treated as immutable in more than 70% of studies. The result is a technology that frequently misidentifies or misgenders, making both the digital and physical worlds less inclusive and less safe." Much facial recognition software is used to infer personal information and then send targeted advertisements. When the software infers incorrect information about an individual, it sends advertisements that are not meant for them. For example, a trans man may be inaccurately read as a woman by the AI and receive advertisements for bras. That constant reminder would be harmful to his mental health.