When we use large language model (LLM) platforms like ChatGPT, it's easy to type in questions and take the answers at face value. However, doing so can be risky. Not only are most LLMs trained on finite datasets, but those datasets are often curated by humans whose own innate biases can become unwittingly integrated into the system. While we may not be able to identify for certain whether an answer given by a chatbot is biased one way or another, awareness that bias could exist, along with strategies for dealing with it, is an essential part of AI literacy and usage.
To help you understand what AI bias is and how to combat it, we've posted some videos, links, and resources below, as well as questions you can ask yourself to check for bias and, at the very least, account for it while using AI to aid your coursework. Remember, while the AI Writer Toolbox can help you learn more about applications of AI in writing, faculty will have different policies about the use of AI in the classroom. If you are ever in doubt about acceptable applications of AI tools in a specific class, the best thing to do is ask your instructor.
LLM platforms often fail to provide elaborate or nuanced answers because they draw on incomplete or biased information. When asked complex questions (e.g., "How can we end world hunger?" or "How should we solve housing crises?"), AI programs may generate answers biased toward a particular perspective without considering other possible options. These answers may also be only vaguely accurate and lack nuance, as discussed in the adjacent video.
Representational harm can affect your writing by excluding diversity or misrepresenting marginalized identities. Your writing should reflect the diversity of your intended audience, so be wary of how AI platforms may reproduce assumptions or stereotypes, especially about particular groups.
Representation is crucial when it comes to writing. From literature reviews to creative narratives, representing people, places, and things accurately and fairly strengthens our writing. However, using AI for writing and other projects may result in representational bias, whether intentional or unintentional. For example, AI may produce images that reduce cultures, nationalities, and ethnicities to stereotypes, essentially erasing the diversity of these groups, as detailed in this article.
Reductive results produced by AI can create representational harm: the reinforcement or exacerbation of stereotypes and biases based on race, gender, sexuality, and other identities. Watch the London Interdisciplinary School (LIS)'s YouTube video to learn more about representational harm in AI image generation.
In addition to bias toward certain beliefs or representations, AI programs may also provide false or entirely fabricated information (also known as "AI hallucinations"). AI programs tend to tell users what they want to hear, creating an echo chamber of sorts. This can result in people incorporating false information into their existing biases.
AI bias stemming from false information, combined with the credibility users tend to grant these programs, runs the risk of being replicated in writing and research. You might type in a prompt, generate a response, and incorporate biased information into your work without even recognizing it. This could range from fabricated experimental results to inaccurate summaries of literature to something as simple as a request for a new fictional character.
When using AI for writing, consider the answers provided carefully, especially for assignments tackling social or political topics. Take other viewpoints into account, research key aspects further on your own, and treat AI responses as blanket statements with little elaboration. Bias can (and likely will) affect your writing in the long run, so be careful about using AI beyond surface-level discussion without also using your research skills to your advantage. For more on how to research using AI, take a look at our AI for Research section.
If you plan on using AI platforms for your coursework or papers, here are some questions you can ask yourself to account for biases and gaps you may need to address or eliminate in your research process.
What biases may appear in my questions or prompts?
What kind of responses am I expecting based on my biases?
How does the AI address biases and gaps within its answers (if at all)?
What kind of biases appear in my writing that the AI may pick up?
How effectively have I addressed possible biases in my writing when citing or referencing AI?
One of the things that you are expected to learn while in college is how to make your writing and communication accessible to different groups of people. While this involves many aspects, one that applies universally across any kind of writing is making sure your writing is inclusive and appealing to audiences who may have a different background from you. This handout contains some tips and "check-ins" you can use to help ensure that your writing is inclusive.
Once you've read "Inclusive Writing: Part I," you might wonder, "Well, how can I use better words and terms to get my point across?" This handout addresses that question, providing potential terms and examples that can help make your writing more inclusive. While it's not an exhaustive list by any means, it's a good place to start and to learn about some language you can use to show respect for different experiences and backgrounds in your writing.