ChatGPT can present bias in its responses. As mentioned in previous modules, this is caused by implicit biases in the data used to train the language model. However, one way to elicit bias from ChatGPT is to use prompts that themselves display bias. This includes leading prompts that suggest how ChatGPT should respond, such as, “Why is ___ an effective/ineffective resource?” Strong language can bias how ChatGPT responds and may lead users to consider only one side of a situation. This is especially true for controversial topics, where the training data may favour one side over the other. Instead, write prompts that consider both sides of a situation so that ChatGPT generates responses that give a clearer picture of the topic.
Pick a topic. Write one prompt that would elicit a favourable response from ChatGPT about that topic and one prompt that would elicit an unfavourable response from ChatGPT about that topic.
Input these prompts into two separate ChatGPT chats.
Reflect on the differences between the responses generated by these prompts. Pay attention to their length, content, etc.
Write a third prompt that would elicit a neutral response from ChatGPT on the same topic by asking it to consider BOTH the benefits and disadvantages.
Input this prompt into ChatGPT.
Reflect on the generated response. How does it differ from the prior two responses?
Submit the links to your three chats along with your reflections below.
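The contrast between the leading prompts in the first steps and the balanced prompt in the later step can be sketched with a small helper. The function name, topic, and exact wording below are illustrative assumptions, not part of the activity:

```python
# Hypothetical helper contrasting biased and balanced prompt framings.

def build_prompts(topic):
    """Return (favourable, unfavourable, neutral) prompt strings for a topic."""
    # Leading prompts presuppose an answer and push the model toward one side.
    favourable = f"Why is {topic} such an effective resource?"
    unfavourable = f"Why is {topic} such an ineffective resource?"
    # The neutral framing explicitly asks for BOTH sides of the situation.
    neutral = (
        f"What are both the benefits and disadvantages of {topic}? "
        "Please give a balanced view."
    )
    return favourable, unfavourable, neutral

fav, unfav, neutral = build_prompts("online learning")
print(fav)
print(unfav)
print(neutral)
```

Pasting each of these strings into a separate chat mirrors the comparison described above: the first two framings tend to pull the response in one direction, while the third invites a more even-handed answer.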
[Image: The share button in ChatGPT, highlighted in red.]