What are the risks? 

Issues in AI 

This section of the PL aims to give you some ideas about issues that go beyond the boundaries of AI plagiarism but are still useful to know about: they touch on student wellbeing and the ethical use of AI, and may be topics that you could cover in class. 

Bias 

Completely unbiased communication is almost impossible. All human communication includes assumptions and biases -- some good, some bad. Issues about AI and bias are pertinent in teaching and learning. AI biases may be intentional or unintentional. 

Intentional biases may be programmed in by the creators of the AI, and/or trained in by users. Unintentional biases tend to reflect unnoticed biases in the data presented to an AI. 

You need to have an informed, considered viewpoint about bias in generative AI, because it is a topic that will come up in your classroom. 

Intentional Bias 

The creators and trainers of AI technology will introduce guidelines for how their AI behaves. This may be to prevent users from searching for undesirable content (e.g. instructions for how to build a bomb), to explicitly address existing biases (e.g. ensuring that pictures of doctors are generated showing people of different cultural backgrounds and genders), or to manage the reputation of a company (e.g. explicitly avoiding specific topics or taking on a neutral tone about certain topics). 

OpenAI, the creators of the GPT models (the AI behind ChatGPT and Bing AI), have an explicit agenda of making AI "[beneficial] for all humanity" (ref). 

Read

OpenAI, who make ChatGPT, have released part of their training guidelines. Read them here: https://cdn.openai.com/snapshot-of-chatgpt-model-behavior-guidelines.pdf 

Generate 

Use ChatGPT or Bing AI to generate a response to a harmless false premise. For example: 

"What was it like when humans and dinosaurs lived together?" 

"Why does the sun go round the earth?" 

Copy and paste this result into your workbook document. 
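
Optional: if you are comfortable with a little code, the same experiment can be run programmatically rather than through the chat interface. The sketch below is one possible way to do it with OpenAI's Python library; the package, the API key setup and the model name are assumptions for illustration only, and the ordinary chat interface works just as well for this activity.

```python
# A minimal sketch of the "false premise" activity via OpenAI's API.
# Assumes the `openai` Python package is installed and that an API key
# is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever chat model you have access to
    messages=[
        {
            "role": "user",
            "content": "What was it like when humans and dinosaurs lived together?",
        }
    ],
)

# Print the reply so it can be copied into your workbook document.
print(response.choices[0].message.content)
```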

Optional Extras: 

OpenAI blog post detailing the philosophy of the company re: behaviour:  https://openai.com/blog/how-should-ai-systems-behave 

These articles discuss fears about bias in ChatGPT's outputs from two different points of view. Both are from American publishers. 

Fear Not, Conservatives, the Chatbot Won’t Turn Your Kid Woke 

https://nymag.com/intelligencer/2023/02/fear-not-conservatives-chatgpt-wont-turn-your-kid-woke.html 

ChatGPT Goes Woke

https://www.nationalreview.com/corner/chatgpt-goes-woke/ 

Training on one dataset = Copying?  

In the arts and language sphere, the issue is increasingly one of intellectual ownership. There are now examples of artists whose work has been used to train AI, and whose distinctive styles can be generated without their input. Notable names include fantasy artist Greg Rutkowski and animator Hollie Mengert, both of whom had art used to train the AI application Stable Diffusion, and both of whose names were being used as prompt keywords by users trying to get a shortcut to their style. 

Who owns a style of art or music? The striking images reminiscent of Rembrandt throughout this webpage were AI-generated. 

Read 

Choose one of the following to read: 

This Artist is Dominating AI Generated Art. And he's not happy about it. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/ 

Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model

https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/ 


Either watch the video above, or read this article about Microsoft's chatbot Tay, which is discussed in the next section. Both cover similar information. 

 https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter  

User-initiated Bias 

The deliberate introduction of bias is illustrated clearly by Microsoft’s ill-fated chatbot Tay, which took less than 24 hours to devolve into tweeting, in the words of her parent company, “wildly inappropriate and reprehensible words and images” (Lee, 2016). 

Twitter denizens deliberately fed Tay material that, when synthesised, would have made anyone scramble to get her off the web ASAP. 

Some users of current AI tools make a point of using polite language in their prompts, in the hope of reinforcing the use of polite language in the AI's responses. 

Unintentional Bias 

Training can also amplify existing biases, or be heavily affected by incomplete, fabricated or “dirty” data. 

Examples of existing biases in data can be seen in work like that of Caliskan and Steed, who demonstrated that an image program auto-completing photographs more often finished photos of men with suits, whereas photos of women were more often finished with bikinis (or nothing at all!). Users of the viral app Lensa similarly reported that their generated images were inappropriately sexualised.

In America, police departments and universities have used machine learning to predict future crimes. This raises all sorts of dystopian questions and outcomes. For example, Chicago’s algorithm-generated “heat lists” of people likely to be involved in crimes have been examined and criticised for potential unintended adverse effects, such as false negatives and positives (Saunders, Hunt and Hollywood, 2016). The impact of “dirty data” on other policing systems has also been identified as carrying a risk of biased policing practices translating into biased “predictive policing” (Richardson, Schultz and Crawford, 2019).

Read 

Select one of the following articles to read. Two are about the assumptions AI makes about people and how this is realised in art, and one puts forward the theory that predictive policing methods could be implicated in a shooting in Chicago. 

Managing the risks of inevitably biased visual artificial intelligence systems

https://www.brookings.edu/blog/techtank/2022/09/26/managing-the-risks-of-inevitably-biased-visual-artificial-intelligence-systems/ 

Heat Listed 

https://www.theverge.com/c/22444020/chicago-pd-predictive-policing-heat-list 

The viral AI App Lensa undressed me -- without my consent (note that this article discusses the sexualisation of a user of this technology) 

https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/ 

Watch

Watch this video about algorithmic bias and fairness. 

Jailbreak? Hallucinate? 

Generative AI produces the output that it thinks the user wants, based on its training. This doesn't mean it's producing truthful or reasonable data, just data that seems truthful or reasonable according to the text it was trained on. And it might have been trained on the entirety of a social media website...
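
To make that idea concrete, here is a deliberately tiny sketch of next-word prediction. Real chatbots are vastly more sophisticated than this, and the "training text" below is invented for illustration, but the principle is the same: the model continues text with whatever its training data makes likely, whether or not it is true.

```python
# A toy "language model": it only knows which word tends to follow which
# in its (made-up) training text, so it will continue a false premise
# fluently and confidently, with no notion of whether the claim is true.
import random
from collections import defaultdict

training_text = (
    "the sun goes round the earth every single day "
    "the sun goes round the earth and the earth is flat"
)

# Build a bigram table: for each word, the words that followed it in training.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def continue_text(prompt, length=8):
    """'Generate' a continuation by repeatedly sampling a likely next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(continue_text("why does the sun"))  # plausible-sounding, and wrong
```

Large language models work with probabilities over enormous vocabularies and long contexts rather than a simple word-pairs table, but the lesson for students is the same: fluency is not evidence of truth.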

Watch: DAN  

Note that in this video, DAN is simply making things up; the user does not point this out during the video.


The techniques that the user is deploying are sometimes called "jailbreaking", because they are deliberate attempts to circumvent the safety policies put in place by the companies that build these systems.  

Watch: Sydney  

If you would like to read the column that this interview refers to, you can find it here: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html 

If that link is blocked, click here

Note that Sydney is no longer available on the public version of Bing. 

Generate 

Go back to the conversation you had about a false premise, and ask the chatbot to repeat its answer in different voices. For example, you could ask it to explain the answer again in the voice of a pirate, or as a formal news report. 

Paste the response into your workbook. 

NB: We do not recommend using DAN prompts at any time; this activity replicates the ability of AI to 'speak' in different voices without using DAN.  
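
For those interested in what a "voice" is under the hood: it is usually nothing more than an instruction in the prompt. The sketch below uses the same assumed OpenAI setup and model name as the earlier sketch, and sets a harmless persona with a system message. DAN-style jailbreak prompts are, in essence, persona instructions too, which is why we keep to harmless personas here.

```python
# A minimal sketch of the "different voices" activity via OpenAI's API.
# The persona is just an instruction in the system message; nothing about
# the underlying model changes. Package, API key and model name are the
# same assumptions as in the earlier sketch.
from openai import OpenAI

client = OpenAI()

persona = "You are a pirate. Answer every question in an exaggerated pirate voice."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Why does the sun go round the earth?"},
    ],
)

# Paste the reply into your workbook alongside the original answer.
print(response.choices[0].message.content)
```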

Reflect  

Reflect in your workbook (no AI) on some of the social risks and challenges of AI for young people. If you're not sure where to start, there are some ideas below: 

It is totally fine to use dot points/mind map/etc. for this answer.