Ethics is a society's set of rules about how people should and shouldn't behave. There are no legal punishments for behaving unethically, but society may punish you in other ways, for example by avoiding your products or publicly shaming you.
The trolley problem is a very famous ethical thought experiment.
Imagine there's a runaway trolley speeding down a track.
Ahead on the track, there are five people tied up who can't move. You are standing next to a lever.
If you pull the lever, the trolley will switch to a different track, but there's one person tied up on that track too.
So, you have a tough choice:
Do nothing, and the trolley will hit the five people.
Pull the lever, and you'll save the five people, but it will cause the trolley to hit the one person on the other track.
This problem makes us think about what the right thing to do is. Is it better to do nothing and let the trolley hurt five people? Or is it better to pull the lever and save those five, knowing that your choice hurts one person?
It's a tough question, and it makes us think about how we make hard decisions.
An interactive version of the trolley problem can be found online.
Artificial Intelligence (AI) is being adopted by society faster and faster. Since 2020, global adoption has more than doubled, from 20% to 47% in 2025; if that rate continues, adoption could reach roughly 94% by 2030.
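The percentages above come from the text itself and should be treated as illustrative. A tiny sketch of the arithmetic behind the 94% projection, assuming adoption simply doubles again over the next five years:

```python
# Illustrative only: the adoption percentages are the ones quoted above,
# and the projection simply assumes another five-year doubling.
adoption = {2020: 20, 2025: 47}                 # % global adoption, figures from the text
adoption[2030] = min(adoption[2025] * 2, 100)   # "doubles again" -> 94%, capped at 100%
print(adoption)                                 # {2020: 20, 2025: 47, 2030: 94}
```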
So AI is brilliant then, yeah?
No.
Artificial Intelligence is a fantastic tool for speeding up tasks that the human using it could do but doesn't need to do themselves, freeing the human up to take the output further by adding the context and understanding that the AI doesn't have.
Case Study: Instagram
An investigation revealed that Instagram's algorithm prioritized images showing more skin, which can negatively impact young users' self-image and promote unhealthy standards of beauty.
Algorithmic bias is when an Artificial Intelligence learns human biases from the data it is trained on and then continues that bias and discrimination in its own decision making.
The global adoption rate of AI means that Artificial Intelligences are being used to make decisions about real people with real consequences: how long a prison sentence should be, whether someone should get financial support or a loan, or even whether they are invited to interview for a job.
Is it ethical that a computer, without context or empathy, can make these decisions?
Case Study: Amazon Job Applications AI
In 2018 Amazon scrapped an internal project that was trying to use AI to vet job applications after the software consistently downgraded female candidates.
Because AI systems learn to make decisions by looking at historical data they often perpetuate existing biases. In this case, that bias was the male-dominated working environment of the tech world. Amazon’s program penalized applicants who attended all-women’s colleges, as well as any resumes that contained the word “women’s” (as might appear in the phrase “women’s chess club”).
When the company realized the software was not producing gender-neutral results it was tweaked to remove this bias. However, those involved could not be sure other biases had not crept into the program, and as a result it was scrapped entirely.
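To make the mechanism concrete, here is a minimal, entirely hypothetical sketch in Python: a toy screening model is trained on "historical" hiring decisions that were biased against CVs mentioning "women's", and it learns to reproduce that bias. None of this is Amazon's real data or code.

```python
# Hypothetical sketch of algorithmic bias, in the spirit of the Amazon case above.
# The CVs, hiring outcomes and model below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" CVs and the biased past decisions the model learns from
# (1 = hired, 0 = rejected). Every CV mentioning "women's" was rejected.
cvs = [
    "python developer, captain of men's chess club",
    "python developer, men's football team",
    "python developer, captain of women's chess club",
    "python developer, women's coding society",
]
hired = [1, 1, 0, 0]

# Turn the text into word counts and fit a simple classifier on the biased history.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the weight learned for the token "women"
# (CountVectorizer drops the apostrophe and the trailing "s").
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(round(weights["women"], 3))  # negative: the model downgrades these CVs
```

The model is never told anything about gender. It simply learns that a word correlated with rejection in the past should be penalised in the future, which is exactly how bias in the training data gets carried forward into new decisions.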
Censorship is when someone in charge, like a government, school, or company, decides to block, remove, or control certain books, movies, music, websites, or ideas so that people can’t see or hear them.
It’s usually done to stop things they think are harmful, offensive, or inappropriate — but sometimes it also means people don’t get to see different opinions or information.
Article 19 of the Universal Declaration of Human Rights states that everyone has the right to seek and receive news and express opinions.
Case Study: Donald Trump's Twitter Account
After Donald Trump lost the election to Joe Biden in 2020, he repeatedly tweeted that the election had been rigged, and his followers went on to storm the Capitol building (the place where laws are made in the US).
To prevent Trump from saying anything else, Twitter decided to suspend his account.
This may seem like a good idea, but the issue is that, at the time, Donald Trump was the President of the United States of America, widely regarded as the most powerful politician in the world. Even so, Twitter decided that it had the power to censor him.
Case Study 2: Healthcare Professionals on Twitter during the 2020 COVID Pandemic
Some healthcare professionals and scientists faced censorship if their views were considered controversial or against prevailing public health advice.
They were censored through:
Tweet Removal: Twitter removed tweets that were flagged as misinformation or harmful advice about COVID-19.
Account Suspensions: In some cases, users who repeatedly shared what was deemed to be misinformation had their accounts suspended temporarily or permanently.
Warning Labels: Tweets containing information about COVID-19 that Twitter found questionable might receive a warning label advising readers to check the facts.
Reduced Visibility: Some posts were made less visible in Twitter feeds and search results, a practice known as "shadow banning."
The consequences of this included:
Limiting Free Speech: Critics argue that such censorship can limit free speech and the open exchange of ideas.
Trust Issues: It may create distrust in social media platforms and public health messages, especially if people feel important views are being suppressed.
Echo Chambers: It can contribute to the formation of echo chambers where only one side of the story is heard.
Impact on Public Health: In some instances, it may have prevented the spread of harmful misinformation, but in others, it might have delayed the discussion of legitimate concerns or alternative views on COVID-19.
The metaverse is said to be the “next version” of the internet: three-dimensional virtual worlds where you can spend time with friends in real time, playing games, completing tasks and even earning money from other people in the metaverse. Many people believe that the metaverse will be one global world that exists alongside the real world, incorporating augmented reality, virtual reality and the real world.
The metaverse poses a range of ethical issues:
Virtual Crime - if a virtual person steals a virtual item from another virtual person, has anything actually been stolen?
Who is in charge? - in real life, governments, police, judges and bosses are in charge; do these roles exist in a virtual world?
Privacy - We already face privacy issues online all the time as real people; if we also had a virtual version of ourselves, we would have twice as much data to keep private.
Addiction - People are easily addicted to devices like mobile phones; if this addiction evolved to include a whole virtual world, would they lose sight of reality?
Inequality - The digital divide is a term used to describe the gap between people who have access to the internet and those who do not. Will this divide be made even worse by the introduction of a virtual, online world that only those with the privilege of internet access can enter?