NOVEMBER EDITORIAL

Social media spreading bias and hate more than ever

"Social media? Yeah, right! I thought being 'social' meant being nice, understanding, and open minded."

Posted Nov. 2, 2021

By Highlander Staff


Social media has been shown to strengthen people’s personal biases while letting people express hurtful opinions with little to no monitoring.

A worldwide increase in the rate of hate crimes and racist behavior has led many psychologists to draw a connection between social media platforms’ confirmation bias and the rise in racist and hurtful behavior. They argue that the deliberate design of social media platforms, whose algorithms tailor the content users see to the thoughts and beliefs they already hold, causes people to grow more outspoken in their opinions. Confirmation bias pushes users toward pre-existing views, strengthens bias, and creates polarization.

One example of a social media platform that tailors content entirely to a user’s personal tastes is TikTok. The For You Page is built from the videos a user likes; TikTok takes that information and configures the For You Page to a “side of TikTok,” which could include alt TikTok, straight TikTok, foodieTok, medical TikTok, DisneyTok, queer TikTok, homestead TikTok, cleaning TikTok, frogTok, paranormal TikTok, conspiracyTok, art TikTok, mental health TikTok, or many others. Based on the content a user likes, the algorithm shows them more content matching what they have already liked, as the sketch below illustrates.
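As a rough illustration of that feedback loop, here is a minimal sketch in Python. It is a toy model, not TikTok’s actual algorithm, which is proprietary; the video topics and the weighting scheme are invented for the example. The point is simply that each like tilts the next batch of recommendations further toward the same topic.

    import random
    from collections import Counter

    # Toy catalog of videos: hypothetical topics, not real TikTok data.
    VIDEOS = [{"id": i, "topic": t}
              for i, t in enumerate(["food", "art", "cleaning",
                                     "paranormal", "conspiracy"] * 20)]

    def recommend(liked_topics, k=10):
        # Weight each video by how often the user has liked its topic,
        # so frequently liked topics crowd out everything else.
        weights = [1 + 10 * liked_topics[v["topic"]] for v in VIDEOS]
        return random.choices(VIDEOS, weights=weights, k=k)

    # A user who likes just a few conspiracy videos...
    likes = Counter({"conspiracy": 3})
    feed = recommend(likes)
    # ...gets a feed already dominated by that topic; each new like
    # tilts the next feed even further toward it.
    print(Counter(v["topic"] for v in feed))

Run this repeatedly while feeding each batch’s likes back into the counter, and the feed narrows quickly; that self-reinforcing loop is the confirmation bias psychologists describe.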

Over the past twenty years, social media usage has steadily increased, prompting more and more restrictions. Every major platform now has a policy that users must agree to before gaining access, and that policy is usually upheld to some extent, though the more content people post, the less of it gets monitored. On many platforms, users can report content they find offensive or believe breaks community guidelines, and platforms have come to depend largely on those reports to find disallowed content. This is both helpful and harmful for a platform’s monitoring: user reports help it find guideline-breaking content, but users also report content that doesn’t break any guidelines.

One platform that consistently fails to monitor or uphold its guidelines is Facebook, along with the other platforms it owns, including Instagram. Reporting has shown that Facebook, worldwide, has failed to monitor posted content and take down what breaks its guidelines. Whether the content involves hate speech, violence, or racism, only around 5% of it is actually taken down, even though around 95% is found and flagged.

Facebook’s community guidelines prohibit violent threats against people based on their religion, though its follow-through on stopping such threats appears inconsistent. When Holly West, a regular Facebook user, saw a graphic post declaring that “the only good Muslim is a f****** dead one,” she flagged it as hate speech. Facebook declared the post and its accompanying photo, which showed a dead person, acceptable, and West received an automated message stating: “We looked over the photo, and though it doesn’t go against one of our specific Community Standards, we understand that it may still be offensive to you and others.” Yet Facebook took down another anti-Muslim comment, a single line stating “Death to the Muslims” with no accompanying image, after it was repeatedly reported.

ProPublica asked Facebook to explain its decisions on 49 items submitted by people who maintained that content reviewers had erred, either by leaving up hate speech or by deleting items that didn’t break guidelines. Facebook determined that 22 of the cases were the result of reviewer mistakes; in 19 it defended its rulings; in six it said the content did violate guidelines but hadn’t been flagged correctly, so it was never actually reviewed; and in the remaining two cases there wasn’t enough information to give a response.

Public outcry over social media platforms’ inaction has grown, drawing more scrutiny to how they moderate. Platforms have begun putting more effort into finding offensive posts and taking them down, but many guideline-breaking posts remain up. Many platforms, including TikTok, now take down content once a certain number of people report it, reviewing it only afterward. That review can take anywhere from a couple of days to a couple of weeks, which has upset users whose content was removed without cause. Many social media companies are finding that they receive backlash when they take down content as well as when they don’t. Some have begun hiring more content moderators to speed up the review process, and strengthening their guidelines so there is less room for misunderstanding.