Bigotry, Harassment, and Radicalization on Tech Platforms and the Workers Tasked with Moderating Them

Description

In this session of Learning Club, Rebecca Lewis will join us for a discussion of her research.

In the mid-2000s and early 2010s, tech platforms such as Facebook, Twitter, and YouTube were hailed as “liberation technologies” that could help democratize media and spur egalitarian movements throughout the world. In the wake of the 2016 U.S. presidential election, this view has faded considerably, as it has become clear that social platforms are also valuable tools for disinformation, propaganda, and radicalization. Despite the recent focus on these issues, they are not new: from the early days of the internet, white supremacists and other reactionary groups have used the tools of digital media to spread their messaging and recruit new members.

Leading up to the 2016 election, a loosely connected network of white nationalists (the alt-right), “gamergaters,” men’s rights activists, and conspiracy theorists manipulated social media platforms and mainstream news outlets to spread disinformation and recruit new members. While much of this activity takes place in the so-called “dark corners of the internet,” such as 4chan and 8chan, these groups rely on mainstream social media platforms to broadcast their ideas to wider audiences and gain greater influence.

At the same time, reactionary and far-right content creators have adopted the tools of social media influencers to build an alternative news ecosystem on YouTube. Using vlogging, personal branding techniques, search engine optimization strategies, and direct interaction with their audiences, these influencers have effectively propagandized to viewers, helping to radicalize young people and normalize bigoted ideas in the process. In this ecosystem, a range of mainstream conservative and libertarian influencers, self-help gurus, and gaming streamers can ultimately act as gateways to more extremist content. YouTube’s recommendation algorithm helps this process along, but so do its social networking capabilities and its monetary incentives for content creators.

In addition to providing platforms on which hate and harassment can thrive and spread, these corporations are hiring thousands of workers to moderate the most offensive content. These workers receive little to no support from their contracting agencies, and even less from the name-brand companies for which they work. They are underpaid, undervalued, and overworked, with barely enough time for restroom breaks during their workday and little to no counseling or therapy to help them cope with the traumatic content they moderate daily. In an information ecosystem filled with racism, misogyny, harassment, and disinformation, how can we grapple with the role of social media platforms and the implications for content moderators?

Readings

Discussion Questions

  1. What are the features and qualities of technology platforms that make them susceptible to manipulation by harmful actors? What features incentivize these types of behavior?
  2. How should technology platforms negotiate their roles as gatekeepers of the public sphere? What responsibility do tech companies have to curb the racism and harassment that occur on their platforms?
  3. How do we begin to grapple with issues on platforms that extend beyond algorithms and other technological fixes?
  4. What might solutions to these issues look like? What are the labor implications for these solutions, particularly those that rely on increased content moderation?

Additional Resources