Facebook’s Accountability Crisis: The Factors Influencing Modern Digital Censorship

by Joseph Gabriel Brasco 


Facebook and other Big Tech companies have a problem on their hands: how can they simultaneously uphold the free expression of billions of users while limiting the proliferation of destructive content? The main governmental protection allowing social media companies to police content is Section 230 of the Communications Decency Act of 1996. Facebook has leveraged Section 230 to create a near-monopoly on the digital dissemination of information. The dual “sword and shield” prongs of Section 230 ensure that Facebook, and other companies that claim protection under the law, can censor information at their discretion while also enjoying the protections of a neutral platform provider. As it stands, Facebook holds too much power in the digital age thanks to the ill-defined guidelines set in law by Section 230. If we wish to see a clearer and freer landscape of online communication, Section 230 must be better understood and regulated.



Introduction


In 2019, Facebook (now “Meta”) founder and CEO Mark Zuckerberg delivered a speech to an audience at Georgetown University.1 The speech centered on the issue of balancing free speech and content moderation policies. Zuckerberg began by touting an example of how his company had successfully suppressed the proliferation of terrorism on his platform.2 Content involving acts of terrorism falls under the umbrella of what Zuckerberg classifies as “real danger.”3 That is, content that leads to or showcases violence or danger in the real world.4 He also provided other examples of the types of content he seeks to remove from his platform, such as self-harm and suicide being broadcast by users.5 These extreme cases represent types of content that most users would agree should not remain on the platform; the problem arises when Zuckerberg shifts the focus from terrorism and suicide to topics such as harmful medical advice. He gives an example in which a user offered misinformed advice on what to do when having a stroke, which hardly seems comparable to sharing content of terrorism and suicide with the world.6 The dilemma for those in charge of social media content moderation lies in deciding where to draw the line on what sorts of content will or will not be tolerated online. Unfortunately, the lines that exist in the modern landscape of digital censorship have become painfully vague, causing confusion and frustration for both users and moderators of these massive platforms.


When he gave this speech, Mark Zuckerberg, perhaps inadvertently, presented many of the pitfalls he and other social media company leaders have encountered when deciding their platforms’ content moderation policies. In one breath, Zuckerberg praises how his platform has “decentralized power by putting it directly into individuals’ hands,” while also explaining the need for certain individuals to be censored.7 On its face, it seems hypocritical of Zuckerberg to brag about how his service gives the little man a voice, yet to silence that same voice when it steps over a poorly defined line. To avoid oversimplifying this dilemma, it must be considered that Facebook contends with the dizzying prospect of the unfettered expression of millions of users, some of whom will choose to share reprehensible ideas or content. Currently, Facebook is reckoning with two competing obligations: protecting the free expression of its users, and restricting the expression of those who use that same freedom to do serious harm. Zuckerberg attempts to provide an answer for these dueling concerns through his “real harm” definition. With this definition, Facebook has attempted to mark out content that is objectionable enough to warrant removal from the platform.8


An integral aspect of the debate over censoring digital content is deciding who is ultimately responsible for the harm done: the platform that hosts the content or the third-party user who created it? This question is at the core of Section 230 of the 1996 Communications Decency Act, a key component of the modern discussion of social media censorship.9 The issue of who is ultimately responsible for content posted on social media is hotly debated, with some maintaining that users are responsible for their own content and others holding the feet of social media companies to the fire. Section 230, however, provides protection for Facebook to operate as a platform, a classification of sites that are not responsible for third-party content.10 Thus, to hold social media companies responsible for censoring third-party content, Section 230’s original language would need revision.11

Facebook and other Big Tech platforms should not continue to exist in an ambiguous limbo where they blur the line between platform and publisher. These companies, which hold a near-monopoly on the dissemination of information, enjoy the protection of a platform while using the editorial control of a publisher. Currently, Section 230 allows Big Tech platforms like Facebook to censor third-party content so long as the content is deemed “obscene, lewd, lascivious, filthy,” and so on.12 One obvious issue with this measure is that what counts as obscene, lewd, lascivious, or filthy can take on various definitions depending on whom you ask. When this question is left to a single entity to decide, the whims of Facebook steer the public discussion unfairly; Silicon Valley does not hold the opinions of every American. For change to happen, there must be reforms to Section 230, which currently allows Facebook to act simultaneously as platform and publisher.

A Brief History of Media Censorship


Balancing freedom of speech against the moderation of content is not an issue unique to our current moment. In his article “How to Fix Social Media,” Nicholas Carr outlines historical precedents for governmental regulation of media.13 The issue of “real harm” raised by Mark Zuckerberg can be traced all the way back to the radio operators covering the sinking of the Titanic. Carr describes how amateur radio messengers cluttered the airwaves after the sinking, causing confusion among civilians and responders as to what had happened. Radio users who interrupted search-and-rescue efforts by claiming the ship remained afloat were punished for it.14 The Radio Act of 1912 followed, with provisions fining those who broadcast false information and restricting amateur radio users to lesser bands of radio communication, reducing their sphere of influence.15 In this instance, governmental forces stepped in to curb the free speech of amateur radio users in the name of the public good. Carr also highlights a case in which a citizen filed an FCC complaint against a radio station whose DJ played an obscene bit by comedian George Carlin on air in 1973.16 The case found its way to the U.S. Supreme Court, where Justice John Paul Stevens held that since broadcasters occupy a “pervasive presence in the lives of all Americans,” their speech requires extraordinary consideration, unlike the private speech of average Americans.17


As the internet became more accessible to the average person, questions arose over who was responsible for online content: the content creator or the host of the website? One of the earliest examples of this distinction becoming an issue emerged when the online service Prodigy was sued in 1994 over fraudulent financial information posted to its message board by third-party users.18 Although Prodigy had neither developed nor encouraged the creation of the fraudulent content, it lost the case before the New York Supreme Court.19 The court decided that because Prodigy had previously censored explicit language on its message board, it was removed from the classification of a neutral platform provider and placed into that of a publisher. This distinction between platform and publisher was the key issue addressed in Section 230 of the Communications Decency Act.20


Section 230 is the main legal clause governing the ability of internet firms to moderate digital content. Many of the disagreements over the rights of internet firms to crack down on content are contained within the “sword and shield” prongs of the clause.21 These two prongs frame the modern understanding of an internet firm acting as either a platform or a publisher. Ellen P. Goodman and Ryan Whittington explain this concept in their article outlining Section 230, writing that “unlike distributors, publishers exercise a high degree of editorial control. The upshot is that if the New York Post publishes an author’s defamatory article, it is legally responsible for the publication of that content, while the newsstand owner who sells and distributes copies of the paper is not.”22 Platforms like Facebook are able to exist without being held responsible for obscene content posted to their sites by maintaining that they did not make the content; they simply provided the platform where it was shared. Had its case been tried after the inception of Section 230, Prodigy likely would have been spared defeat, as it could have accurately claimed that the content was the responsibility of third-party users rather than its own. Key to the Prodigy case, though, is the court’s decision that the company’s previous censorship of explicit language removed it from the realm of a platform and made it a publisher. This distinction has blurred since the inception of Section 230, with platforms now abusing the wording of the law to claim protection as platforms while exercising the editorial control of publishers.

The Section 230 Dilemma


It has been argued that legislating responsibility for online content is necessary at a time when the freedom to upload whatever content one wishes has been granted to billions of users. The power social media gives users to reach the masses is explained in Jessica Taylor Piotrowski and Patti Valkenburg’s work “Social Media.”23 They discuss the concept of “affordances,” defined as “the possibilities that objects in our environment offer us.”24 The current landscape of social media was described well by Mark Zuckerberg when he explained how his platform decentralizes power and gives it to individuals, who now have a voice and reach equal to those of large corporations.25 Piotrowski and Valkenburg outline the specific affordance of scalability, defined as the power of users to “choose the size and nature of their audience.”26 This definition has become more complicated with the advent of algorithms and the potential for content to go viral online. Facebook offers the potential for what a user perceived to be an intimate post to become the business of millions. The ability of content on social media to spread like wildfire presents challenges for those trying to limit the reach of destructive material.27 The very reason social media has become so successful, its ability to let users reach massive audiences, has simultaneously emerged as its greatest challenge: what happens when users reach massive audiences with destructive material?


The “sword and shield” prongs of Section 230 were meant to protect neutral internet platforms from having to take responsibility for content they did not create, while allowing the sensible removal of clearly objectionable content. One of the main drafters of Section 230, U.S. Senator Ron Wyden, outlined these two prongs and the unique protections they grant internet firms.28 The “shield” prong states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”29 In other words, platforms that simply provide a place for others to post their own content are not to be held responsible for what is posted on their sites. This is the aspect of Section 230 most people are familiar with: the protection allowing services to act as neutral platform providers, free from the responsibilities of a publisher. When lawmakers like U.S. Senator Ted Cruz call out Mark Zuckerberg in hearings, they are often pointing out that Big Tech companies that censor users are acting like publishers and should therefore forfeit their protections as neutral platform providers.30 While claims like these have strong appeal, the legal argument is undermined by the other half of Section 230: the “sword” prong.


The “sword” prong of Section 230 insulates platform providers from liability stemming from “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considered to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”31 Despite the plethora of adjectives, the clause suffers from a lack of clear definition. To return to Mark Zuckerberg’s warning about dangerous content on his platform, his scale of harmful content ranged from terrorism and suicide to misinformed medical advice.32 Although each has the potential to do harm, a distinction should be made between deliberate violence and an instance of factual inaccuracy.



The power granted to internet firms through this prong of Section 230 has fallen victim to the times, as it would have been impossible in 1996 to picture the reach of a single entity like Facebook. Now massive firms with a near-monopoly on digital information have the power to throttle content at their discretion while simultaneously enjoying the protections of a neutral service provider. The problematic wording of Section 230 seems to give platform providers the final word on what can be lumped into the category of objectionable content. Additionally, challenging the “good faith” standard for removing this type of content is a difficult proposition; how can one truly determine the intention behind Facebook censoring a piece of content? Those who wish to see change in the way social media censors its users should not focus simply on the wrongdoing of Big Tech platforms like Facebook, but also on the broad protections of Section 230 those platforms use to manipulate users free of consequence.33


While many concerned users chastise Facebook for acting too much like a publisher, there is also a contingent who urge it to take more editorial control in the name of the public good. Authors like Benjamin Cramer argue that since Facebook has provided the platform for users to harm others, it should own up to and fix the problems its presence has indirectly caused. Cramer defends this position:


Recent tragedies and controversies - ranging from the use of Facebook to organize the mass murder of the Rohingya people of Myanmar, to the rise of far-right xenophobia in America, to the live streaming of a terroristic mass shooting in New Zealand - have inspired calls for more responsibility among social media platforms, along with frustration when those companies cite the liability shield of section 230.34



Although Cramer’s examples are chilling, and the potential for malicious content to proliferate widely online is sobering, it is a dangerous proposition to encourage social media companies to become more autocratic. Urging social media platforms to exercise the sword provision of Section 230 while maintaining their identity as neutral service providers grants them too much power. Platforms like Facebook should not be able to exercise editorial control while retaining the protection of a neutral platform provider. Cramer’s intention to protect those who have been harmed by online content may be pure, but his solution will only result in further forfeiting our freedom into the hands of the powerful few who run these Big Tech platforms.


Battling a force like Facebook is difficult considering how many of its censorship wrongdoings are actively withheld from public knowledge. The act of keeping internal actions secret from the general public has been referred to as creating “black boxes.”35 This concept was brought into the realm of digital media in Philip Di Salvo’s “Leaking Black Boxes,” which outlines alleged instances of Facebook using these practices. Di Salvo elaborates on the black box concept by quoting Bruno Latour, who describes black boxing as occurring when “technical work is made invisible by its own success.”36 It is inarguable that Facebook has been successful, and the sheer volume of content proliferating on its site makes the prospect of moderating all of it daunting.


Nevertheless, while it is understandable that some content will always slip through the cracks, accounts from former employees cast a suspicious light on how content moderation is really handled behind the scenes of Facebook’s black boxes. Before being ousted from the company in 2020, Facebook employee Sophie Zhang posted a 6,000-word farewell letter to the company’s internal communication channel claiming that Facebook knowingly refused to censor harmful content.37 She claimed that “Facebook has allowed major abuses of its platform in poor, small and non-Western countries in order to prioritize addressing abuses that attract media attention or affect the U.S. and other wealthy countries.”38 Although Facebook denied these accusations, they give cause for concern about allowing a single entity, insulated from responsibility, to be the judge of what deserves to stay online. Di Salvo points out that although Facebook is a private company, it has become the “equivalent of a public square” in our digital age, serving a “de facto civic function.”39 If Facebook has indeed become the modern public square, then it should not be the one dictating the free flow of information among the general public.

Facebook’s Oversight Board: A Disingenuous Solution


Some of the pressure of being held responsible for all the content posted on Facebook has gotten to Mark Zuckerberg. In 2019, Facebook announced that it would institute an external Oversight Board to oversee and make the final call on Facebook’s content moderation decisions.40 Author Kate Klonick described the Oversight Board as serving “an oversight capacity for Facebook’s appeals process on removal of user content, as well as in a policy advisory role.”41 According to Facebook, it is “bound by all decisions by the board,” as well as “required to give a public explanation as to why it has or has not decided to follow that recommendation.”42 This second point would seem superfluous if Facebook were truly being transparent in its commitment to being fully bound by the decisions of what Zuckerberg refers to as Facebook’s “Supreme Court.” If that were really the case, why would a provision for forgoing the Board’s decisions be necessary?43 By instituting this review board, Facebook has made it so that it can choose to edit content and then shift the backlash and responsibility for the final decision onto its external review board. In effect, the external board can be viewed as the ultimate scapegoat for Facebook, a perfect institution onto which it can dump the consequences of its decision making. The Board is described as “a board of 40 people at most who will meet certain professional qualifications - ‘knowledge,’ ‘independent judgment and decision making,’ and ‘skills at making and explaining decisions’ - while also bringing diverse perspectives.”44 As per usual, these descriptors are painfully vague, and they fail to explain the confusing creation of an external review board hand-selected by the heads of Facebook, which is, in effect, merely a bloated extension of the company.


The Facebook Oversight Board was officially established toward the end of 2020 and quickly faced a major decision regarding the removal of former President Donald Trump following the events of January 6th.45 Facebook first decided to remove content that Trump had posted to the site during the incident; later, it banned his account and access to the platform indefinitely.46 Whether one agrees with the decision or not, the fact that Facebook removed the ability of the sitting president of the United States to voice his opinion on social media was a significant exercise of power. This instance was an early and indicative test of how the Oversight Board would be used. With the institution of the Oversight Board, Facebook gave itself an opportunity to make a controversial call and defuse the backlash by claiming merely to be following the advice of an unbiased third party. Kate Klonick put it best when describing what the Board would need to do to gain the trust of Facebook users, writing that “that process will not begin until users see Facebook make decisions recommended or mandated by the board that are in user’s interests but against the company’s immediate best interests.”47 The entire point of the Board’s creation is Facebook’s own benefit, as it provides another layer of protection from owning responsibility for moderating decisions, in addition to the sword and shield prongs of Section 230. Facebook is now in an all-too-powerful position, exercising editorial control while still enjoying the protection of a platform under the rules of Section 230. The vague writing of Section 230 has made it so that Facebook can act as a sort of super-platform that is not culpable for third-party content while also having the power to remove content it deems inappropriate. The Oversight Board only exacerbates this powerful position, as it provides cover for those making the decisions on content control.

Decentralized Power


An understandable reaction to the perceived abuses of companies like Facebook is simply to remove oneself from these services. However, the prospect of eschewing social media entirely is a challenging one in our digital age, given the shortcomings of competitors. Enter the dark web. Robert W. Gehl conducted research, published in 2016, in which he accessed a dark web social network (DWSN) and gathered data on how and why users aggregate in these spaces of the internet.48 The dark web consists of corners of the internet that cannot be reached through traditional browsers or indexed by search engines like Google.49 It is accessible only through the Tor browser, which allows users to reach sites with .onion domains as opposed to .coms, a cheeky reference to the layers of anonymity granted to those who access these sites.50 Complete anonymity of users is the core feature of communication via dark web .onion servers, and it is encouraged by the moderators of the dark web’s social networks.51 Gehl outlines the reasoning behind using the dark web here:


The central claim of this essay is that the DWSN is an experiment in power/freedom, an attempt to simultaneously trace, deploy, and overcome the historical conditions in which it finds itself: the generic constraints and affordances of social networking as they have been developed over the past decade by Facebook and Twitter, and the ideological constraints and affordances of public perceptions of the dark web, which hold that the dark web is useful for both taboo activities and freedom from state oppression.52


During the ten months Gehl delved into the world of dark web social networking, the number of users grew from 3,000 to 24,000, with 170 unique groups within the specific .onion site he studied.53 As he describes, while some users may be attracted to the anonymity of the dark web in order to explore taboo content, others may be trying to escape censorship by the near-monopoly that sites like Facebook hold over digital communication. Thanks to infamous cases, such as the drug deals conducted on dark web sites like Silk Road, it is unlikely that the dark web will ever escape its reputation as a playground for the evil underbelly of the internet.54 It does, however, offer an example of how the frustration users feel at being held hostage by the dictates of a few massive social media companies can result in decisive steps to avoid them altogether. Still, the prospect of the dark web becoming a viable alternative for the billions of users who currently use Facebook is unlikely. If the hoops of downloading the Tor browser to access .onion websites do not scare off potential users, stories of drug deals and criminals lurking on the dark web will. Those who wish to reach a mainstream audience with their messages are also out of luck, as 24,000 users is a drop in the bucket compared to the billions who access Facebook daily.
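
To make the mechanics described above a little more concrete, the sketch below illustrates, at the level of a single request, what “accessing a .onion site” involves: the request is handed to a locally running Tor client, which resolves the address and relays the traffic through the Tor network rather than the ordinary internet. This is a minimal sketch under stated assumptions rather than anything drawn from Gehl’s study; the default Tor SOCKS port, the requests[socks] dependency, and the placeholder .onion address are all illustrative.

```python
# A minimal, illustrative sketch (not from Gehl's study): routing an HTTP request
# through a locally running Tor client so that a .onion hidden-service address can
# be reached at all. Assumptions: a Tor client is listening on its default SOCKS
# port (9050), and the requests library is installed with SOCKS support
# (pip install "requests[socks]"). The .onion address below is a placeholder.
import requests

# "socks5h" (rather than "socks5") asks the proxy itself to resolve hostnames,
# which matters because ordinary DNS cannot resolve .onion names.
TOR_SOCKS_PROXY = "socks5h://127.0.0.1:9050"

PROXIES = {
    "http": TOR_SOCKS_PROXY,
    "https": TOR_SOCKS_PROXY,
}


def fetch_onion_page(url: str) -> str:
    """Fetch a page from a .onion site by tunneling the request through Tor."""
    response = requests.get(url, proxies=PROXIES, timeout=60)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    # Hypothetical placeholder address, used only to show the shape of a .onion URL.
    placeholder = "http://exampleonionaddressabcdefghijklmnopqrstuvwxyz234567.onion/"
    print(fetch_onion_page(placeholder)[:500])
```

The point of the example is simply that reaching these spaces requires deliberate extra steps and tooling, which is part of why the dark web is unlikely ever to rival mainstream platforms.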

Where Do We Go From Here?


The public is still left with the question of how best to remedy the ills of communication restriction on mainstream social media. Nicholas Carr correctly points out the current dilemma of digital communication through social media, writing that there are “a few large companies, free to set their own rules, and wield control over public and private communications . . . Propaganda, lies, and defamatory attacks get broadcast to the masses, often by sources that can’t be traced.”55 Carr’s solution to the modern landscape of social media is to create a distinction between private communication and the content of “broadcasters.”56 This solution is drawn from his earlier discussion of the amateur radio operators during the Titanic disaster. Carr explains that “an Instagrammer with a hundred followers can be assumed to be engaged in conversation; an Instagrammer with a hundred thousand followers is a broadcaster.”57 Other commentators, however, point out that this is a difficult distinction to make; one of them is Antón Barba-Kay, who wrote a rebuttal to Carr’s article. Barba-Kay claims that drawing a line between private and public interaction on social media misses the whole point, and that “social media thrives on confounding this distinction. The presumption that you are privately ‘sharing’ when you publicly post . . . is the whole point.”58 Barba-Kay makes a salient point; who is to say whether the hypothetical hundred-follower user Carr proposes intends to keep their communication private or hopes it will go viral? Social media offers users a unique opportunity to customize the way they wish to engage. While some users simply consume social media passively without creating content, others post into the ether and can unexpectedly have their content broadcast to millions.


Social media is here to stay, so the logical path forward is to address the legal parameters under which Facebook and similar sites operate and to create clear, replicable standards of which users and developers are aware. Reforms to Section 230 of the Communications Decency Act are required. Currently, the dual “sword and shield” prongs grant too much power to Facebook, allowing it to wield the editorial hammer of a publisher while enjoying the protective shield of a neutral platform. It is unfair to permit Facebook and companies like it the privileges granted to platforms, avoiding responsibility for third-party content, while they retain the power of publishers to decide what is and is not allowed to be shared online. Ellen Goodman and Ryan Whittington’s summary of Section 230 offers two case studies of how proposals to alter the law have proven effective. One example involved the act to “Fight Online Sex Trafficking,” which found bipartisan support, passed with 97-2 senatorial approval in 2018, and was signed by Donald Trump.59 This instituted editorial control by deeming that digital content aiding sex trafficking falls outside the realm of protection under Section 230; any attempt to facilitate sex trafficking online must be removed from social media sites, no debate needed. A similar proposal was made by Senator Joe Manchin, who sought to bring drug trafficking under the same umbrella as the Sex Trafficking Act.60 These proposals outline specific content that is predetermined to count as the kind of harmful content Mark Zuckerberg sought to define in his Georgetown speech.61 Whether it is done by law or explicitly added to Facebook policy, the language must become less vague rather than granting absolute discretion to the leaders of social media companies under Section 230.

Another solution would require social media platforms like Facebook to choose a path, either as a platform or as a publisher. Platforms would have the authority to strike down only content that violates governmentally decided guidelines; all other third-party content would stay on the platform at the discretion of those who post it. Acting as a publisher would grant companies further editorial control over what is moderated, but it should remove their protection under Section 230 and make them responsible for the content on their sites. If we wish to continue to use and develop social media, clearer guidelines and rules of engagement must be established and reliably enforced. We cannot continue our current system of confusion and poorly defined roles.



Endnotes

1 Mark Zuckerberg, “Facebook CEO Mark Zuckerberg’s entire speech (at Georgetown University),” CNET Highlights, October 17, 2019, YouTube video, 37:04, https://www.youtube.com/watch?v=nYMX-ArjYz8.

2 Zuckerberg, “Facebook CEO.”

3 Ibid.

4 Ibid.

5 Ibid.

6 Ibid.

7 Ibid.

8 Ibid.

9 Ellen P. Goodman and Ryan Whittington, “Section 230 of the Communications Decency Act and the Future of Online Speech,” The German Marshall Fund of the United States, no. 20 (August 2019): 1-15, https://www.jstor.org/stable/resrep21228.

10 Goodman and Whittington, “Section 230,” 2-3.

11 Ibid., 3.

12 Ibid.

13 Nicholas Carr, “How to Fix Social Media,” The New Atlantis, no. 66 (2021): 3–20, https://www.jstor.org/stable/27115502.

14 Carr, “How to Fix,” 9.

15 Ibid., 10.

16 Ibid., 3.

17 Ibid., 4.

18 Benjamin W. Cramer, “From Liability to Accountability: The Ethics of Citing Section 230 to Avoid the Obligations of Running a Social Media Platform,” Journal of Information Policy 10 (2020): 123–50, https://doi.org/10.5325/jinfopoli.10.2020.0123. 124.

19 Cramer, “From Liability to Accountability,” 124.

20 Ibid.

21 Goodman and Whittington, “Section 230,” 2–3.

22 Ibid., 2.

23 Jessica Taylor Piotrowski and Patti M. Valkenburg, “Social Media,” in Plugged In: How Media Attract and Affect Youth (New Haven: Yale University Press, 2017), 218–43, http://www.jstor.org/stable/j.ctt1n2tvjd.16.

24 Piotrowski and Valkenburg, “Social Media,” 220.

25 Zuckerberg, “Facebook CEO.”

26 Piotrowski and Valkenburg, “Social Media,” 221.

27 Antón Barba-Kay, “Destroy Social Media, or Be Destroyed by It,” The New Atlantis, no. 67 (2022): 76–80, https://www.jstor.org/stable/27115531. 77.

28 Goodman and Whittington, “Section 230,” 2-3.

29 Ibid., 2.

30 Ibid., 5.

31 Ibid., 3.

32 Zuckerberg, “Facebook CEO.”

33 Goodman and Whittington, “Section 230,” 5.

34 Cramer, “From Liability to Accountability,” 128.

35 Philip Di Salvo, “Leaking Black Boxes: Whistleblowing and Big Tech Invisibility,” First Monday 27 (2022): 1-5.

36 Di Salvo, “Leaking Black Boxes,” 2.

37 Ibid., 4-5.

38 Ibid., 5.

39 Ibid., 2.

40 Kate Klonick, “Does Facebook’s Oversight Board Finally Solve the Problem of Online Speech?,” Models for Platform Governance (2019): 51-53, http://www.jstor.org/stable/resrep26127.11.

41 Klonick, “Does Facebook’s Oversight Board,” 51.

42 Ibid., 52.

43 Ibid., 51.

44 Ibid., 53.

45 Tatyana Bolton, Mary Brooks, Chris Riley, and Paul Rosenzweig, “Donald Trump and the Facebook Oversight Board,” R Street Institute, 2021, http://www.jstor.org/stable/resrep30686. 1.

46 Bolton, Brooks, Riley, and Rosenzweig, “Donald Trump,” 1.

47 Klonick, “Does Facebook’s Oversight Board,” 53.

48 Robert W. Gehl, “Power/freedom on the Dark Web: A Digital Ethnography of the Dark Web Social Network,” New Media & Society 18, no. 7 (2016): 1219–35, https://doi.org/10.1177/1461444814554900.

49 Gehl, “Power/freedom,” 1220.

50 Ibid., 1222.

51 Ibid., 1220.

52 Ibid., 1219.

53 Ibid., 1222.

54 Ibid.

55 Carr, “How to Fix,” 16.

56 Ibid., 17.

57 Ibid.

58 Barba-Kay, “Destroy Social Media,” 77.

59 Goodman and Whittington, “Section 230,” 5.

60 Ibid., 7.

61 Zuckerberg, “Facebook CEO.”