Coordinated Authentic Behavior

The Back Story


Background

Online disinformation campaigns are insidious, and stopping them can feel like playing a game of whack-a-mole. From efforts to undermine elections to the spread of false health information, bad actors promote disinformation at scale through network effects, often relying on the people most susceptible to the misinformation to continue its spread. Combined with the free, anonymous, and ubiquitous nature of the internet, these tactics create information integrity challenges that no entity can defend against alone, especially since this is a global issue whose cultural nuances vary from country to country. In a world where disinformation is now everywhere, how can one make a dent in such a challenging and complex problem?


To simplify the problem-solving process, the options can be narrowed down to two broad categories: (1) address the issue through broad, institutional change by governments and platform companies, or (2) empower individuals and resource-strapped organizations to do what is within their capabilities. The vast majority of people affected by false information fall into the second category. They are the small civil society organizations working on the ground in crisis areas. They are the candidates for local office trying to make a change in their town. They are the grassroots policy advisors looking to educate constituents or get new policies adopted. They all need support, and they cannot wait for institutional changes to take place.


The goal of our project was to identify effective approaches, analyze them, and codify a set of best practices based on tactics that have worked in the field. We focused specifically on emergent strategies that everyday people could use regardless of context or geographic location. We accomplished this by studying what is already working in regions around the world and identifying strategies that could be abstracted and presented for non-experts.

Research


To develop these tactics, we first identified numerous cases where disinformation was pervasive. Among these cases, a few compelling narratives came to the forefront, such as:


  1. 2014-2016 Ebola Outbreak in West Africa: UNICEF and the WHO collaborated in conducting on-the-ground interviews and training community leaders to spread correct information and overcome false rumors disseminated by individuals, radio, and social media.

  2. 2017 French Election: Forewarned French civil society, aided by election-related laws, prevented disinformation from spreading when a pro-Russian hack-and-leak operation took place just before the election.

  3. 2018 Irish Abortion Referendum: An Irish civic initiative and journalists shed light on deceptive political advertisements that contained disinformation and were paid for by foreign anti-abortion groups targeting Irish voters.

  4. 2018 US Midterm Elections: A large, cross-functional effort across government, tech companies, the media, and NGOs rapidly identified, reported, and shamed disinformers while simultaneously working to ensure the integrity of the elections.


What these case studies have in common is that they “successfully” mitigated disinformation, with success broadly defined as decisive actions taken to remove or reduce exposure to disinformation. To organize what these cases shared, we analyzed them through the following lenses:


  • Objective - High-level goal that the actor hopes to accomplish

  • Context - Unique variables that may influence the success of a particular objective

  • Tactics - Practical suggestions for someone looking to replicate prior successes towards a particular objective


Through our synthesis, we were struck by a central finding: in successful situations, individual strategies and tactics did not work in isolation. Coordination, where the actor interacted with multiple entities, was front and center in every successful case. Each instance where false information was limited in size or scope involved coordination in some form between platforms, journalists, civil society (e.g., NGOs, individuals), and government. In every featured case study, at least three of these four entity types were involved.


Coordinated Authentic Behavior Framework

We combined these entities with the tactics abstracted from the case studies to create the Coordinated Authentic Behavior framework. The framework provides five courses of action that any person or group should consider when mitigating false information, online or otherwise:


  1. Remove harmful content

  2. Limit the scope of disinformation

  3. Expose the disinformation

  4. Handle bad actors

  5. Build public awareness and resilience


Many of these steps can be pursued simultaneously, but in providing the beginnings of a blueprint, these steps emerged as a sensible order of operations. The recommendations are by no means easy to achieve; however, each bucket and its tactics were chosen with our audience in mind, based on what we learned from the case studies, to give agency to those fighting disinformation with limited resources.


Remove Harmful Content

Naturally, when one sees misinformation, the first instinct is to ask, “How do we get rid of this content?” As a result, one of the first tactics listed in the framework is to escalate harmful content to the media and other civil society groups to pressure platforms into removal. This tactic involves working with journalists, as we saw in both the Irish Abortion Referendum and the 2018 US Midterm Elections. In the former, the Transparent Referendum Initiative (TRI), a volunteer-led civic initiative, formed an early partnership with the news agency Storyful and began training media professionals in open-source intelligence techniques. TRI and Storyful tracked disinformation narratives and published their investigation methods and results. These stories were picked up by larger news outlets, most notably BuzzFeed, and under that scrutiny both Facebook and Google adjusted their advertising policies.


Limit The Scope of Disinformation

In tandem with removing bad information, it is important to tackle it at scale. During the 2014-2016 Ebola outbreak, organizations like UNICEF and the WHO worked together to train village leaders in West Africa on the facts about Ebola. These leaders then used their social capital to circulate WHO-verified information in their villages. One does not have to be a large NGO to use this tactic; others can replicate the strategy by recruiting credible influencers, whoever that may be in their community, to disseminate correct information. Influencers don’t have to be social media stars; they could be local community leaders, trusted religious institutions, or other individuals who represent the voice of their region. Their messages limit the scope of false information by amplifying the correct information.


Expose the Disinformation

As mentioned earlier, the sources of disinformation are often hidden in the anonymous depths of the internet, making direct attribution difficult. Accordingly, the tactics for this step vary along the spectrum of challenges that come with identifying and exposing the perpetrator. During the 2017 French election, the campaign of presidential candidate Emmanuel Macron closely monitored attempts at hacking and spreading disinformation and reported them to both platforms and journalists. While very specific circumstances enabled this tactic’s success, it could be replicated on a smaller scale by establishing a transparent communication process that keeps civil society and the public aware of how the government is intervening in ongoing campaigns.


Another example of this tactic in practice came in the 2018 US Midterm Elections. In the lead-up to the election, many platforms’ misinformation policies were in their nascent stages; in fact, most inauthentic behavior was enforced under spam policies, if at all. Unless reported content was “verifiably false,” it would likely stay online. Recognizing the chasm between platform policy and the real world, journalists from The New York Times and their readers partnered to “name and shame” sources of disinformation across the web. The result was a live, ongoing series of reporting on online mis- and disinformation activity leading up to the 2018 elections.


Handle Bad Actors

Given the anonymity of the internet, how can one hold disinformers accountable? Deplatforming, the process by which a platform company permanently bans a user from its service, is one tool institutions use to accomplish this goal. Though users are not typically consulted before a company deplatforms a bad actor, they can contribute to the body of evidence these companies use to make their decisions. In some cases, the government will also act to punish propagators of disinformation, usually through the judicial system. Alex Jones, a right-wing conspiracy theorist, is known for promoting a number of harmful conspiracy theories online, chief among them that the tragedy at Sandy Hook Elementary was a hoax. The parents of Sandy Hook victims were able to appeal to the legal system: Jones was sued for defamation and ordered by the court to pay $100,000. While legal action is not an option for many victims of disinformation, depending on local laws there can be legal recourse. Learning whether many people have been targeted by the same disinformation campaign can also help victims band together to fight it.


Build Public Awareness and Resilience

Ultimately, people and organizations fighting disinformation want to inoculate the public against its harms. This can be a daunting task, so we suggest pairing up with other organizations that share the same goal. During the Ebola outbreak, civil society organizations conducted on-the-ground interviews and set up centers in each village to understand which erroneous beliefs were most important to address. With community buy-in, they were able to equip people to correct false information across platforms and media as it appeared over time.

Next Steps

In line with our collaborative approach to fighting disinformation, this work would do best in partnership with an academic or non-profit organization. We believe a grassroots approach to spreading this framework and getting it to the right people — those who lack the knowledge or resources to fight disinformation independently — would bolster their confidence in coordinating counter-efforts. The framework provides a guide to the entities available for collaboration and the objectives to consider in the short and long term. Crucially, we support each recommended tactic with relevant case studies and context in an indexed, searchable database. We are hopeful that others will see the value in this collection and continue to add examples of techniques used to mitigate disinformation, creating a living document with comprehensive coverage of effective responses.
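
For contributors who want to build on that database, here is a minimal sketch of how tactics and their supporting case studies might be modeled for indexing and search. This is an illustration only, not the project’s actual schema: every class, field, and example record below is hypothetical, written in Python for concreteness.

```python
from dataclasses import dataclass
from enum import Enum


class Objective(Enum):
    """The five courses of action in the framework, in suggested order."""
    REMOVE_HARMFUL_CONTENT = 1
    LIMIT_SCOPE = 2
    EXPOSE_DISINFORMATION = 3
    HANDLE_BAD_ACTORS = 4
    BUILD_AWARENESS_AND_RESILIENCE = 5


@dataclass
class Tactic:
    """One tactic, coded through the three analytic lenses from our synthesis."""
    objective: Objective      # high-level goal the actor hopes to accomplish
    context: str              # variables that may influence success
    description: str          # practical suggestion for replication
    entities: set[str]        # coordinating entities: platforms, journalists,
                              # civil society, government
    case_studies: list[str]   # supporting examples from the field


def search(tactics: list[Tactic], keyword: str) -> list[Tactic]:
    """Naive keyword search over tactic descriptions and case-study names."""
    kw = keyword.lower()
    return [
        t for t in tactics
        if kw in t.description.lower()
        or any(kw in c.lower() for c in t.case_studies)
    ]


# Hypothetical entry drawn from the Irish Abortion Referendum case study.
tactics = [
    Tactic(
        objective=Objective.REMOVE_HARMFUL_CONTENT,
        context="Deceptive political ads funded by foreign groups",
        description="Escalate harmful content to media partners to pressure "
                    "platforms into removal or policy change.",
        entities={"civil society", "journalists", "platforms"},
        case_studies=["2018 Irish Abortion Referendum"],
    )
]

print(len(search(tactics, "referendum")))  # -> 1
```

Coding each tactic against an explicit objective and set of coordinating entities is what would make cross-cutting queries straightforward, such as finding every tactic that involves journalists, or every case study supporting a given course of action.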