
Fool Me Once: The Case for Government Involvement in Regulation of “Fake News” and Other Online Advertising

Issues and Recommendations Summary

Abby K. Wood, Ann M. Ravel, Irina Dykhne[1]

 

 

Fake news is not “news”; it is native political advertising. Disinformation spread under the guise of “news” is particularly confusing to voters. Fake news, or, as we call it, “disinformation advertising”, undermines voter competence: voters’ ability to make the choice that is right for them. Regulations to address it should aim to improve voter competence in three ways: (1) reduce the cognitive burden on voters by reducing the amount of disinformation to which they are exposed; (2) educate and nudge social media users in order to inoculate voters against the negative effects of disinformation and teach them how to avoid unintentionally spreading it; and (3) improve transparency to facilitate speech that counters disinformation after it is spread, creating the possibility that voters will receive “corrected” information. The transparency improvements we recommend will conform disclosure requirements for online advertising to those for broadcast, cable, satellite, and radio political ads.

 

Who regulates?

The division of responsibility for regulation between government and the platforms is a topic of some debate, but fairly clear lines can be drawn under current First Amendment jurisprudence. Government can and should act to radically improve transparency. Whether the Constitution permits government to require platforms to “nudge” social media users to opt not to view disinformation is less clear, but the reform itself is important, as we describe below.

The Constitution limits the possibilities for government regulation aimed at reducing the quantity of disinformation on the platforms themselves. The platforms should therefore lead efforts to reduce the amount of disinformation their users see. Civil society should be vigilant and involved, because the platforms have been inconsistent in their efforts and willingness to act.

 

Important Features of the “Fake News” Problem

Disinformation advertising has key features that should guide plans to reduce and combat it: it undermines our sense of a verifiable truth, it splinters the electorate with divisive messages, it carries very little transparency, and its financing is difficult to trace.

There has been considerable interest in foreign involvement in disseminating disinformation on social media, but some disinformation advertising was placed by Americans located in the United States who wanted to create chaos and turn a profit. Regardless of the source, disinformation advertising is dangerous to our democracy. By weakening our sense that there is a verifiable truth, disinformation undermines political equality and trust in our institutions.

Moreover, the microtargeting that ad buyers use to place political advertising can fragment and splinter the electorate. This problem goes beyond political polarization: by fracturing the electorate into countless small groups, each with its own set of agreed-upon “truths”, microtargeted disinformation renders democratic discourse extremely difficult.

Additionally, unlike political advertisers on radio or television, political advertisers online face very little regulation: transparency requirements are minimal and contain loopholes. One key distinction is that disclosures required for television and radio political ads reveal the ads’ targets, whereas disclosures for online political ads do not. In a world of microtargeted ads subject to few disclosure or disclaimer requirements, small groups of social media users receive narrowly targeted ads, which may contain divisive or disinformative content, and which then disappear. The group running the ad can avoid accountability entirely, because no one aside from the targeted users can see it. Because of this different treatment of targeting information, counter-speech, meaning the opportunity for the opposing side to speak to the same target audience, is available for television and radio political advertising but not for online political advertising.

The money involved in online political advertising is more diffuse than in ad buys on traditional media. Like traditional ads, some ads produced for the Internet have high production costs; others, like memes, are free to create. And unlike television and radio ads, some online ads are placed for free: posting an ad to one’s Facebook Page, or tweeting it into a politically active social network in hopes it goes viral, costs nothing. Advertisers might pay a platform to promote the ad and place it in certain users’ newsfeeds. They might also buy “likes”, “shares”, and “retweets” outside the platforms, from “troll farms” and “sock puppets” (humans who create false profiles and boost content) or from “bot armies” (machines mimicking human behavior to boost content).

Finally, tracing the money involved in online political advertising is difficult, particularly when it comes from abroad. Our existing campaign finance enforcer, the Federal Election Commission, is not set up for complicated money tracing and data analysis. Any legislation that results from the current public interest in disinformation advertising should allocate investigation and enforcement power to an agency better equipped to conduct the kinds of investigations that can trace money spent online, sometimes across borders. The U.S. Treasury’s Financial Crimes Enforcement Network (FinCEN) has the expertise and capabilities necessary to enforce restrictions on disinformation advertising, though it cannot act in this realm without congressional authorization. Congress should either authorize it to act or model a new investigative body on FinCEN.

 

The Role of Government Regulation

Within the realm in which the Constitution allows for government action, the government must act. The platforms, especially Facebook, are suddenly eager to demonstrate their willingness and ability to self-regulate after more than a year of heel-dragging. But government should regulate where, as here, market failures impose external costs that market actors do not internalize. The costs of disinformation advertising fall upon voters, reducing their ability to make the choice that is right for them in the voting booth. And many features of disinformation advertising redound to the benefit of the platforms, both because they profit directly from selling the ads and because sock puppets and bots inflate the number of users, artificially growing the “audience” for other ad sales.

Government regulation is particularly powerful and useful when it solves informational deficits, as it does in the broader campaign finance regulatory framework. The government already has a framework for political ad transparency, though it has long needed an update. Part of the reason it needs updating points to another reason government should regulate: the platforms lack political will. They created loopholes in the online disclosure framework by securing exemptions to transparency requirements through FEC Advisory Opinions. The regulations we propose will improve transparency in several ways. First, they will facilitate coordination on, and standardization of, political advertising data; coordinated, standardized data reporting is crucial for enabling civil society oversight and rapid counter-speech. Second, just as our existing transparency requirements for television and radio political advertising facilitate counter-speech, online political ad transparency regulations will do the same. Finally, and importantly, government regulation signals to foreign meddlers and to voters that our government takes its responsibility to safeguard our democracy seriously.

 

Government regulations to improve transparency

Close the loophole for disclaimers in online advertising and update requirements. The FEC currently limits disclaimer requirements to “communications placed for a fee on another person’s website.” This should be expanded to cover all communications that are created for a fee, placed for a fee, or boosted for a fee, including advertisements boosted by bot armies or liked, shared, or retweeted by trolls or sock puppets.

Updating disclaimer requirements will also require acknowledging that placing a disclaimer on the ad itself in the newsfeed (or, less preferred, on the landing page reached by clicking the ad) is no longer impracticable, and eliminating the current FEC exemption, which was written at a time when (1) landing-page disclaimers were always an available option, and (2) Facebook ads in particular were one-sixth their current size.

The current disclaimer-triggering categories, express advocacy at any time or reference to a clearly identified candidate within 60 days of a general election or 30 days of a primary, may also be outdated. Given the much longer timeframe over which disinformation ads are placed, regulators should consider a longer electioneering-communication window. A sketch of this trigger logic follows.
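As an illustration only, the triggering rule reduces to a few mechanical checks. Every name and default in this sketch is a hypothetical of ours, not statutory text; the 60- and 30-day windows reflect the current electioneering-communication rule, and a longer window would simply raise the number.

```python
# Hypothetical disclaimer-trigger check; all names are illustrative.
from datetime import date

def triggers_disclaimer(placed_for_fee: bool,
                        express_advocacy: bool,
                        mentions_candidate: bool,
                        run_date: date,
                        election_date: date,
                        window_days: int = 60) -> bool:
    # Under the proposal above, paid creation or boosting would also count
    # as "placed for a fee" for purposes of this check.
    if not placed_for_fee:
        return False
    if express_advocacy:          # express advocacy triggers a disclaimer at any time
        return True
    days_before_election = (election_date - run_date).days
    return mentions_candidate and 0 <= days_before_election <= window_days
```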

Current social science findings suggest that enhanced disclaimer requirements, such as listing the top five donors to the group running the ad, reduce the effectiveness of negative campaign ads. Most disinformation advertising that mentions a candidate is negative, so a top-five-donors disclaimer requirement is worth considering.


Create a Single Repository for All Versions of Online Ads. Government should require all platforms to save every version of every political ad placed online, whether placed “for a fee” or not, and post it to a dedicated repository. All advertisers should provide a link to the repository on their home pages and social media pages.

Each ad’s entry in the repository should contain the following information, most of which tracks the FCC requirements for the Political File (a sketch of such a record follows the list):

  • Entity that purchased the ad, with full contact information (including the name of the sponsor’s chief executive, if the sponsor is not a committee, group, or individual).
  • Every version of the ad, in its entirety.
  • When the ad ran (date and time of the start and end of ad promotion).
  • Cost of the ad placement and promotion.
  • The name of the candidate the ad addresses, the election to which it is directed, or the issue discussed.
  • The maximum number of direct impressions and engagements that could result from the ad.
  • Targeting criteria, or the “audience ID” created by the platform when a campaign or group provides its own list of user names to target. This requirement deserves explanation, because its purpose is to enable counter-speech. Different campaigns use different targeting methods: smaller campaigns lack the data to generate their own lists of names and so use the platforms’ targeting criteria, while more powerful players (parties and big campaigns) have rich internal data and give the platforms a list of user names to target. The actual list of names need not be disclosed in the repository. Instead, the platform should maintain the list of names for the audience of each ad internally, under an ad-specific “audience ID”, so that the exact population of users remains targetable by other speakers who want to reach the same audience.
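To make this record structure concrete, the following minimal sketch renders it as a data type. The field names and types are our illustrative assumptions, not a prescribed schema.

```python
# A sketch of one standardized repository record. The schema, field names,
# and types are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class AdRepositoryRecord:
    purchaser: str                            # entity that purchased the ad
    purchaser_contact: str                    # full contact information
    purchaser_chief_executive: Optional[str]  # required if sponsor is not a committee, group, or individual
    ad_version: bytes                         # this version of the ad, in its entirety
    run_start: datetime                       # date and time ad promotion started
    run_end: datetime                         # date and time ad promotion ended
    cost_usd: float                           # cost of placement and promotion
    subject: str                              # candidate named, election addressed, or issue discussed
    max_impressions: int                      # maximum direct impressions/engagements possible
    targeting_criteria: List[str] = field(default_factory=list)  # platform categories, if used
    audience_id: Optional[str] = None         # ad-specific ID for a custom audience; the name list stays with the platform
```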

The ad data stored must be uniform in format across advertisers/platforms and over time, similar to the uniform format of online FEC filings. This will enable counter-speech to occur quickly. It will also ease data-savvy journalists’ efforts to report on the state of online political advertising.

Data should be stored long enough to enable post-election enforcement. The FCC requires stations to maintain the Political File for two years. We acknowledge the massive amount of data involved in online advertising, with tens of thousands of versions of ads run each day. But data storage is cheap, and we are confident that it is possible to maintain the repository for the duration of the campaign plus a post-campaign period long enough to enable enforcement.
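A back-of-the-envelope estimate supports the storage claim. The “tens of thousands of versions per day” figure comes from the text above; the one-megabyte average size per stored version is our assumption.

```python
# Rough repository sizing; the per-version size is an assumption.
versions_per_day = 50_000   # "tens of thousands of versions of ads run each day"
avg_version_mb = 1          # assumed average stored size of one ad version

daily_gb = versions_per_day * avg_version_mb / 1_000
yearly_tb = daily_gb * 365 / 1_000
print(f"{daily_gb:.0f} GB/day, {yearly_tb:.1f} TB/year")  # ~50 GB/day, ~18.3 TB/year
```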

Which ads should be stored in the repository? Three easy-to-automate triggers for inclusion could help keep platform costs low (see the sketch below). First, where an ad mentions a candidate or ballot proposition, it should be included. This tracks the current “electioneering communication” rule, though we believe all ads mentioning candidates should go in the repository, no matter when in the political cycle they run. Second, when an ad uses targeting categories that are political in nature, it should go in the repository, even if it does not mention a candidate. Third, if advertising is bought through a political ad sales team at the platform, it should go in the repository. We note that pure issue ads can be put in the repository, just as ads on “issues of national importance” are included in the Political File for television and radio. Although issue ads will be in the repository, they will not be subject to enforcement for lack of disclaimers; they are included solely to enhance the “marketplace of ideas” online, enabling counter-speech where none is currently possible.
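A minimal sketch of the three triggers as an automated check; the function, its inputs, and the category lists are all hypothetical.

```python
# Hypothetical automation of the three repository-inclusion triggers.
from typing import Set

def belongs_in_repository(ad_text: str,
                          targeting_categories: Set[str],
                          bought_via_political_sales_team: bool,
                          candidate_names: Set[str],
                          ballot_propositions: Set[str],
                          political_categories: Set[str]) -> bool:
    text = ad_text.lower()
    # Trigger 1: mentions a candidate or ballot proposition, at any point in the cycle.
    mentions = any(name.lower() in text for name in candidate_names | ballot_propositions)
    # Trigger 2: uses politically oriented targeting categories.
    political_targeting = bool(targeting_categories & political_categories)
    # Trigger 3: purchased through the platform's political ad sales team.
    return mentions or political_targeting or bought_via_political_sales_team
```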

The Political File for television and radio advertising has no minimum spending threshold for inclusion; any political ad purchase must be disclosed. We think this is wise. Mark Zuckerberg’s proposal that Facebook Pages running political ads reveal all ads they are “currently” running, while flawed in its disaggregation and lack of ad retention, does one thing well: it has no minimum ad purchase amount that triggers public visibility of the ads. A current proposal in the Senate contemplates a $10,000 aggregate spending trigger for inclusion in the repository, an amount that is entirely too high given the incredible reach of online ads. Online advertising impressions cost less than a penny apiece, so a $10,000 aggregate allows groups to target a million voters with untraceable advertisements (including disinformation) before their ads become publicly viewable. In small states or crucial districts, such “narrowcasting” could swing an election. We suggest spending triggers no higher than $250 in the aggregate. Moreover, we propose that the government require the platforms, rather than the ad buyers, to post the ads to the repository. Platforms will find it easier simply to post all political ads than to keep running aggregate spending totals for every group purchasing ads through every ad-buying consultant across every platform.
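The reach arithmetic is worth making explicit; the one-cent figure below is the text’s upper bound on cost per impression, so the impression counts are lower bounds.

```python
# Reach available below a disclosure trigger, at <= $0.01 per impression.
cost_per_impression = 0.01   # dollars; "less than a penny apiece" makes this an upper bound

for trigger in (10_000, 250):            # Senate proposal vs. our suggested ceiling
    print(trigger, int(trigger / cost_per_impression))
# 10000 -> 1,000,000 impressions before any public visibility
# 250   ->    25,000 impressions
```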

 

Eliminate Corporate Donor (Spender) Anonymity. Anonymous LLCs and 501(c) organizations fund outside spending, and we know that many powerful political actors have used LLC and 501(c) spending to circumvent disclosure. Loopholes in the disclosure requirements for corporations mean that corporations can easily hide foreign spending connected to our elections. Because we know that disinformation arises in part from the Russian government and from individuals in Eastern Europe, we must close the loophole for disclosure of corporate political expenditures in order to fully address the problem.


Platform self-regulation is important but insufficient

There are many other things the platforms themselves can do to reduce the quantity of disinformation and the decay in voter competence that disinformation advertising causes. They can start by enforcing their terms of service and by identifying and labeling disinformation advertising and those who spread it. Beyond that, there are good reasons to hope they will remove disinformation once it is labeled: their users’ ability to make the right choice in the voting booth depends on their ability to avoid an onslaught of confusing disinformation.

Voting is a democratic act, and it is the government that must ensure the integrity of our electoral process and work to prevent the erosion of voter competence caused by disinformation.



[1] Wood is Associate Professor of Law, Political Science, and Public Policy at the University of Southern California (awood@law.usc.edu); Ravel is Lecturer in Law at Berkeley Law and former Chair of the Federal Election Commission and the California Fair Political Practices Commission; Dykhne is a J.D. candidate at the USC Gould School of Law.
