Identifying and Countering
FAKE NEWS
Mark Verstraete,[1] Derek E. Bambauer,[2] & Jane R. Bambauer[3]
EXECUTIVE SUMMARY
Fake news has become a controversial, highly contested issue recently. But in the
public discourse, “fake news” is often used to refer to several different phenomena. The
lack of clarity around what exactly fake news is makes understanding the social harms that
it creates and crafting solutions to these harms difficult. This report adds clarity to these
discussions by identifying several distinct types of fake news: hoax, propaganda, trolling,
and satire. In classifying these different types of fake news, it identifies distinct features of
each type of fake news that can be targeted by regulation to shift their production and
dissemination.
This report introduces a visual matrix to organize different types of fake news and
show the ways in which they are related and distinct. The two defining features of different
types of fake news are 1) whether the author intends to deceive readers and 2) whether the
motivation for creating fake news is financial. These distinctions are a useful first step
towards crafting solutions that can target the pernicious forms of fake news (hoaxes and
propaganda) without chilling the production of socially valuable satire.
The report emphasizes that rigid distinctions between types of fake news may be
unworkable. Many authors produce fake news stories while holding different intentions
and motivations simultaneously. This creates definitional grey areas. For instance, a fake
news author can create a story as a response to both financial and political motives. Given this, an instance of fake news may exist somewhere between hoax and propaganda, embodying characteristics of both.

[1] Fellow in Privacy and Free Speech, University of Arizona, James E. Rogers College of Law; Postdoctoral Research Associate, University of Arizona, Center for Digital Society and Data Studies.
[2] Professor of Law, University of Arizona, James E. Rogers College of Law; Affiliated Faculty, University of Arizona, Center for Digital Society and Data Studies.
[3] Professor of Law, University of Arizona, James E. Rogers College of Law; Affiliated Faculty, University of Arizona, Center for Digital Society and Data Studies.
The report identifies several possible solutions based on changes to law, markets, code,
and norms. Each has advantages and disadvantages. Legal solutions to fake news are likely
to conflict with strong constitutional (First Amendment) and statutory (section 230 of the
Communications Decency Act) protections for speech. Market-based solutions are likely
to only reach a subset of fake news. Code solutions may be limited by the difficult
judgments required to distinguish satire from other types of fake news. Norms and other
community solutions hold promise but are difficult to create through political mechanisms.
Some types of fake news are more responsive to regulation than others. Hoaxes are
produced primarily in response to financial motivations, so solutions that remove (or
decrease) the profit from fake news stories are likely to reduce the number of hoaxes
created. By contrast, propaganda is produced primarily for non-financial motivations, so
changes in its profitability are unlikely to significantly reduce its output.
The report introduces several solutions that can serve as starting points for discussion
about the practical management of fake news, and networked public discourse more
generally. These starting points include: expanding legal protections for Internet platforms
to encourage them to pursue editorial functions; creating new platforms that do not rely on
online advertising; encouraging existing platforms to experiment with technical solutions to
identify and flag fake news; and encouraging platforms to use their own powerful voices to
criticize inaccurate information.
TABLE OF CONTENTS

Executive Summary
Table of Contents
Introduction
Part I. A Typology of Fake News
    A. Definitions
    B. Typology
Part II. Problems
    A. Mixed Intent
    B. Mixed Motives
    C. Mixed Information (Fact and Fiction)
Part III. Solutions
    A. Law
    B. Markets
    C. Architecture / Code
    D. Norms
Part IV. A Way Forward
    A. Law
    B. Markets
    C. Architecture / Code
    D. Norms
Conclusion
INTRODUCTION
Fake news[4] has been the subject of constant discussion since commentators suggested it played a critical role in the 2016 election results.[5] President Donald Trump has fueled further discussion of fake news by invoking it in a variety of contexts, from discussions about unfavorable polling data to an epithet for CNN.[6] The term has been used to refer to so many things that it seems to have lost its power to denote at all; as a result, several media critics have recommended abandoning it entirely.[7] Although the term “fake news” is confusing, it does point to several real threats to meaningful public debate on the Internet.
This report maps the field of fake news and describes why proposed solutions have
been ineffective thus far. It offers recommendations for new approaches. In Part I, we
describe the several distinct phenomena that have been placed under the rubric “fake
news.” We introduce these problems in a matrix to show how they are related and how
regulatory solutions interact among them. Part II addresses some general problems
with reducing the influence of fake news. Part III surveys current regulatory
approaches while assessing which methods of constraint are best suited to deal with
particular species of fake news. We argue that applying single constraints in isolation to
solve fake news problems is often unwise and that propaganda—the most serious
threat from fake news—requires new thinking to solve. Part IV introduces a set of
model reforms that can ameliorate fake news problems, and evaluates the costs and
benefits each one poses.
[4] In this report, we use “fake news” as a general catchall term to describe a series of phenomena: hoaxes, satire, propaganda, and trolling. Our conclusions do not turn on our choice of terms for species of fake news. It makes no difference for our descriptions and prescriptions if “bias” is considered a species of “propaganda” or a stand-alone concept. Our hope is that people will not focus on the definitions but instead use our matrix as a tool to see how regulatory decisions will impact multiple categories of fake news.
[5] Olivia Solon, Facebook’s failure: did fake news and polarized politics get Trump elected?, THE GUARDIAN (Nov. 10, 2016), https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-election-conspiracy-theories.
[6] Callum Borchers, ‘Fake News’ Has Now Lost All of Its Meaning, WASH. POST (Feb. 9, 2017), https://www.washingtonpost.com/news/the-fix/wp/2017/02/09/fake-news-has-now-lost-all-meaning/?utm_term=.9a908273d3d8.
[7] Id.
PART I. A TYPOLOGY OF FAKE NEWS
A. Definitions
In this section, we enumerate the species of fake news while pointing out relevant
features that can be leveraged to encourage or discourage their production and
dissemination. Specifying different categories of fake news based on their content,
motivation, and intention supplies a useful framing strategy for discussions.
We define satire as a news story that has purposefully false[8] content, is financially motivated, and is not intended by its author to deceive readers. A paradigmatic example of satire is The Onion.[9] The Onion presents factually untrue stories as a vehicle for critiques or commentaries about society. For example, a recent article treats the issues of opioid addiction and prescription drug abuse, with the headline “OxyContin Maker Criticized For New ‘It Gets You High’ Campaign.”[10] Writers for The Onion do not seek to deceive readers into believing the story’s content. Scott Dikkers, founder of The Onion, expressed this point when he said that if anyone is fooled by an Onion piece, it is “by accident.”[11]

Typically, people who take Onion stories at face value have little experience with U.S. media norms. For example, Iranian state media reported as fact an Onion article claiming that Iranian President Mahmoud Ahmadinejad was more popular with rural U.S. voters than President Barack Obama.[12] When people take an Onion article as true, they often miss the underlying commentary, which is the raison d’être for the article.

A hoax is a news story that has purposefully false content, is financially motivated, and is intended by its author to deceive readers.
[8] “False” can refer to either the content of the story being untrue, such as in the humor publication The Onion, or the presentation of a true story that satirizes the delivery and performance of traditional news sources, such as on the cable television program The Colbert Report.
[9] See generally http://www.theonion.com.
[10] http://www.theonion.com/article/oxycontin-maker-criticized-new-it-gets-you-high-ca-56373 (July 10, 2017).
[11] Ben Hutchinson, ‘The Onion’ Founder: we do satire not fake news, WISN-TV (Feb. 15, 2017), http://www.wisn.com/article/the-onion-founder-we-do-satire-not-fake-news/8940879 (implying that writers at The Onion do not intend to deceive readers).
[12] Kevin Fallon, Fooled by ‘The Onion’: 9 Most Embarrassing Fails, THE DAILY BEAST (Nov. 27, 2012), http://www.thedailybeast.com/articles/2012/09/29/fooled-by-the-onion-8-most-embarrassing-fails.html.
Clear examples of hoaxes include the false stories created by Macedonian teenagers about Donald Trump to gain clicks, likes, shares, and finally profit. In a Buzzfeed report, these teenagers said “they don’t care about Donald Trump”; Buzzfeed characterized their fake news mills as merely “responding to straightforward economic incentives.”[13] These Eastern European teens do not have political or cultural motivations that drive the production of their fake news stories.[14] They are simply exploiting the economic structures of the digital media ecosystem to create intentionally deceptive news stories for financial reward.
Propaganda is news or information that has purposefully biased or false content, is motivated by an attempt to promote a political cause or point of view, and is intended by its author to deceive the reader.[15]

The controversy surrounding Hillary Clinton’s health leading up to the 2016 election is a recent example of propaganda.[16] The controversy started when a 2016 YouTube video was artfully edited to piece together the most disparaging images of Secretary Clinton coughing.[17] The story was reposted and amplified by people with a political agenda.[18] And the controversy reached critical mass when it appeared Clinton had fainted.[19] The story was not entirely fiction—Clinton in fact had pneumonia—but it was deceptively presented to propagate a narrative about her long-term health and influence political results.

[13] Craig Silverman and Lawrence Alexander, How Teens in the Balkans Are Duping Trump Supporters with Fake News, BUZZFEED (Nov. 3, 2016), https://www.buzzfeed.com/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trump-misinfo?utm_term=.yhjkjAaVk#.lkmk06Zvk.
[14] See Robyn Caplan, How do you deal with a problem like fake news?, POINTS (Jan. 5, 2017), https://points.datasociety.net/how-do-you-deal-with-a-problem-like-fake-news-80f9987988a9 (labeling sites built by Macedonian teens as a “black and white” case of fake news).
[15] Gilad Lotan, Fake News Is Not the Only Problem, POINTS (Nov. 22, 2016), https://points.datasociety.net/fake-news-is-not-the-problem-f00ec8cdfcb#.8r92obruo (offering a very similar definition of propaganda as “Biased information—misleading in nature, typically used to promote or publicize a particular political cause or point of view”).
[16] Id.
[17] Id.
[18] We can never be certain about what motivates behavior (discussed below), but it seems reasonable to suggest this was in large part politically motivated.
[19] Lotan, Fake News Is Not the Only Problem.
Trolling is presenting news or information that has biased or fake content, is motivated by an attempt to get personal humor value (the lulz),[20] and is intended by its author to deceive the reader.[21]

One example that captures the spirit of trolling is called Jenkem.[22] The term “Jenkem” first appeared in a BBC news article that described youth in Africa inhaling bottles of fermented human waste in search of a high.[23] At some point, “Jenkem” started appearing in Internet forums as a punchline or conversation stopper.[24] In the online forum Totse, a user called Pickwick uploaded pictures of himself inhaling fumes from a bottle labeled “Jenkem.”[25] The story made its way to 4chan--another online forum--where users posted the images and created a form template to send e-mails to school principals, with the goal of tricking them into thinking that a Jenkem epidemic was sweeping through their schools. The form letter was written to present the perspective of a concerned parent who wanted to remain anonymous to avoid incriminating her child, but also wanted to inform the principal about rampant Jenkem use among the student body. Members of 4chan forwarded the fake letter widely, and the story (or non-story) was eventually picked up by a sheriff’s department in Florida; later, several local Fox affiliates ran specials on the Jenkem epidemic.[26]
[20] See “Lulz,” OXFORD ENGLISH DICTIONARY ONLINE, https://en.oxforddictionaries.com/definition/lulz (defining term as fun, laughter, or amusement, especially when derived at another’s expense).
[21] The nature of the deception may vary. Some trolling authors do not intend to deceive readers about the story’s content, but to agitate readers through deception about the author’s own authenticity or beliefs.
[22] WHITNEY PHILLIPS, THIS IS WHY WE CAN’T HAVE NICE THINGS: MAPPING THE RELATIONSHIP BETWEEN ONLINE TROLLING AND POPULAR CULTURE 4 (2015).
[23] Id. at 5.
[24] Id.
[25] Id.
[26] When the story was picked up by the sheriff’s department, Pickwick distanced himself from it and admitted that the images were fake. Without Pickwick, users forwarded the letter--knowing it was false--in an attempt to deceive school administrators and create a false news story that they found humorous.
B. Typology[27]
This section provides a new way of organizing different types of fake news
according to their distinctive attributes. The two defining characteristics used to identify
species of fake news are (1) whether the author intends to deceive readers and (2)
whether the payoff from fake news is motivated by financial interests or not.
                     Intent: Deceive[28]                     Intent: Not Deceive
  Payoff:
  Financial          Hoax                                    Satire
                     (Example: Macedonian Teenagers)         (Example: The Onion)
  Not Financial      Propaganda                              Humor
                     (Example: Controversy Re: Hillary       (Example: Twitter parody
                     Clinton’s Health)                       accounts)[29]
                     Trolling (Lulz)
                     (Example: Jenkem Episode)
[27] The matrix is not intended to imply that deception and financial motivations are binary states; these can admit of degrees or exist on a spectrum. The next section details this more thoroughly.
[28] In the context of fake news, there are two distinct ways that someone can intend to deceive: (1) by presenting false information in a way designed to trick readers into thinking it is true (this is usually the case with hoax sites), or (2) by presenting stories in a deliberately misleading way, or by omitting context to manipulate readers into reaching conclusions that may not be justified by the full story (a form of deception usually indicative of propaganda).
[29] See, e.g., Plaid Vladimir Putin, https://twitter.com/Plaid_Putin; Donald J. Trump, https://twitter.com/realDonJTrumph; Bigfoot TheBigfoot, https://twitter.com/hellobigfoot.
These distinctions are useful for several
reasons. Isolating intent to deceive provides
a way to distinguish between types of fake
news along moral lines: intentionally
deceiving readers is blameworthy.
Identifying different characteristics of fake
news also offers a roadmap for which
solutions will address various types of fake
news.
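The matrix can also be read as a short decision procedure over its two features. The sketch below is ours and purely illustrative; the function name and its boolean inputs are hypothetical stand-ins for the intent and motive determinations that, as Part II explains, are difficult to make in practice.

    def classify_fake_news(intends_to_deceive, financially_motivated):
        # Restates the Part I.B matrix; the inputs assume the hard intent and
        # motive judgments discussed in Part II have already been made.
        if intends_to_deceive and financially_motivated:
            return "Hoax (e.g., the Macedonian teenagers' stories)"
        if intends_to_deceive and not financially_motivated:
            return "Propaganda or trolling (e.g., the Clinton health narrative; Jenkem)"
        if not intends_to_deceive and financially_motivated:
            return "Satire (e.g., The Onion)"
        return "Humor (e.g., Twitter parody accounts)"

    print(classify_fake_news(True, True))    # Hoax
    print(classify_fake_news(False, True))   # Satire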
PART II. PROBLEMS
In this Part, we analyze difficulties in making determinations about where a specific
instance of fake news falls on our matrix. Additionally, this discussion explores why
most fake news embodies characteristics of several species or—as others have
mentioned—exists in a gray area.[30]
A. Mixed Intent
Understanding the intentions that undergird a certain act is difficult, if not impossible. Most theories of intent conceptualize it as a private mental state that motivates action.[31] Because we cannot directly measure other people’s thoughts, understanding intentions is often left to guesswork or to proxies. The law recognizes this difficulty and in many cases distinguishes between subjective and objective intent. Subjective intent is the actual mental state of the person acting, as experienced by that actor.[32] This differs from objective intent, which considers the outward manifestations of intent and then determines how a reasonable person would understand the actor’s intentions based on them.

This difficulty has not been a total barrier for federal regulations that hinge on determinations about intent. Take, for instance, the Federal Food, Drug, and Cosmetic Act (FDCA), which brings products under the purview of the Food and Drug Administration (FDA) if they are intended to be used as food or drug products.[33]
[30] Caplan, How do you deal with a problem like fake news?.
[31] MODEL PENAL CODE § 2.02(2) (1962).
[32] Instances of subjective intent in the law include tort doctrine, where an act can result, or not result, in liability depending upon the actor’s subjective knowledge and goals. DAN DOBBS ET AL., THE LAW OF TORTS § 29 (2d ed. 2011).
[33] See Christopher Robertson, When Truth Cannot Be Presumed: The Regulation of Drug Promotion Under an Expanding First Amendment, 94 B.U. L. REV. 545, 547 (2014).
Similarly, a statute criminalizes possession of “a hollow piece of glass with a bowl on the end...only if it is intended to be used for illicit activities.”[34] And the Federal Aviation Administration (FAA) only regulates vehicles that are intended for flight.[35]

Although many federal regulations are structured around identifying intent, this is still a complication for determinations about fake news Web sites. For instance, Paul Horner—who has been dubbed[36] the impresario of fake news by the Washington Post—runs a website that publishes news stories that are untrue and uses a mark that closely resembles that of CNN.[37] Horner considers himself a satirist, and other commentators claim that the site is “clearly satire,”[38] yet the close similarity between the real CNN and Horner’s version often fools people into viewing the site as disseminating true information.[39]

In our matrix, the distinction between hoax and satire turns on whether the author intended to deceive the audience into thinking that the information is true. Making sound determinations about authorial intent is important because potential solutions should not sweep up satire in an attempt to filter out hoaxes.[40] In crafting solutions, regulators will likely have to decide between assessing the format and content of the article to estimate whether the author intended to deceive (objective intent) or inquiring into whether the author actually intends to deceive or not (subjective intent). Both involve challenging subjective decisions, though ones that are also trans-substantive (occurring across multiple areas of law). Such determinations about intent are fact-specific and complicated. Disclaimers about a site publishing false news stories are often buried in fine print at the bottom of the page, and some fake news stories reveal themselves to be fake in the article itself, which can be a problem in a media culture where many people do not read past the headlines.[41]

[34] 21 U.S.C. § 863 (2012) (defining “drug paraphernalia” as “any equipment…which is primarily intended or designed for…introducing into the body a controlled substance”) (emphasis added); see id. However, some commentators suggest that this regulatory scheme may unconstitutionally burden speech. See Jane R. Bambauer, Snake Oil, WASH. L. REV. (forthcoming 2017).
[35] 14 C.F.R. § 1.1 (2013) (emphasis added); see Robertson, id.
[36] Caitlin Dewey, Facebook fake-news writer: ‘I think Donald Trump is in the White House because of me’, WASH. POST (Nov. 17, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/?utm_term=.7df13ab99187.
[37] See cnn.com.de (Paul Horner’s Web site).
[38] Sophia A. McClennen, All “Fake News” Is Not Equal—But Smart or Dumb It All Grows from the Same Root, SALON (Dec. 11, 2016), http://www.salon.com/2016/12/11/all-fake-news-is-not-equal-but-smart-or-dumb-it-all-grows-from-the-same-root/.
[39] A Buzzfeed article characterized Paul Horner’s site as “meant to fool,” which would make it more representative of a hoax and not satire under our analysis. See Ishmael N. Daro, How A Prankster Convinced People The Amish Would Win Trump The Election, BUZZFEED (Oct. 28, 2016), https://www.buzzfeed.com/ishmaeldaro/paul-horner-amish-trump-vote-hoax.
[40] This assumes that most people find value in satirical news and want it preserved. We think this is uncontroversial.
The uncritical consumption of fake news divides responsibility among several actors: authors (who intend to deceive), platforms (that are optimized to promote superficial engagement by readers),[42] and, finally, readers themselves (who often do not engage with an article beyond the headlines). Although there is shared responsibility, it is futile to place a significant share of the burden to solve fake news on readers. Readers operate in digital media ecosystems that incentivize low-level engagement with news stories, and digital platforms are crucial tools for the circulation of intentionally deceptive species of fake news. Efforts to educate readers to become more sophisticated consumers of information are laudable but likely to have only marginal effects. Thus, solutions must center on platforms and authors because they will be more responsive to interventions than readers.
B. Mixed Motives
The problem of mixed motives involves two connected difficulties: one epistemic
and one administrative. The epistemic problem of mixed motives is similar to the
problem of deciphering intent in that it grows out of the inherent ambiguity of
interpreting a person’s actions. In short, the epistemic problem of mixed motives is that
people act for a variety of reasons: actions driven by different reasons can sometimes
produce the same results, so with access only to people’s actions (the results), it can be
difficult to comprehend the motivations behind them. This complicates classifications
based on motivations for acting.
[41] Leonid Bershidsky, Fake News is all about False Incentives, BLOOMBERG (Nov. 16, 2016), https://www.bloomberg.com/view/articles/2016-11-16/fake-news-is-all-about-false-incentives (describing how many people do not engage with stories beyond the headlines).
[42] Brett Frischmann & Evan Selinger, Why it’s dangerous to outsource our critical thinking to computers, THE GUARDIAN (Dec. 10, 2016), https://www.theguardian.com/technology/2016/dec/10/google-facebook-critical-thinking-computers (“The engineered environments of Facebook, Google, and the rest have increasingly discouraged us from engaging in an intellectually meaningful way. We, the masses, aren’t stupid or lazy when we believe fake news; we’re primed to continue believing what we’re led to believe.”).
The debunked Pizzagate story illustrates the problem of mixed motives. Users on 4chan and Reddit promulgated the theory—Pizzagate—that members of the Democratic Party leadership were involved in a child sex trafficking ring operating from a Washington, D.C. pizza restaurant.[43] One conspiracy theorist entered the restaurant armed with an assault rifle and a handgun, firing several rounds during a (fruitless) search for tunnels or hidden rooms that he believed were being used in child trafficking.[44] Assessing the Pizzagate events, Caroline Jack shows that people participate in online discussions for a wide variety of reasons; participation in Pizzagate could have been motivated by genuine concern, play, boredom, politics, or any combination of those.[45]

The administrative problem of mixed motives is that because any single instance of fake news may have several motivating factors, interventions that target a single motivating factor—so that only paradigmatic cases of propaganda or a hoax are within their scope—may be unsuccessful.[46] For example, a person could produce a fake news story that was motivated by both financial considerations and political ones. Even if financial motivations were the primary purpose for creating the story, the story might have been produced without the financial incentives if the political reasons were sufficient on their own. Accordingly, an intervention targeted at pecuniary motives may not be enough. The problem of multiple sufficient motives shows that although regulating motives may be a tempting starting point, it is likely an insufficient fix on its own.
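The problem of multiple sufficient motives can be restated as a toy payoff model. The sketch below is ours alone and the numbers are hypothetical; it only illustrates why an intervention that removes the financial payoff changes the calculus for a purely profit-driven hoax but not for a story whose political payoff is sufficient on its own.

    def will_publish(financial_payoff, nonfinancial_payoff, cost):
        # A story gets produced when the combined payoff exceeds the cost of producing it.
        return financial_payoff + nonfinancial_payoff > cost

    # Hypothetical numbers: a purely profit-driven hoax versus politically driven propaganda.
    hoax = {"financial_payoff": 100.0, "nonfinancial_payoff": 0.0, "cost": 10.0}
    propaganda = {"financial_payoff": 0.0, "nonfinancial_payoff": 100.0, "cost": 10.0}

    for label, story in [("hoax", hoax), ("propaganda", propaganda)]:
        demonetized = dict(story, financial_payoff=0.0)
        print(label, will_publish(**story), "->", will_publish(**demonetized))
    # hoax: True -> False; propaganda: True -> True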
C. Mixed Information (Fact and Fiction)
The problem of mixed information is that true and false information coexist in fake news narratives and on news platforms. Consider the propaganda narrative about Hillary Clinton’s health during her 2016 presidential campaign. The narrative mixed fact and fiction in a way that made it both hard to check facts and, by extension, difficult to debunk the claim that Clinton had serious long-term health issues that made her unfit to be President. It was true that Hillary Clinton had a health issue: she was battling pneumonia. It was false, however, that she had serious long-term health issues that affected her fitness for the presidency. In particular, propaganda mixes fact and fiction to create narratives that have staying power because some of the narrative elements are true, yet the story is presented in a way that is misleading and not true.

[43] Caroline Jack, What’s Propaganda Got To Do With It?, POINTS (Jan. 5, 2017), https://points.datasociety.net/whats-propaganda-got-to-do-with-it-5b88d78c3282#.uj7xfxed0.
[44] Marc Fisher, John Woodrow Cox, & Peter Hermann, Pizzagate: From rumor, to hashtag, to gunfire in D.C., WASH. POST (Dec. 6, 2016), https://www.washingtonpost.com/local/pizzagate-from-rumor-to-hashtag-to-gunfire-in-dc/2016/12/06/4c7def50-bbd4-11e6-94ac-3d324840106c_story.html.
[45] Jack, What’s Propaganda Got To Do With It?
[46] An example of this would be hoaxes that are exclusively based on financial motivations (for example, those of the Macedonian teenagers).
Another location for mixing fact and fiction is on platforms themselves, which may have propaganda interwoven with one-sided news reports. A single resource may display or blend truth and lies side by side. One example of this phenomenon is the Web site Breitbart, which, according to Ethan Zuckerman, “mix[es] propaganda and conspiracy theories with highly partisan news.”[47] Breitbart and other similar platforms convincingly blend propaganda with partisan (yet largely true) news stories. Mixed information on platforms makes it difficult to discern which stories are partisan interpretations of actual events and which narratives have moved beyond reflecting actual events to promote false or misleading accounts.
PART III. SOLUTIONS

In this Part, we identify four ways to constrain behavior and assess which, if any, are good choices for stemming fake news. One category of fake news—hoaxes—responds particularly well to market-based constraints. However, as recent research has suggested, this species of fake news may have minimal impact on the media ecosystem relative to other species; it is significantly less influential than propaganda.[48] In sketching the different modes of constraining behavior, we assess recent attempts to leverage these techniques to stem the tide of fake news. We highlight why propaganda—arguably the biggest problem emanating from fake news—seems to elude all of these methods.
[47] Ethan Zuckerman, The Case for a Taxpayer-Supported Version of Facebook, THE ATLANTIC (May 7, 2017), https://www.theatlantic.com/technology/archive/2017/05/the-case-for-a-taxpayer-supported-version-of-facebook/524037/.
[48] Yochai Benkler et al., Study: Breitbart-led Media Ecosystem Altered Broader Media Agenda, COLUMBIA JOURNALISM REVIEW (Mar. 3, 2017), http://www.cjr.org/analysis/breitbart-media-trump-harvard-study.php.
Larry Lessig identified four modes that constrain behavior: law (state-sponsored sanctions), markets (price mechanisms), architecture (such as code), and norms (community standards).[49] We assess the capabilities of each method to counter fake news.
A. Law
Law operates through the threat of sanctions from the state.[50] One reason that some commentators disfavor state solutions is that they are monopolistic and mandatory.[51] On this account, state solutions are undesirable because they do not leave room to experiment with different mechanisms to solve a problem; however, this criticism is largely true for private solutions by Internet platforms as well.[52] With high switching costs due to network effects, Facebook, Google, and other similarly situated platforms can implement private ordering that is subject to similar criticisms about the monopolistic effects of regulation.[53]

A more trenchant criticism of a legal approach to fake news is that speech regulations backed by state enforcement are likely to run afoul of the First Amendment. Although there are specific carve-outs for speech that is not subject to First Amendment protection, criminal and civil lawsuits under these causes of action are likely to have only a minor effect on the robust fake news ecosystem.[54]
[49] Lawrence Lessig, The New Chicago School, 27 J. LEGAL STUD. 661 (1998); see also LAWRENCE LESSIG, CODE 2.0 (2006).
[50] Robert Cover, Violence and the Word, 95 YALE L.J. 1601 (1985).
[51] See, e.g., Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. CHI. LEGAL F. 207, 215-16 (arguing “Error in legislation is common, and never more so than when the technology is galloping forward. Let us not struggle to match an imperfect legal system to an evolving world that we understand poorly. Let us instead do what is essential to permit the participants in this evolving world to make their own decisions.”).
[52] This reasoning is reflected in the idea that states should be “laboratories for democracy,” where solutions to social issues can be vetted and the best ones identified. See New State Ice Co. v. Liebmann, 285 U.S. 262, 311 (1932) (Brandeis, J., dissenting) (arguing that “a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country”).
[53] Barbara Engels, Data Portability among Platforms, 5 INTERNET POLICY REVIEW (2016), https://policyreview.info/articles/analysis/data-portability-among-online-platforms (discussing the “lock-in effect” that makes switching costs high when personal data is not portable across platforms).
[54] See U.S. v. Stevens, 559 U.S. 460, 468-69 (2010) (listing categories of speech that can be regulated without triggering First Amendment scrutiny).
One example of speech that is specifically removed from First Amendment protection is defamation. Defamation—making false statements about another that damage their reputation—is not protected, and, on the surface, seems like it could be effectively applied as a cause of action to remedy fake news.[55] However, this may not be effective in clearing up fake news that references public figures. In order for a public figure to succeed in a defamation claim, the person must prove that the writer or publisher acted with actual malice (knowledge of the falsity of the information, or reckless disregard as to falsity), which is exceptionally difficult.[56] Even private figures must establish some fault on the part of the author, distributor, or publisher, even if only negligence in assessing whether information is false.[57]

Beyond the standard speech-based causes of action, a few commentators have suggested new legal tools to combat fake news. MSNBC’s chief legal correspondent has proposed that the Federal Trade Commission (FTC) regulate fake news under its statutory authority,[58] which allows the FTC to police “unfair or deceptive acts or practices in or affecting commerce.”[59] For the FTC to gain a solid basis for regulation, it would have to make the difficult argument that fake news is a commercial product even though people are often not paying to read it.[60] David Vladeck, a former director of the FTC’s Bureau of Consumer Protection, says that it is unlikely that the FTC could make compelling arguments about the commercial nature of fake news, even in paradigmatic cases like the hoaxes perpetuated by Macedonian teenagers for financial gain.[61]

A second solution, offered by Noah Feldman, attempts to build on the defamation exception to First Amendment protection.[62] Under this scheme, Congress would create a private right to delist libelous statements from the Internet.[63]

[55] What Legal Recourse Do Victims of Fake News Stories Have?, NPR (Dec. 7, 2016), http://www.npr.org/2016/12/07/504723649/what-legal-recourse-do-victims-of-fake-news-stories-have.
[56] N.Y. Times v. Sullivan, 376 U.S. 254 (1964).
[57] See Gertz v. Robert Welch, Inc., 418 U.S. 323, 347 (1974) (holding “so long as they do not impose liability without fault, the States may define for themselves the appropriate standard of liability for a publisher or broadcaster of defamatory falsehood injurious to a private individual”).
[58] Callum Borchers, How the Federal Trade Commission could (maybe) crack down on fake news, WASH. POST (Jan. 30, 2017), https://www.washingtonpost.com/news/the-fix/wp/2017/01/30/how-the-federal-trade-commission-could-maybe-crack-down-on-fake-news/?utm_term=.ce40f260d732.
[59] 15 U.S.C. § 45(a)(1).
[60] Borchers, How the Federal Trade Commission could (maybe) crack down on fake news.
[61] Id.
[62] Noah Feldman, Closing the Safe Harbor for Libelous Fake News, BLOOMBERG VIEW (Dec. 16, 2016), https://www.bloomberg.com/view/articles/2016-12-16/free-speech-libel-and-the-truth-after-pizzagate.
[63] Id.
To protect against people abusing this removal power, the regime would require that parties adjudicate whether the statements were false and defamatory and then have the court direct a removal order to search engines or other Internet platforms.[64]
There are reasons to think that this solution may overly threaten speech that deserves protection. First, as Feldman notes, this would require changing existing laws that insulate Internet publishers from liability arising from hosting the speech of others.[65] Laws that protect intermediaries from liability promote free exchange and robust public debate on the Internet.[66] The specter of fake news, although a real threat, is not severe enough to merit stripping protections from Internet intermediaries. If anything, removing shields from liability may be a bigger threat to democratic debate than fake news itself.[67]

Second, even if Congress stripped liability protections from speech aggregators, hosts of third-party speech still have their own First Amendment rights that cannot be abridged based on a “trial-like hearing” where they are not involved.[68] Confining judicial proceedings to the allegedly defamed party and the original speaker improperly curtails the First Amendment rights of content hosts, who—like publishers of traditional media—are entitled to seek to vindicate their rights before having a court direct removal orders at their platform.[69]
[64] Id.
[65] The strongest statutory shield from liability for Internet intermediaries is 47 U.S.C. § 230 (1996) (section 230 of the Communications Decency Act), which insulates publishers and distributors from most civil liability for hosting third-party content.
[66] See Derek E. Bambauer, Against Jawboning, 100 MINN. L. REV. 51 (2015).
[67] Revenge porn—and some varieties of cyber harassment—are cases where the threat may be severe enough to consider imposing liability on parties that are hosts of third party content. Even then, liability should be framed as narrowly as possible and not, for example, extend to Google for listing links to revenge porn websites. See Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 WAKE FOREST L. REV. 345 (2014); but see Derek E. Bambauer, Exposed, 98 MINN. L. REV. 2025 (2014).
[68] Under the prevailing view, search results are protected by the First Amendment. See Eugene Volokh & Donald M. Falk, First Amendment Protection for Search Engine Results—A White Paper Commissioned by Google (2012), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2055364; Jane R. Bambauer, Is Data Speech?, 66 STAN. L. REV. 57, 60 (2014); Derek E. Bambauer, Copyright = Speech, 65 EMORY L.J. 199 (2015); but see Tim Wu, Machine Speech, 161 U. PA. L. REV. 1495, 1496-98 (2013); Oren Bracha & Frank Pasquale, Federal Search Commission? Access, Fairness, and Accountability in the Law of Search, 93 CORNELL L. REV. 1149 (2008); James Grimmelmann, Speech Engines, 98 MINN. L. REV. 868 (2014); Heather M. Whitney & Mark Robert Simpson, Search Engines, Free Speech Coverage, and the Limits of Analogical Reasoning, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2928172 (arguing that not all search engine results should be constitutionally protected).
[69] Brief of Amici Curiae First Amendment and Internet Law Scholars in Support of Appellant, Yelp, Inc., Hassell v. Bird, No. S235968 (Cal. 2017), available at http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?filename=3&article=2463&context=historical&type=additional (claiming that the court abridged Yelp’s First Amendment rights by ordering it to remove content without first providing Yelp an opportunity to vindicate its rights in court).
To sum up, legal solutions are likely to be over-inclusive and threaten flourishing,
robust public debate on the Internet to a greater degree than fake news imperils it.
Even if legal solutions seemed like an effective tool to combat fake news, administering new legal remedies would be difficult given the strength of constitutionally guaranteed
speech protections. Finally, propaganda relies on mixing truth and falsehood to
promote a narrative; it is unlikely that legal solutions, which rely on the ability to prove
statements are untrue, will be effective in restraining the production and dissemination
of propaganda.
B. Markets
Markets regulate through changes in price that, in turn, determine which activities and goals people pursue. Market-based solutions can occur naturally as the result of changes in supply or demand, or they can be intentionally created when governments intervene in markets to promote or discourage certain economic activity through subsidies or taxes.[70] The underlying logic (or driving mechanism) of regulation through markets is that people respond to financial incentives.
In the wake of the 2016 U.S. presidential election, Google announced that it would ban Web sites that publish fake news articles from using its advertising platform.[71] Google’s decision involved AdSense, which allows Web sites to profit from third-party ads hosted on their sites.[72] Google’s decision to restrict access to AdSense undercut the funding model that many fake news sites leverage to make a profit.[73] By removing some financial incentives for fake news, Google sought to decrease the number of fake news Web sites.[74]
[70] It is worth noting that government intervention in markets through subsidies and, especially, taxation has some relevant characteristics of legal regulation, including the threat of sanctions for unpaid taxes. See United States v. American Library Association, 539 U.S. 194 (2003) (upholding a statute that required libraries receiving a federal discount for Internet access to install adult content filters on computers).
[71] Nick Wingfield et al., Google and Facebook Take Aim at Fake News Sites, N.Y. TIMES (Nov. 15, 2016), https://www.nytimes.com/2016/11/15/technology/google-will-ban-websites-that-host-fake-news-from-using-its-ad-service.html.
[72] Id.
[73] Id.
[74] Google’s decision to remove the funding apparatus is not wholly a market-based solution. By all accounts, Google’s decision was motivated by an attempt to promote good digital citizenship. Google appears to be responding also to norms about how we want our platforms to operate, or at least, Google was responding partly to non-market forces. Like motivations and intentions, solutions can be mixed, which further complicates the discussion.
Google’s decision to restrict the use of AdSense to exclude sites it deems fake news—as an instance of regulation through markets—is likely to be both over-inclusive and under-inclusive. First, as discussed in the section on mixed intent, determinations at the edges between hoaxes and satire are complicated. Many commentators disagree about where satire ends and hoaxes begin. Paul Horner—discussed in that section—has been accused of perpetuating hoaxes, while others see his site as satire. It is likely that Google’s restriction will sweep too broadly in at least some cases and chill the production of satire, at least in the gray areas between the categories. The worry is that short-term pressure will result in over-inclusive solutions that extend to speech that deserves protection.

At the same time, Google’s market-based solution is likely to be under-inclusive because it does not reach the incentives that power trolling and propaganda. In our matrix, we illustrated how propaganda and trolling[75] are strongly motivated by non-financial incentives. This makes market solutions ineffective at combatting these two species. Restrictions on AdSense use will only curtail fake news production that does not have non-financial motivations that are sufficient for its production, such as the wholly economically motivated hoaxes by Macedonian teenagers.
C. Architecture / Code
Architecture (code, in the Internet context) constrains through the physical (or digital) realities of the environment. This includes both built and found features of the world. “That I cannot see through walls is a constraint on my ability to snoop. That I cannot read your mind is a constraint on my ability to know whether you are telling me the truth.”[76] Here, Larry Lessig provides examples of built (walls) and found (laws of nature) realities that regulate our actions.

Under Lessig’s view, the contingency of the digital environment can either promote or obstruct certain values. Because code is always built and never found, it provides us with an opportunity to structure an environment that promotes certain values (such as privacy, free expression, etc.). Similarly, because the digital environment is subject to change, corporate or national interests could co-opt its workings to suppress or alter these values.
[75] The question of whether to regulate trolling and propaganda is a separate issue. Trolling may have defenders, but propaganda seems--almost by definition--like something we want to reduce.
[76] LAWRENCE LESSIG, CODE AND OTHER LAWS OF CYBERSPACE 663 (1998).
Thus, the technological determinism thesis of the Internet—that it must promote these positive values—is both untrue and dangerous, because it lulls digital communities into believing that the capacity for free expression is an inherent feature of the Internet.[77]
The structure of Facebook’s Trending Topics section demonstrates how behavior can be constrained through architecture. With limited space in the section, selection mechanisms that promote certain stories at the expense of others play a significant role in determining what gets read and shared in Facebook’s digital environment. Included stories are likely to receive more attention than excluded ones. Facebook determines the “rules of the game” by which stories are selected to appear in Trending Topics, and Facebook’s use of both human and algorithmic selection mechanisms is contentious.[78]

When only humans determined which news stories were appropriate for inclusion, there were concerns about bias. For example, a Gizmodo report alleged that Facebook’s curators frequently suppressed politically conservative perspectives.[79] In response, the U.S. Senate Commerce Committee launched an inquiry—spearheaded by Republican Senator John Thune—into Facebook’s processes, including whether conservative stories were intentionally suppressed or more liberal stories were intentionally added into the section.[80]

Partially in response to these concerns about bias, Facebook altered the selection process for Trending Topics to be more automated and require fewer human decisions.[81] However, with the reduced role of human editors, hoaxes on Facebook flourished.[82] A fake news story that anchor Megyn Kelly was fired from Fox News because she supported Hillary Clinton went viral, as did many other instances of fake news.[83] Facebook’s architecture is optimized for stories that are likely to produce clicks and shares.[84] Fake news is likely to cause users to distribute its content, often by confirming biases, which in turn makes it proliferate through Facebook’s news ecosystem.[85]

[77] The contingency of free expression on the Internet is much more apparent now than it was when Lessig first published Code in 1998. See EVGENY MOROZOV, THE NET DELUSION (2012).
[78] Nick Hopkins, Revealed: Facebook’s internal rulebook on sex, terrorism and violence, THE GUARDIAN (May 21, 2017), https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence.
[79] Michael Nunez, Senate GOP Launches Inquiry into Facebook’s News Curation, GIZMODO (May 10, 2016), http://gizmodo.com/senate-gop-launches-inquiry-into-facebook-s-news-curati-1775767018.
[80] Id.
[81] Id.; see also Facebook, Search FYI: An Update to Trending (Aug. 16, 2016), https://newsroom.fb.com/news/2016/08/search-fyi-an-update-to-trending/.
[82] See Caplan, How do you deal with a problem like fake news?.
[83] Id.
Distinguishing between satire and more pernicious forms of fake news requires human judgment (at least with the current state of algorithmic selection). Architecture alone is not up to the task of providing useful distinctions between satire and hoax, nor is it an effective remedy for propaganda. If anything, the current architecture of social networking platforms favors the spread of fake news instead of limiting it. This is because Facebook and other social networking sites optimize their algorithms to display stories that users are likely to share.[86] Fake news stories are often popular, in part by being inflammatory or catering to pre-existing viewpoints.[87] When this happens, users are likely to share the fake news story within their networks.
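The dynamic can be illustrated with a deliberately simplified ranking sketch. This is our own toy example, not a description of any platform’s actual code; the engagement weights are arbitrary, and the point is only that a score built solely from predicted clicks and shares never consults accuracy.

    def rank_feed(stories):
        # Order purely by predicted engagement; accuracy never enters the score.
        return sorted(
            stories,
            key=lambda s: s["predicted_clicks"] + 2 * s["predicted_shares"],
            reverse=True,
        )

    stories = [
        {"headline": "Routine budget hearing", "predicted_clicks": 5, "predicted_shares": 1, "accurate": True},
        {"headline": "Shocking (false) celebrity firing", "predicted_clicks": 40, "predicted_shares": 30, "accurate": False},
    ]
    print([s["headline"] for s in rank_feed(stories)])  # the false story ranks first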
D. Norms
Social norms constrain behavior by pressuring individuals to conform to certain standards and practices of conduct.[88] They structure how we communicate with each other and seem to be a useful starting point for informal regulation of fake news. For instance, Seana Shiffrin advocates for a norm of sincerity to govern our speech with others. Interestingly, Shiffrin claims that this “duty of sincerity” arises from the opacity of other people’s minds and our moral need to understand each other.[89] This maps nicely to the problems that plague the classification of fake news—mainly, that mental content is private. This analytical similarity makes inculcating norms of sincerity a good starting point for stemming fake news that we find harmful; however, it has complications of its own.

First, norms arise organically and are usually not the result of design and planning.[90] Unlike legal rules, it is hard, and maybe impossible, to summon them out of nothing. It is one thing to say that we ought to have certain norms and quite another to bring the desired norms into practice.[91] This is a practical limitation on implementing norms to govern behavior.

[84] See Frischmann & Selinger, Why it’s dangerous to outsource our critical thinking to computers.
[85] Id.
[86] See Brett Frischmann & Mark Verstraete, We need our platforms to put people and democratic society ahead of cheap profits, RECODE (June 16, 2017), https://www.recode.net/2017/6/16/15763388/facebook-fake-news-propaganda-federated-social-network-bbc-trus-surveillance-capitalism.
[87] Id.
[88] ROBERT ELLICKSON, ORDER WITHOUT LAW (1991); see also Lisa Bernstein, Opting Out of the Legal System, 21 J. LEGAL STUD. 1 (1992).
[89] SEANA SHIFFRIN, SPEECH MATTERS: ON LYING, MORALITY, AND THE LAW 184 (2014).
[90] Cristina Bicchieri & Ryan Muldoon, Social Norms, in STANFORD ENCYCLOPEDIA OF PHILOSOPHY (Mar. 1, 2011), https://plato.stanford.edu/entries/social-norms/.
Second, norms are often nebulous and diverse. When it comes to limitations on speech, the conventional wisdom—and what is constitutionally required when the government regulates speech—is to tie the regulation to a concrete harm as closely as possible.[92] The fear is that regulation will intrude on fundamental values and chill free expression. Similarly, because norms are nebulous, a norm of sincerity would likely pick out all of our species of fake news (even The Onion, which is the paradigmatic case of satire and thus worthy of protection). Finally, as some commentators have noted, norms may be harder to enforce online.[93]
PART IV. A WAY FORWARD
Fake news is a complex phenomenon that resists simple or quick solutions. Any
intervention must strike a delicate balance by offering a sufficiently robust response to
fake news while also not causing more harm than the inaccurate information does. In
this Part, we offer potential models for such interventions, while acknowledging that
each proposal is likely to solve only a segment of the problem. Rather than endorsing
any of these models—or even suggesting that they be adopted as a package—we intend
the proposals to generate debate and dialogue about how solutions ought to be
structured and about the trade-offs they will produce. We organize these model
interventions based on Lessig’s four modalities, as we did earlier in categorizing fake
news.
A. Law
Legal interventions for fake news are limited by law itself in two ways: as a matter of First Amendment doctrine, and as a matter of federal statute. Liability for creating or distributing fake news is constrained by the Constitution – political speech is at the heart of First Amendment protection,[94] and the Supreme Court has recently applied more searching scrutiny, as a practical matter, to commercial speech as well.[95] Even openly false[96] political content is heavily protected. Similarly, federal statutes such as Section 230 of the Communications Decency Act[97] and Title II of the Digital Millennium Copyright Act[98] limit liability for publishers and distributors (though not authors) of tortious or copyright-infringing material. Moreover, augmenting liability for fake news is not likely to be effective. Platforms face a daunting task in policing the flood of information posted to their servers each day,[99] and a sizable judgment can be fatal to a site.[100] Most authors are judgment-proof—unable to pay damages in any meaningful amount—and may be difficult to identify or be beyond the reach of U.S. courts. Overall, there is a consensus in the United States that the Internet information ecosystem is best served by limiting liability, not increasing it.[101]

[91] This challenge was central to the difficulties of combating copyright infringement over peer-to-peer networks. See Yuval Feldman & Janice Nadler, The Law and Norms of File Sharing, 43 SAN DIEGO L. REV. 577 (2006).
[92] This is the structure of strict scrutiny analysis for speech. See Brown v. Entm’t Merchants Ass’n, 564 U.S. 786, 799 (2011) (noting when a law “imposes a restriction on the content of protected speech, it is invalid unless [the government] can demonstrate that it passes strict scrutiny—that is, unless it is justified by a compelling government interest and is narrowly drawn to serve that interest”).
[93] Jessa Lingel & danah boyd, “Keep it Secret, Keep it Safe”: Information Poverty, Information Norms, and Stigma, 64 J. AM. SOC’Y INFO. SCI. & TECH. 981 (2013).
[94] See U.S. v. Alvarez, 567 U.S. __ (2012).
However, this consensus does highlight one useful change that law could make to
combat fake news. The immunity conferred under Section 230 was intended to create
incentives for intermediaries to police problematic content on their platforms, without
fear of triggering liability for performing this gatekeeping function.
102
In recent years,
though, a series of decisions has chipped away at Section 230’s immunity, creating
both risk and uncertainty for platforms.
103
Statutory reform could fill the cracks in
Section 230 immunity, reducing both risk and cost for platforms. As cases such as the
lawsuits against the Web sites Ripoff Report104 and Yelp!105 show, Internet firms may
95
See, e.g., Sorrell v. IMS Health, 564 U.S. 552 (2011) (prescription information); Matal v. Tam, 582 U.S. __
(2017) (trademarks); Expressions Hair Design v. Schneiderman, 581 U.S. __ (2017) (credit card surcharge
statements); see generally Jane R. Bambauer & Derek E. Bambauer, Information Libertarianism, 105 CAL. L. REV.
335 (2017).
96
See Sullivan, 376 U.S. 254; Alvarez, 567 U.S. __.
97
47 U.S.C. § 230.
98
17 U.S.C. § 512.
99
See generally H. Brian Holland, In Defense of Online Intermediary Immunity: Facilitating Communities of Modified
Exceptionalism, 56 KANSAS L. REV. 369 (2008); David S. Ardia, Free Speech Savior or Shield for Scoundrels: An
Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act, 43 LOYOLA L.A. L.
REV. 373 (2010).
100
See Sydney Ember, Gawker, Filing for Bankruptcy After Hulk Hogan Suit, Is for Sale, N.Y. TIMES (June 10,
2016), https://www.nytimes.com/2016/06/11/business/media/gawker-bankruptcy-sale.html.
101
See generally Eric Goldman, Online User Account Termination and 47 U.S.C. § 230(c)(2), 2 U.C. IRVINE L. REV.
659 (2012). There may be harms that justify curtailing Section 230’s immunity from liability, but fake news
does not yet rise to that level. See generally Danielle Citron, Revenge Porn and the Uphill Battle to Pierce Section 230
Immunity (Part II), CONCURRING OPINIONS (Jan. 25, 2013),
https://concurringopinions.com/archives/2013/01/revenge-porn-and-the-uphill-battle-to-pierce-section-
230-immunity-part-ii.html.
102
See Zeran v. Am. Online, 129 F.3d 327 (4th Cir. 1997).
103
See Eric Goldman, Ten Worst Section 230 Rulings of 2016 (Plus the Five Best), TECH. & MKTG. L. BLOG (Jan. 4,
2017), http://blog.ericgoldman.org/archives/2017/01/ten-worst-section-230-rulings-of-2016-plus-the-five-
best.htm; Eric Goldman, The Regulation of Reputational Information, in THE NEXT DIGITAL DECADE: ESSAYS ON
THE FUTURE OF THE INTERNET 293 (Berin Szoka & Adam Marcus, eds., 2010).
104
See Vision Security v. Xcentric Ventures, No. 2:13-cv-00926-CW-BCW (D. Utah Aug. 27, 2015), available at
http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=2036&context=historical.
face legal risks from hosting both truthful and allegedly false information. Increased
immunity would enable platforms to filter information with confidence that their
decisions would not open them up to lawsuits and damages.
In particular, Congress could consider three specific textual changes to Section 230.
The first would change Section 230(e)(3), to read: “No
cause of action may be brought, and no liability may be
imposed, under any state or local law that is inconsistent
with this section. A court shall dismiss any such cause of action or
suit with prejudice when it is filed, or upon motion of any party to
such cause of action or suit.”
106
This would authorize—and
indeed require—courts to dismiss lawsuits that run counter to Section 230 immunity on
their own authority, without requiring defendants to answer a complaint or incur
litigation costs. In addition, the change emphasizes that the focus is on laws that are
inconsistent with Section 230, rather than implicitly encouraging courts to search for ways
of making them consistent.
Second, Congress could reduce the ability to bypass Section 230 immunity through
exploiting the exception for intellectual property (IP) claims. It is easy for creative
plaintiffs’ attorneys to re-characterize tort causes of action—which should be pre-
empted by Section 230 immunity—as intellectual property ones, which are not pre-
empted in most circuits.
107
For example, a defamation claim can be readily re-cast as
one for infringement of the plaintiff’s right of publicity; in most states, the right of
publicity is treated as an intellectual property right that protects against the use of one’s
name or likeness for commercial or financial gain.
108
Congress could change Section
230(e)(2) to allow only suits based on federal intellectual property laws to circumvent
immunity, by altering the text to read: “Nothing in this section shall be construed to
limit or expand any law pertaining to federal intellectual property” (change italicized).
While the proposed change does not completely foreclose creative pleading, it reduces
its scope by removing claims based in state law.
105
See Tim Cushing, California Appeals Court Reaffirms Section 230 Protections In Lawsuit Against Yelp For Third-
Party Postings, TECHDIRT (July 19, 2016),
https://www.techdirt.com/articles/20160716/14115134996/california-appeals-court-reaffirms-section-230-
protections-lawsuit-against-yelp-third-party-postings.shtml.
106
The italics indicate added text. The change would also delete the first sentence of § 230(e)(3), and add two
commas to what is currently the second sentence.
107
Compare Perfect 10 v. CCBill, 488 F.3d 1102 (9th Cir. 2007) (pre-empting state IP claims under Section 230)
with Universal Communications Sys. v. Lycos, Inc., 478 F.3d 413 (1st Cir. 2007) (permitting state IP claims
under Section 230).
108
See, e.g., CAL. CIV. CODE § 3344, available at http://codes.findlaw.com/ca/civil-code/civ-sect-3344.html.
Finally, Congress could reverse the most pliable and pernicious exception to
Section 230 immunity, where courts hold defendants liable for being “responsible, in
whole or in part, for the creation or development of information.”
109
Courts have used
the concept of being partly responsible for the creation or development of information
to hold platforms liable for activities such as structuring the entry of user-generated
information110 or even focusing on a particular type of information111. Logically, a
platform is always partly responsible for the creation or development of information –
it provides the forum by which content is generated and disseminated. And, platforms
inherently make decisions to prioritize certain content, and to create incentives to
spread it across the network, such as where Facebook’s algorithms accentuate
information that is likely to produce user engagement. If that activity vitiated Section
230 immunity, though, it would wipe out the statute. A strong version of statutory
reform would change Section 230(f)(3) to read: “The term ‘information content
provider’ means the person or entity that is wholly responsible for the creation or
development of information provided through the Internet or any other interactive
computer service” (change italicized). If this alteration seems to risk allowing the actual
authors or creators of fake news to escape liability by arguing they were not entirely
responsible for its generation, Congress could adopt a more limited reform by changing
the statutory text to read: “The term ‘information content provider’ means any person
or entity that is chiefly responsible for the creation or development of information
provided through the Internet or any other interactive computer service” (change
italicized). This would assign liability only to the entity most responsible for the
generation of the information at issue.
These proposed reforms to Section 230 immunity would harness law to reduce legal
liability for Internet platforms and to encourage intermediaries to filter fake news
without risk of lawsuits or damages.
B. Markets
Market-based solutions provide an appealing starting point for managing fake news.
One species of fake news, hoaxes, responds particularly well to altering the
economic structure that drives its production. Many creators of hoaxes are driven
mainly (or solely) by the potential profit that these fake news stories can provide.
Because of this, interventions that change the profitability of fake news should result in
the production of fewer hoaxes.
109
47 U.S.C. § 230(f)(3) (emphasis added).
110
See, e.g., Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157 (9th Cir. 2008).
111
See, e.g., NPS LLC v. StubHub, 2006 WL 5377226 (Mass. Sup. Ct. 2006).
However, only addressing the economic incentives that attend the creation of
hoaxes is an incomplete reaction. First, other types of fake news are not as responsive
to economic incentives. For instance, propaganda is driven primarily by non-financial
motivations, so solutions that only change pecuniary incentive structures are unlikely to
alter the production of propaganda. Second, authors are not the only entities motivated
by economic factors to produce fake news – platforms are also optimized to spread
fake news for financial gain. Addressing the economic incentives of social media
platforms requires different market interventions than those directed towards creators.
Some fake news may be a symptom of surveillance capitalism, the economic model
underlying many Internet platforms that monetizes collecting data and using it to
effectively serve advertisements.
112
In this sense, fake news—and other stories that play
to our cognitive biases to harvest clicks—is key to Facebook’s business model
because this information increases user activity, which, in turn, allows Facebook to
more effectively tailor its advertisements. Understanding fake news as a symptom of
these deeper structural issues requires that solutions introduce an entirely new incentive
structure to digital platforms.
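
A simple sketch helps make this incentive problem concrete. In the illustrative Python example below, a feed ranked solely by predicted engagement surfaces the sensational, false item first, while a ranking objective that also weighs credibility does not. The stories, scores, and weighting are hypothetical assumptions of our own, not any platform’s actual algorithm.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    predicted_clicks: float    # hypothetical output of an engagement model
    predicted_accuracy: float  # hypothetical output of a credibility model

def rank_by_engagement(stories):
    # Ad-funded incentive: surface whatever maximizes expected clicks.
    return sorted(stories, key=lambda s: s.predicted_clicks, reverse=True)

def rank_with_credibility(stories, weight=0.5):
    # Alternative incentive structure: trade clicks off against credibility.
    return sorted(
        stories,
        key=lambda s: (1 - weight) * s.predicted_clicks + weight * s.predicted_accuracy,
        reverse=True,
    )

feed = [
    Story("Shocking miracle cure doctors won't tell you about", 0.9, 0.1),
    Story("City council adopts annual budget", 0.3, 0.9),
]
print([s.headline for s in rank_by_engagement(feed)])     # sensational item first
print([s.headline for s in rank_with_credibility(feed)])  # credible item first
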
Recognition of the economic incentives that underlie
proprietary social networking sites has spurred other
attempts to create non-market alternatives. Federated
social networks such as diaspora* were introduced as an
alternative to Facebook and other proprietary
platforms.
113
These social networking arrangements
offered the possibility of protecting user privacy because their business model did not
require widespread collection of user data. Similarly, social networks that do not rely on
collecting user data would potentially limit the spread of hoaxes that generate user
engagement and increase platform profitability. However, these networks have yet to
achieve a user base or level of funding that even begins to compete with
sites such as Facebook.
114
112
Evgeny Morozov, Moral panic over fake news hides the real enemy – digital giants, THE GUARDIAN (Jan. 7, 2017),
https://www.theguardian.com/commentisfree/2017/jan/08/blaming-fake-news-not-the-answer-democracy-
crisis; see also Frischmann & Selinger, Why it’s dangerous to outsource our critical thinking to computers.
113
See Welcome to diaspora*, https://diasporafoundation.org/.
114
See Will Oremus, The Search for the Anti-Facebook, SLATE (Oct. 28, 2014),
http://www.slate.com/articles/technology/future_tense/2014/10/ello_diaspora_and_the_anti_facebook_w
hy_alternative_social_networks_can.html; JIM DWYER, MORE AWESOME THAN MONEY: FOUR BOYS AND
THEIR HEROIC QUEST TO SAVE YOUR PRIVACY FROM FACEBOOK (2014).
Still, non-market-based social networking alternatives may not limit the creation and
spread of propaganda.
115
One way forward would be for a trusted media entity—like
the British Broadcasting Corporation (BBC)—to create a social network that is not
financed through advertising and that leverages its media expertise to make judgments
about news content.
116
This strategy has at least two benefits.
First, the non-commercial funding model creates a remedy for hoaxes. The BBC is not funded by the UK
government; it is instead paid for through television licence fees purchased by every household that watches
live television.
117
This funding structure insulates the BBC from being pressured into
promoting the government’s narrative, although it is ultimately dependent upon
enforcement by the government. This license model also insulates a potential social
networking platform from the economic incentives that force Facebook to select for
hoaxes and other fake news in order to increase profitability.
Second, the BBC can provide a remedy to non-financially motivated fake news
(specifically propaganda). The BBC has an elite staff of editors and journalists who can
make difficult editorial judgments about propaganda. Editors have the requisite
expertise to determine if a narrative is baseless and is promulgated simply to manipulate
people. Although there are many details to work out with this new model, it provides a
remedy to both financially and non-financially motivated fake news.
However, this potential solution has limitations. Like federated social networks, a
BBC social networking platform may fail to draw a critical mass of users. Social
networks are governed by network effects, which make platforms with a large user base
more desirable than platforms with very few users. It may be difficult to entice people
to switch away from Facebook when all their friends and family still use it.
Implementation of the license model may require government action to enforce any
requirement to purchase licenses. Management of the license fee mechanism could be
costly. Finally, imposing the cost of licenses on users may be unpopular, especially
when Facebook is free.
115
For example, ISIS has used the diaspora* network to spread propaganda after being forced off Twitter.
Islamic State shifts to new platforms after Twitter block, BBC NEWS (Aug. 21, 2014),
http://www.bbc.com/news/world-middle-east-28843350. The network’s decentralized architecture has made
its organizers unable to respond effectively or to remove the ISIS content. Islamic State fighters on diaspora*,
https://blog.diasporafoundation.org/4-islamic-state-fighters-on-diaspora (Aug. 20, 2014).
116
Brett Frischmann, Understanding the Role of the BBC as a Provider of Public Infrastructure, CARDOZO LEGAL
STUDIES RESEARCH PAPER NO. 507, available at
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2897777 (calling for the BBC to consider creating a
social media network); see also Frischmann & Verstraete, We need our platforms to put people and democratic society
ahead of cheap profits.
117
See The Licence Fee, BBC, http://www.bbc.co.uk/aboutthebbc/insidethebbc/whoweare/licencefee/.
C. Architecture / Code
Code-based interventions seem to hold considerable promise for managing fake
news. The Internet platforms that are the principal distribution mechanisms for this
information run on code: it defines what is permitted or forbidden, what is given
prominence, and what (if anything) is escalated for review by human editors. While
software code requires an initial investment in development and debugging, it is nearly
costless to deploy afterwards. Code runs automatically, and constantly. More
sophisticated algorithms may be capable of a form of learning over time, enabling them
to improve their accuracy.
However, code also has drawbacks. At present, even
sophisticated programs have trouble parsing human
language. Software is challenged by nuance and context – a
fake news item and a genuine report are likely to have similar
terms, but vastly different meanings. Code will inevitably make mistakes, classifying real
news as fake, and vice versa. Inevitably, software programs have bugs, and humans will
try to take advantage of them.
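
A brief Python illustration of the parsing problem: the two invented headlines below make opposite claims yet share most of their vocabulary, which is why classifiers that rely on surface-level term matching struggle to separate fake items from genuine reports.

import re

def tokens(text: str) -> set:
    # Lowercase word tokens; punctuation is ignored.
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard_similarity(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

fake_headline = "New study finds vaccine causes autism in children"
real_headline = "New study finds no link between vaccine and autism in children"
overlap = jaccard_similarity(fake_headline, real_headline)
print(f"Shared vocabulary: {overlap:.0%}")  # substantial overlap, contradictory claims
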
Nonetheless, code-based solutions have potential to reduce the effects of fake
news. It is unsurprising that a number of Internet platforms have begun testing
software-based interventions. Twitter has developed a prototype feature for crowd-
sourcing the identification of fake news; users would be able to single out Tweets with
false or misleading information for review or, potentially, de-listing.
118
The company is
already attempting to identify characteristics that indicate a Tweet is fake news,
including via algorithms and associations with known reliable (or unreliable) sources.
119
Facebook has moved to tag posts as fake news, relying on users to identify suspect
posts and independent monitors to make a final determination.
120
The social network
may reduce the visibility of fake news stories in users’ feeds based on these
judgments.
121
However, critics have challenged Facebook’s efforts as ineffective, if not
counterproductive.
122
Google has redesigned its News page to include additional
118
Elisabeth Dwoskin, Twitter is looking for ways to let users flag fake news, offensive content, WASH. POST (June 29,
2017), https://www.washingtonpost.com/news/the-switch/wp/2017/06/29/twitter-is-looking-for-ways-to-
let-users-flag-fake-news/.
119
Id.
120
Facebook, How is news marked as disputed on Facebook?, https://www.facebook.com/help/733019746855448;
Amber Jamieson & Olivia Solon, Facebook to begin flagging fake news in response to mounting criticism, THE
GUARDIAN (Dec. 15, 2016), https://www.theguardian.com/technology/2016/dec/15/facebook-flag-fake-
news-fact-check.
121
Id.
122
Sam Levin, Facebook promised to tackle fake news. But the evidence shows it's not working, THE GUARDIAN (May 16,
2017), https://www.theguardian.com/technology/2017/may/16/facebook-fake-news-tools-not-working.
fact-checking information from third-party sites123, which it also includes alongside its search results124. And,
Google users can flag Autocomplete suggestions or the search engine’s “Featured Snippets” as fake news.125
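
The following Python sketch shows one way a crowd-sourced flagging mechanism of this kind might escalate a post for independent review once enough users have reported it. The threshold, the per-user weights, and the hand-off to fact-checkers are illustrative assumptions; none of the platforms described above has published the details of its system.

from collections import defaultdict

FLAG_THRESHOLD = 25  # assumed number of weighted flags before human review

flag_totals = defaultdict(float)
reporter_weight = defaultdict(lambda: 1.0)  # trusted flaggers could count for more

def flag_post(post_id: str, user_id: str, review_queue: list) -> None:
    # Record a user's flag; escalate the post once weighted flags cross the threshold.
    flag_totals[post_id] += reporter_weight[user_id]
    if flag_totals[post_id] >= FLAG_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)  # handed off to independent fact-checkers

queue = []
for i in range(30):
    flag_post("post-123", f"user-{i}", queue)
print(queue)  # ['post-123'] once enough distinct users have flagged it
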
Thus far, platforms have attempted to contextualize fake news by generating
additional relevant information using algorithms, but other code-based responses are
also possible. For example, firms could employ user feedback in determining where
information appears in one’s Twitter timeline or Facebook News Feed – or, indeed, if it
appears there at all. The tech news site Slashdot enables selected users to moderate
comments by designating them as good or bad; this scoring increases or decreases the
visibility of the comments.
126
Similarly, platforms could identify, and remove, known
fake news items or sources by “fingerprinting” them or by evaluating them using
algorithms.
127
While this intervention requires subjective determinations by Internet
companies, most already censor some information: Facebook does not permit nudity128; Google removes child
pornography129 and certain information that violates individual privacy rights130; Twitter has moved to purge
hate speech131. Since they already curate
information, sites could reward or penalize users based on the content they post:
people who post genuine news could gain greater visibility for their information or
functionality for their accounts, while those who
consistently disseminate fake news might be banned
altogether. Finally, platforms might make some initial,
broad-based distinctions based upon the source of the
information: the New York Times (as genuine news)
123
Joseph Lichterman, Google News launches a streamlined redesign that gives more prominence to fact checking,
NIEMANLAB (June 27, 2017), http://www.niemanlab.org/2017/06/google-news-launches-a-streamlined-
redesign-that-gives-more-prominence-to-fact-checking/.
124
April Glaser, Google is rolling out a fact-check feature in its search and news results, RECODE (Apr. 8, 2017),
https://www.recode.net/2017/4/8/15229878/google-fact-check-fake-news-search-news-results.
125
Hayley Tsukayama, Google’s asking you for some help to fix its ‘fake news’ problem, WASH. POST (Apr. 25,
2017), https://www.washingtonpost.com/news/the-switch/wp/2017/04/25/googles-asking-you-for-some-
help-to-fix-its-fake-news-problem/.
126
CmdrTaco, Slashdot Moderation, SLASHDOT, https://slashdot.org/moderation.shtml.
127
For example, Google uses its Content ID system to scan videos uploaded to YouTube to identify material
that may infringe copyright. YouTube, How Content ID works,
https://support.google.com/youtube/answer/2797370?hl=en.
128
See Julia Angwin & Hannes Grassegger, Facebook’s Secret Censorship Rules Protect White Men from Hate Speech
But Not Black Children, PROPUBLICA (June 28, 2017), https://www.propublica.org/article/facebook-hate-
speech-censorship-internal-documents-algorithms.
129
Robinson Meyer, The Tradeoffs in Google's New Crackdown on Child Pornography, THE ATLANTIC (Nov. 18,
2013), https://www.theatlantic.com/technology/archive/2013/11/the-tradeoffs-in-googles-new-crackdown-
on-child-pornography/281604/.
130
Google Search Help, Removal Policies, https://support.google.com/websearch/answer/2744324?hl=en.
131
Twitter takes new steps to curb abuse, hate speech, CBS NEWS (Feb. 7, 2017),
http://www.cbsnews.com/news/twitter-crack-down-on-abuse-hate-speech/.
and The Onion (as satire) could be whitelisted, while InfoWars and Natural News (as fake
news) could be blacklisted. This would leave substantial amounts of information for
further analysis, but could at least use code to process easy cases.
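
A minimal Python sketch of this triage logic appears below: exact fingerprints catch known fake items, source lists resolve the easy cases, and everything else is routed to further analysis. The hash entry is invented, the domain lists simply reuse the examples named above, and a production system would need fuzzier fingerprinting – along the lines of YouTube’s Content ID – to resist trivial edits.

import hashlib

# Fingerprints of stories already judged to be fake (illustrative entry only).
KNOWN_FAKE_HASHES = {hashlib.sha256(b"Pope endorses candidate, sources confirm").hexdigest()}

WHITELIST = {"nytimes.com", "theonion.com"}      # genuine news and labeled satire
BLACKLIST = {"infowars.com", "naturalnews.com"}  # the report's examples of fake news sources

def triage(domain: str, body: str) -> str:
    # Return 'block', 'allow', or 'review' before a story is placed in a feed.
    if hashlib.sha256(body.encode()).hexdigest() in KNOWN_FAKE_HASHES:
        return "block"   # exact match against a known fake item
    if domain in WHITELIST:
        return "allow"
    if domain in BLACKLIST:
        return "block"
    return "review"      # the substantial remainder goes to further analysis

print(triage("nytimes.com", "City council adopts annual budget"))                  # allow
print(triage("unknown-blog.example", "Pope endorses candidate, sources confirm"))  # block
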
Code-based solutions have limitations, but show promise as part of a strategy to
address fake news.
D. Norms
Norms are a potent regulatory tool: they are virtually costless to regulators once
created, enjoy distributed enforcement through social mechanisms, and may be
internalized by their targets for self-enforcement. Yet these same characteristics make
them difficult to wield. It is challenging to create, shift, or inculcate norms—campaigns
against smoking worked well132, while ones against copyright infringement and unauthorized downloading
were utter failures133. Changes in norms are unpredictable,
as are the interactions between norms and other regulatory modalities. Part of the move
by platforms such as Google and Facebook to engage in greater fact-checking of news
stories relies upon norms—if users do not internalize the norm of verifying
information, then these efforts will come to naught. And,
these efforts must reckon with the reality that fake news is
popular for some viewers, particularly when it has the
effect of confirming their pre-existing beliefs. The norm
of fact-checking comes into conflict with the
psychological tendency to validate confirmatory
information and to discount contrarian views.
134
Thus, while the prospect of acting as a
norm entrepreneur to combat fake news is an appealing one, its likelihood of success is
uncertain.
135
One norm-based intervention would be for platforms to use their own reputation
and credibility to combat fake news. At present, entities such as Google and Facebook
outsource the role of contextualizing or disputing false information to other entities
such as Snopes or the Associated Press. Tagging stories as “disputed” or displaying
alternative explanations alongside them is implicitly a form of commentary by the
132
See, e.g., Benjamin Alamar & Stanton A. Glantz, Effect of Increased Social Unacceptability of Cigarette Smoking on
Reduction in Cigarette Consumption, 96 AM. J. PUB. HEALTH 1359 (2006).
133
See John Tehranian, Infringement Nation: Copyright Reform and the Law/Norm Gap, 2007 UTAH L. REV. 537;
Stuart P. Green, Plagiarism, Norms, and the Limits of Theft Law: Some Observations on the Use of Criminal Sanctions in
Enforcing Intellectual Property Rights, 54 HASTINGS L.J. 167 (2002).
134
See David Braucher, Fake News: Why We Fall For It, PSYCHOLOGY TODAY (Dec. 28, 2016),
https://www.psychologytoday.com/blog/contemporary-psychoanalysis-in-action/201612/fake-news-why-
we-fall-it; Elizabeth Kolbert, Why Facts Don’t Change Our Minds, NEW YORKER (Feb. 27, 2017),
http://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds.
135
See generally Cass R. Sunstein, Social Norms and Social Roles, 96 COLUM. L. REV. 903 (1996).
platform. However, it is one that largely masks the intermediary’s role, particularly since
the countervailing information comes under a different brand and because Google,
among others, tries to portray its search results as organic, rather than artificially
constructed.
136
Platforms could, though, be more direct and explicit in taking positions about fake
news stories.
137
The Internet scholar Evgeny Morozov offers one potential model. In
2012, he urged Google to take a more overt role in opposing discredited theories such
as those promulgated by the anti-vaccine movement and 9/11 conspiracy theory
adherents.
138
Morozov’s proposal is not censorship: he does not advocate altering
search results or removing fake news. Rather, he wants platforms to alert their users
that they are at risk of consuming false information, and to provide them with an
alternative path to knowledge that has been verified as accurate. He suggests that
“whenever users are presented with search results that are likely to send them to sites
run by pseudoscientists or conspiracy theorists, Google may simply display a huge red
banner asking users to exercise caution and check a previously generated list of
authoritative resources before making up their minds.”
139
Morozov notes that Google
already intervenes in similar fashion for users in some countries when they search for
information about suicide or similar self-harm.
140
And, Google famously added a
disclaimer to its search results when the top site corresponding to a search for “Jew”
was that of a neo-Nazi group.
141
Similarly, the firm changed its autocomplete
suggestions for searches when they included offensive assertions about Jews, Muslims,
and women.
142
By extending Morozov’s model, platforms could counter fake news stories and
results by explicitly dissociating their companies from them and by offering alternative
136
See generally Dave Davies, The Death of Organic Search (As We Know It), SEARCH ENGINE J. (Mar. 29, 2017),
https://www.searchenginejournal.com/death-organic-search-know/189625/.
137
Facebook does take a direct role in deciding what content to permit in its News Feeds, or to remove from
them, following a complicated model that permits critiques of groups but not of sub-groups. However, the
site’s criteria are hardly explicit or transparent. See Angwin & Grassegger, Facebook’s Secret Censorship Rules
Protect White Men from Hate Speech But Not Black Children.
138
Evgeny Morozov, Warning: This Site Contains Conspiracy Theories, SLATE (Jan. 23, 2012),
http://www.slate.com/articles/technology/future_tense/2012/01/anti_vaccine_activists_9_11_deniers_and
_google_s_social_search_.single.html.
139
Id.
140
Id.; see Google, Helping you find emergency information when you need it (Nov. 11, 2010),
https://googleblog.blogspot.com/2010/11/helping-you-find-emergency-information.html.
141
See Danny Sullivan, Google In Controversy Over Top-Ranking For Anti-Jewish Site, SEARCH ENGINE WATCH
(Apr. 24, 2004), https://searchenginewatch.com/sew/news/2065217/google-in-controversy-over-top-
ranking-for-anti-jewish-site.
142
Samuel Gibbs, Google alters search autocomplete to remove 'are Jews evil' suggestion, THE GUARDIAN (Dec. 5, 2016),
https://www.theguardian.com/technology/2016/dec/05/google-alters-search-autocomplete-remove-are-
jews-evil-suggestion.
information on their own account, under the companies’ brands.
143
Users might well
pay more attention to an express statement of disavowal by Facebook than they would
to analysis by an unrelated third party such as the Associated Press. In effect, platforms
would leverage their credibility against fake news.
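
One way to operationalize this approach is sketched below in Python: rather than suppressing a result, the platform attaches its own clearly branded caution when the result’s domain appears on a list of sources known for discredited claims. The domain list, the message text, and the function names are hypothetical assumptions, offered only to make the proposal concrete.

from urllib.parse import urlparse

# Hypothetical list of domains associated with discredited claims.
CAUTION_DOMAINS = {"vaccine-hoax.example", "truther-news.example"}

DISCLAIMER = ("[Caution from the platform: this source has promoted claims contradicted "
              "by authoritative references - see our list of vetted resources]")

def annotate_result(title: str, url: str) -> str:
    # Attach an explicit, platform-branded disclaimer to flagged results; never remove them.
    if urlparse(url).netloc in CAUTION_DOMAINS:
        return f"{DISCLAIMER}\n{title} - {url}"
    return f"{title} - {url}"

print(annotate_result("The hidden truth about vaccines", "https://vaccine-hoax.example/story"))
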
This proposal has drawbacks.
144
First, it requires platforms to explicitly take a
position on particular fake news stories, which they have been reluctant to do even in
clear cases.
145
When fake news is popular, opposing it may make platforms unpopular,
which is a difficult undertaking for publicly-traded
companies in a competitive market. Second, it functions
best (and perhaps only) for stories or results that are
clearly and verifiably false.
146
There is empirical proof
that the Earth is not flat, or that its climate is warming.
But even though most scientists agree that humans
contribute significantly to global warming, the issue is not completely free from
doubt.
147
And some issues remain unsettled, such as whether increases in the minimum
wage reduce employment or help employees.
148
Platforms will have to adopt standards
for when to implement disclaimers or warnings, and critics will attack those
standards.
149
Finally, there is the risk of expanding demands for warnings or context.
Platforms that retreat from a position of overt neutrality could face pressure to
contextualize other allegedly negative information, from critical reviews of restaurants
to disputed claims over nation-state borders. This possibility (perhaps a probability)
143
Danny Sullivan offered a similar suggestion to counteract, or at least contextualize, the results obtained
when one searches for the term “Santorum” on Bing. Danny Sullivan, Why Does Microsoft’s Bing Search Engine
Hate Rick Santorum?, SEARCH ENGINE LAND (Feb. 8, 2012), http://searchengineland.com/why-does-bing-
hate-rick-santorum-110764.
144
See generally Adam Thierer, Do We Need a Ministry of Truth for the Internet?, FORBES (Jan. 29, 2012),
https://www.forbes.com/sites/adamthierer/2012/01/29/do-we-need-a-ministry-of-truth-for-the-internet/ -
20ea49d91f51.
145
See Jeff John Roberts, A Top Google Result for the Holocaust Is Now a White Supremacist Site, FORTUNE (Dec.
12, 2016), http://fortune.com/2016/12/12/google-holocaust/.
146
See Derek Bambauer, Santorum: Please Don’t Google, INFO/LAW (Feb. 29, 2012),
https://blogs.harvard.edu/infolaw/2012/02/29/santorum-please-dont-google/.
147
See generally INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE, CLIMATE CHANGE 2014: SYNTHESIS
REPORT, CONTRIBUTION OF WORKING GROUPS I, II AND III TO THE FIFTH ASSESSMENT REPORT OF THE
INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE (R.K. Pachauri and L.A. Meyer, eds., 2014).
148
See Ekaterina Jardim et al., Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle,
NATL BUR. ECON. RES. WORKING PAPER 23532 (June 2017), available at
https://evans.uw.edu/sites/default/files/NBER Working Paper.pdf; Rachel West, Five Flaws in a New
Analysis of Seattle’s Minimum Wage, CTR. FOR AM. PROGRESS (June 28, 2017),
https://www.americanprogress.org/issues/poverty/news/2017/06/28/435220/five-flaws-new-analysis-
seattles-minimum-wage/.
149
See Angwin & Grassegger, Facebook’s Secret Censorship Rules Protect White Men from Hate Speech But Not Black
Children.
would likely increase firms’ reluctance to engage in express curation or discussion of
third-party content.
Despite the difficulties in operationalizing norms-based interventions, they could
prove a potent part of a remedy for fake news.
CONCLUSION
Fake news presents a complex regulatory challenge in the increasingly democratized
and intermediated on-line information ecosystem. Inaccurate information is readily
created; rapidly distributed by platforms motivated more by financial incentives than by
journalistic norms or the public interest; and consumed eagerly by users for whom it
reinforces existing beliefs. Yet even as awareness of the problem grew after the 2016
U.S. presidential election, the meaning of the term “fake news” has become
increasingly disputed. This report addresses that definitional challenge, offering a useful
taxonomy that classifies species of fake news based on their creators’ intent to deceive
and motivation. In particular, it identifies four key categories: satire, hoax, propaganda,
and trolling. This analytical framework will help policymakers and commentators alike
by providing greater rigor to debates over the issue.
Next, the report identifies key structural problems that make it difficult to design
interventions that can address fake news effectively. These include the ease with which
authors can produce user-generated content online, and the financial stakes that
platforms have in highlighting and disseminating that material. Authors often have a
mixture of motives in creating content, making it less likely that a single solution will be
effective. Consumers of fake news have limited incentives to invest in challenging or
verifying its content, particularly when the material reinforces their existing beliefs and
perspectives. Finally, fake news rarely appears alone: it is frequently mingled with more
accurate stories, such that it becomes harder to categorically reject a source as
irredeemably flawed.
Then, the report classifies existing and proposed interventions based upon the four
regulatory modalities catalogued by Larry Lessig: law, architecture (code), social norms,
and markets. It assesses the potential and shortcomings of extant solutions.
Finally – and perhaps most important – the report offers a set of model
interventions, classified under the four regulatory modalities, to generate discussion and
to provide a starting point for policymakers who want to reduce the effects of fake
news.
Fake news is not new: it has a long provenance, stretching through newspaper
reports blaming the sinking of the U.S.S. Maine on Spain in 1898 and beyond.
150
It is a
persistent, hardy problem in a world of networked social information. Our goal with
this report is to create a foundation to help advance dialogue about fake news and to
suggest tools that might mitigate its most pernicious aspects.
150
See Christopher Woolf, Back in the 1890s, fake news helped start a war, PRI (Dec. 8, 2016),
https://www.pri.org/stories/2016-12-08/long-and-tawdry-history-yellow-journalism-america.