In August 2017, a political protest in Charlottesville,
Virginia, turned into violent clashes between extremists, leading
to one person being killed. In the aftermath, several tech
companies denied service to neo-Nazis whose extreme rhetoric was
thought to foster that violence. Denied a forum, the extremists
retreated from the most widely used part of the internet to the
dark web. Matthew Prince, the CEO of Cloudflare, one of the companies that had denied the neo-Nazis service, argued later that businesses lack the legitimacy to govern speech on their forums.1 He suggested that most people see government as the proper authority to suppress speech related to such violence.2
This policy analysis follows up on Prince’s comments by evaluating the legitimacy of government regulation of speech on the internet. We shall focus primarily on potential policies for the United States.
Our effort advances in two parts. First, we establish a starting
point for our analysis. We show that the values and practices of
the public demonstrate a legitimate but quite limited role for
government regulation of speech on the internet and elsewhere. The
public and policymakers prefer private governance of speech. Those
who wish to introduce new public regulation of social media must
overcome this presumption for the private. In the second part, we
show that arguments for new public efforts fail to do that. We find
that private content moderators have already taken effective and
innovative steps to deal with some of these problems. It is private
content moderators, not elected or appointed officials, who should
have the power to regulate speech on social media.
The Presumption against Public Regulation of Social Media
What are social media? Many experts have offered different
definitions. Tom Standage, deputy editor of The Economist,
says social media are “two-way, conversational environments
in which information passes horizontally from one person to another
along social networks, rather than being delivered vertically from
an impersonal central source.”3 He identifies aspects of social media that bear on free speech:
[Social media] allow information to be shared along social
networks with friends or followers (who may then share items in
turn), and they enable discussion to take place around such shared
information. Users of such sites do more than just passively
consume information, in other words: they can also create it,
comment on it, share it, discuss it, and even modify it. The result
is a shared social environment and a sense of membership in a community.4
Jonathan A. Obar and Steve Wildman of the Quello Center at
Michigan State University add that social media are interactive and
“can be characterized as a shift from user as consumer to
user as participant.”5 Of
course, social media users also consume what others create, but
they are no longer primarily consumers of existing material. Thus,
content generated by users is the “lifeblood of social media.”
Social media are platforms, not publishers. They provide the
means for large numbers of people to produce and consume
information. They are open to both producers and consumers. Social
media managers regulate the content on a platform, but the platform
does not host everything that is posted on it. The regulation is
necessarily ex post. The number of users and their
expectations of immediate publication preclude ex ante regulation.
In contrast, publishing involved a small number of people
communicating information to mass or special audiences. Gatekeeping
was inherent in publishing; it was relatively closed to producers
but open to consumers. The constrained supply of content enabled ex
ante regulation of a publication (i.e., gatekeeping).
Social media are, of course, economic institutions; they need to
generate or have a prospect of generating revenue beyond the costs
of providing the service. Individuals create a user profile that
social media services in turn use to connect individuals to
others.6 Social media services often use
data gleaned from users to target advertising to them.7
Social media thus comprise four groups of people: users who
generate content, users who consume content, users who generate
commercial speech (advertising), and social media managers who host
speech. Each element involves speech: users generate and consume
information, and social media services create the forum in which
speech happens. Individual speech is highly protected in the United
States. The online activities of social media companies also have
considerable protection from government regulation.
The United States highly values individual speech in the public
sphere. The Constitution offers strong protections for speech in
general and not just for political speech. Similarly, the right to
hear the speech of others is protected by the First
Amendment.8 American law recognizes a small
number of exceptions to these general protections for
speech.9 Apart from these exceptions,
speech by and for social media users may be presumed to be free of
government regulation. For many years, legal experts believed
commercial speech had fewer protections from government regulation
than political or artistic speech. But this lesser standing has
been challenged by the Supreme Court and recent scholarship.10
Economic regulation may also violate the individual’s
right to free speech. Social media companies seem to be dependent
on ordinary commercial transactions, the regulation of which is generally permitted.11
But the exchange underlying social media is not an ordinary
commercial transaction. Individuals use social media for speech.
They are granted access to social media in exchange for data about
themselves. If government blocked (prohibited) that exchange,
speech by individuals would be restricted. The prohibition of the
economic transaction would be tantamount to prohibiting speech. The
validity of less sweeping regulations would involve discerning
their effects on speech. In any case, this exchange is clearly sensitive from a First Amendment standpoint. The exchange
underlying social media thus implicates both commerce and
fundamental rights. Some part of the protection for social media
from government action derives from the protections accorded individual speech.12
Social media may enjoy protections from government independent
of their users’ right to free speech. The owners of the
companies involved may have First Amendment rights that preclude
government requiring a platform to carry speech.13
Publishers have a right to editorial discretion over what to
publish.14 Like publishers, platform
managers choose what will appear on their platform; after all, not
everything sent to the platform stays on it.15
Besides removing content, platform managers also rank content,
thereby affecting the likelihood it will be seen by users. Both
activities are similar to a publisher’s editorial choices and
deserve First Amendment protection.16
Yet social media platforms differ from traditional publishers.
The regulation of social media content typically comes ex post
after a user is found to have violated a platform’s community
standards. One might argue that ex post editorial decisions are
less likely than ex ante decisions to involve expression. A company
might remove existing content to create a pleasant experience for
users (that is, for business reasons). Thus, social media content
moderation might differ from editorial discretion in publishing and
merit less constitutional protection.
This objection fails on two counts. First, newspapers need to
make a profit to remain financially viable and thus might make
editorial decisions for business reasons; they would still enjoy
the freedom of the press. Second, an ex post content removal by
social media managers could also be expressive. For example, one
could argue that Facebook’s content moderation both expresses
and applies a conception of its online community, which in turn
favors some values over others.17
The act of moderating speech on social media would therefore be
expressive and likely protected by the Constitution.
This similarity to traditional publishers might appear to make
social media companies liable for defamation or other legal limits
that apply to publishers. However, Congress has exempted social
media from defamation standards applicable to traditional
publishers. Section 230(c)(1) of the Communications Decency Act of
1996 says that “no provider or user of an interactive
computer service shall be treated as the publisher or speaker of
any information provided by another information content
provider.”18 Congress adopted Section 230 for
two reasons. First, “Congress wanted to encourage the
unfettered and unregulated development of free speech on the
Internet.”19 Social media managers who were
concerned about liability for user-provided content would tend to
remove speech that both did and did not defame others. Second, the
section sought “to encourage interactive computer services
and users of such services to self-police the Internet for
obscenity and other offensive material, so as to aid parents in
limiting their children’s access to such
material.”20 Social media would here also be
free of liability for removing such content. In general, through
Section 230, Congress “sought to further First Amendment and
e-commerce interests on the Internet while also promoting the
protection of minors.”21
Section 230 also frees “online intermediaries from the need
to screen every single online post, a need that would render
impossible the real-time interactivity that people expect when they
engage on social media.”22
Congress might have required social media to police their
platforms to enforce accepted public standards for speech (e.g.,
liability for defamation). But Congress did not do so. At the same
time, the section did not protect individuals from public
accountability for violating the limited exceptions to freedom of
speech, and it did not reduce government authority over those
harms. Section 230 neither increased nor decreased government
authority over speech on social media. Congress showed in these
decisions a preference for private governance of internet speech.
Finally, social media are privately owned forums for speech. The
First Amendment protects the freedom of speech from state action.
Social media are not government and hence are not constrained by
the First Amendment. These platforms are protected by the First
Amendment but need not apply it to speech by their users. Social
media managers may suppress speech on their privately owned
platforms, speech that elected officials could not censor in a
public forum.23 Court decisions support this
distinction between public and private power.24
But some nuances merit attention.
In older decisions, the Supreme Court said private forums sometimes had to respect constitutional protections for freedom of speech. In Marsh v.
Alabama (1946), the court ruled that a company town, like a
government, could not restrict First Amendment rights: “The
more an owner, for his advantage, opens up his property for use by
the public in general, the more do his rights become circumscribed
by the statutory and constitutional rights of those who use
it.” The fundamental rights of speech and religion outweighed
the rights of private property.25
In 1968, in Amalgamated Food Employees Union v. Logan Valley Plaza, the court also found that a shopping mall was the “functional equivalent” of the company town opened for public use and thus had to respect a right to picket on private property.26
But more recent cases have been friendlier to protecting private
forums from public regulation. In Lloyd Corp. v. Tanner
(1972), the Supreme Court overturned Logan Valley, saying a
shopping mall did not constitute a public forum and thus need not
obey the First Amendment.27
Four years later, in Hudgens v. NLRB (1976), a majority of the court concluded:
while statutory or common law may in some situations extend
protection or provide redress against a private corporation or
person who seeks to abridge the free expression of others, no such
protection or redress is provided by the Constitution itself. This
elementary proposition is little more than a truism.28
Thus, the First Amendment’s freedom of speech clause
offered no protection for pickets making their case in a privately
owned enclosed shopping mall.
Pruneyard Shopping Center v. Robins (1980) affirmed the
proposition that being open to the public did not turn a shopping
center into a public forum governed by the First
Amendment.29 However, the decision depended
on other considerations. The court also concluded that the
California Constitution created a right to petition and speech even
on private property. The court decided that this state-created right trumped any claims to protection for private property or the
owner’s freedom of speech.30
This extension of free speech rights did not appeal much to other
states. Only six states beyond California adopted limited
protections for speech on private property.31
Some Americans may believe free speech should trump private
property in forums like shopping malls. But national judgments run
the other way, supporting private governance of speech. The history
of public values and social media suggests a strong presumption
against government regulation. The federal government must refrain
from abridging the freedom of speech, a constraint that strongly
protects a virtual space comprising speech. The government has also
generally refrained from forcing owners of private property to
abide by the First Amendment. Hence individuals have no expectation
that the government will force the owners of social media to
refrain from suppressing speech on their platforms (provided the
owners do not violate civil rights laws). Those who seek more
public control over social media should offer strong arguments to
overcome this presumption of private governance. Have they?
Arguments for Public Regulation of Social Media
Since 1934, Congress has required broadcast media to operate in
“the public interest.”32
That standard meant more than maximizing the size and satisfactions
of an audience. Among other goals, broadcasters were required to
carry speech they might not otherwise carry, allegedly in pursuit
of the public interest defined as “more news and information.”33
Some idea of the public interest often undergirds government
actions, including regulation of private firms. In other words,
policymakers and others see government vindicating a public
interest through regulation. A public interest argument comprises
two parts. First, it should establish that government action is
needed to secure some widely held value; private activity is
assumed, in theory or fact, to be inadequate to achieving that end.
Second, it should make the case that government action will achieve
the values in question without significant costs to other important
values. As we have seen with social media, fundamental values are
at stake. The second part of the public interest argument must
climb a steep incline.
This section will argue that proposed government regulations of social media fail on one or both criteria. The values pursued by regulation do not outweigh the restraint of government power. Government action is unlikely to attain the public interest cited by advocates of regulation. Finally, in some cases, the regulation may attain some value only at an unacceptable price in other rights and values.
The Anti-Monopoly Argument
Some critics argue that tech companies are
monopolies.34 This claim is often confounded
with a different argument. Critics say that tech companies
fundamentally practice viewpoint discrimination when managing their
platforms. In other words, the companies are said to exclude
speakers from their forums because of their views; broadly put,
critics argue that liberal employees at tech companies discriminate
against conservatives when governing their private forums.35
If these firms are indeed monopolies, there would be a stronger
case that their content moderation violates the First Amendment.
But if these tech firms are not monopolies, then it matters much
less whether their content moderation constitutes a violation of
free speech. Private institutions discriminate among viewpoints all
the time, and few would wish the government to manage their agendas
to assure fairness or balance. Indeed, as we saw, the government
may not manage their speech (unless in theory they are broadcast
media). But if the monopoly claim is true, then bias in content
moderation might matter more. If a private forum such as Facebook
owns the only place to speak and to be heard, its discrimination
among viewpoints will seem a lot like censorship by the government,
notwithstanding its private status. If so, one might think
government action (rather than constraint) would serve the cause of
speech and debate. More voices might be heard if one company (or a
small number) did not govern the private forum in question.
However, the question of viewpoint discrimination would matter a
lot less if the dominance of current market leaders were insecure
and if users and audiences who were excluded from a platform had
alternatives. The discrimination argument also matters less if
public regulation (e.g., turning social media into public
utilities) seems likely to make matters worse regarding the
monopoly question. Each of these contingencies appears true: the
dominance of current firms is insecure, alternatives exist, and
broad regulation seems likely to make things worse.
The tech companies involved in speech are large and successful.
The Wall Street Journal reports that “these
companies [Apple, Amazon, Alphabet (Google), Microsoft, and
Facebook] are the five most valuable in the U.S. by market
capitalization, the first time a single industry has occupied that
position in several decades, according to S&P Capital IQ.”36 Their success, however, does not
necessarily mean that they enjoy a natural monopoly, a traditional
justification for government antitrust action.
The case against the tech companies leans on an older economic
theory of network effects. David S. Evans and Richard Schmalensee
offer a pithy summary of the theory:
In some cases a service is more valuable if more customers are
using it because customers want to interact with each other. Then,
if a firm moved fast and got some customers, those customers would
attract more customers, which would attract even more. Explosive
growth would ensue and result in a single firm owning the market
forever. The winner would take all.37
Does this theory comport with what we know from economics and
with empirical reality?
The economics of network effects has turned out to be more
complicated than the older theory suggests. Internet companies
offer multisided platforms whose network effects are indirect
between different kinds of customers (say, smartphone users and app
developers) rather than direct effects between the same kind of
customers (such as telephone callers). These multisided platforms
face a much more difficult challenge of attracting customers; they
are much more likely to fail during their startup period than a
telephone company, which is the model of the older theory of
network effects. The new firms also must attend more to attracting
the right customers than simply adding customers. Finally, network
effects can go in reverse; customers may use multiple platforms and
migrate to some of the alternatives. Evans and Schmalensee remark:
“This process has happened repeatedly. AOL, MSN Messenger,
Friendster, MySpace, and Orkut all rose to great heights and then
rapidly declined, while Facebook, Snap, WhatsApp, Line, and others
quickly rose.” In general, they find that “systematic
research on online platforms … shows considerable churn in
leadership for online platforms over periods shorter than a decade.”38
Although a few tech companies dominate some markets, that does
not mean these firms can never be displaced.39 A
more complex theory of network effects raises doubts about their
dominance, while recent history suggests previously dominant firms
are declining rather than continuing as monopolies.40
We have reasons to doubt that these firms will continue to dominate their markets.
Do alternatives exist for those excluded from social media
platforms? The rapid rise of social media might suggest traditional
forums for speech no longer matter, but that is far from true.
Traditional public forums continue to exist along with traditional
media such as newspapers and television. Such forums are protected,
of course, from government censorship. In fact, most people still
get most of their news from such sources.41
Fox News deserves mention here, given that conservatives have lately been concerned about online bias.
Even speakers excluded from major platforms such as Facebook and
YouTube can find a home for their speech somewhere else on the
internet. LiveLeak, while less reputable precisely because of its
willingness to host graphic content, will deliver video to viewers
just as effectively as YouTube. Vimeo, owned by IAC, a firm that seemingly specializes in second-tier versions of a host of internet services, also exists as an alternative platform, and Dailymotion is another option. There are also platforms that are
not specifically dedicated to video hosting or sharing but are
often used to do so. Snapchat can be used to send self-deleting
clips, while file-storage sites like Google Drive or Dropbox are
often repurposed as semipublic information repositories.
It is important to recognize not only that alternatives exist but also that new alternatives can come into existence to meet user demand for differing standards of moderation. For several months, YouTube’s rules concerning videos containing firearms shifted repeatedly with little transparency. The operators of gun
review channels found their videos repeatedly demonetized, and some
were banned for running afoul of opaque rulesets. A group of
firearm enthusiasts decided to start a YouTube competitor, called
Full30, which catered to the tastes of gun owners. Several popular
firearms channels on YouTube have moved to the site. While it will
never compete with YouTube as a full-spectrum video-hosting
platform, it isn’t intended to do so. Instead, it provides a
space for shooters to share videos without having to worry about
the whims of gun-shy advertisers. In a matter of months, a
firearms-friendly video-hosting site had gone from a newly
discovered market niche, revealed through a dispute between YouTube
and some of its users, to a functional video-hosting platform.42
But even if firms dominate their markets, will government
regulation deal with the problem or make it worse? The nation has
had experience with similar regulation of communications for
similar reasons regarding broadcasting. Such regulation reinforced
the market dominance of large firms and threatened freedom of speech.
The federal government claimed control of the broadcasting
spectrum in the 1920s. Congress set up the Federal Communications
Commission (FCC) to regulate entry into and use of the spectrum.
Between 1941 and 1953, the FCC allocated a large part of the
spectrum for television broadcasting. Using the spectrum for television broadcasting required a license from the FCC.
According to economist Thomas Hazlett, the agency “sought
to carefully plan the structure and form of television
service.” It also severely limited the number of competing
stations, which drove up the value of the licenses. Such
restrictions inevitably constrain competition for the benefit of
incumbent firms. The successful licensees did not pay for the
permission to use the spectrum, although they did agree to
broadcast “in the public interest.” Hazlett notes that
“this plan dictated technology and erected insurmountable
barriers to entry.” The regulatory effort thus created
monopoly power for the regulated.43
Hazlett quotes a later expert assessment: “The effect of this
policy has been to create a system of powerful vested interests,
which continue to stand in the path of reform and progress.”44
Indeed, the Radio Act of 1927 reduced the number of stations by
a quarter, affecting “stations owned almost entirely by small
businesses or nonprofit organizations like schools or labor
unions.”45 Moreover, on five occasions from
1962 to 1970 the FCC protected broadcast companies from competition
by cable television.46
Eventually, the courts became less tolerant of the FCC’s
hostility to competition and the agency itself became more open to
markets in the 1980s. Those changes, however, came after decades of
the FCC restricting competition in broadcasting markets.
The effects of FCC regulation on freedom of speech may be
summarized briefly. The FCC’s Fairness Doctrine required a
broadcaster to offer equal time to respond to a position taken on
air. The equal time would not be reimbursed, which meant the
requirement acted as a tax on the original broadcast speech. Small
radio stations criticized the Kennedy administration’s
Limited Nuclear Test Ban Treaty with the Soviet Union. Such
stations could not afford to supply a great deal of free time.
Kennedy operatives arranged for more than 1,000 letters to be
written demanding equal time at these stations, leading to 1,678
hours of free broadcasting. This effort to suppress speech was
deemed a success by the administration and continued in the Johnson administration.47
Richard Nixon’s administration sought to use broadcast
regulation against the television networks. Administration
officials threatened the local licenses of the networks both
publicly and privately, seeking more favorable coverage. The public
effort appeared to fail, but Thomas Hazlett has shown that
privately network officials were quite compliant with the wishes of the administration.48
The history of broadcast regulation suggests that increasing
state control over social media would have a chilling effect on
speech. Over time, both political parties might be expected to
threaten any speech they find abhorrent.
The monopoly argument for regulating social media has
weaknesses. We have reason to think the current market positions of
large social media companies may not persist because network
effects operate differently than in the past. In any case, speakers
have alternatives if they are excluded from a specific platform.
Finally, it should not be assumed that government regulation will
produce more competition in the online marketplace of ideas. It may
simply protect both social media owners and government officials from competition.
Democracy and Deliberation
Consider how political speech works in the world outside the
internet. People have views about politics. They associate with
others to discuss and perhaps debate those views. Yet associations of the like-minded are no doubt more common than debating societies. Perhaps people seek out others with similar views because Americans do not like conflict and confrontation.49
The association of the like-minded, now disparaged as a filter
bubble, existed long before the internet. Such associations
reflected other human failings such as confirmation bias and
prejudice. This tendency no doubt did harm to society: debates were
less rich and less probing than they otherwise might have been, and
citizens were worse off than they might have been if they had
learned the errors of their ways through a fuller debate. Yet few
called for the government to compel associations to hear speakers
with different views.
The internet facilitated the exchange of views about everything,
including politics. Once again, groups of the like-minded have
formed. Indeed, the cost of speech and association has fallen so
fast that we might expect that more people will be more involved in
more like-minded groups than ever. One might see this as a tremendous success, both in fostering speech and association and in satisfying individual preferences. But some see the association of
the like-minded as a danger to democracy and to the individuals who
associate this way.
Cass Sunstein, perhaps the most important contemporary critic of
speech and association on the internet, argues that “we
should evaluate communications technologies and social media by
asking how they affect us as citizens, not only by asking how they
affect us as consumers.” A central question is whether
emerging social practices, including consumption patterns, are
“promoting or compromising our own highest
aspirations.”50 The government might help them achieve “our” (but not necessarily “their”) aspirations.51
Our highest aspirations are said to include meeting the demands
of citizenship in a deliberative democracy. Sunstein denies that
freedom in general means freedom from coercion by the state.
Instead, freedom is understood as the development (including the
self-development) of the individual. Freedom for Sunstein is a
variation of the freedom of the ancients: it comes from collective
action by citizens in a republic, which in turn “requires exposure to a diverse set of topics and views.”52
Individuals might be free of state coercion yet unfree because
they make choices that preclude their own development. Individuals
may prefer, for example, to hear only a subset of all views about a
political topic. Indeed, they may prefer to hear little about
politics. Sunstein offers an extended argument that people online
pursue their own interests to the exclusion of public information
and debates. Particularly, they do not come across ideas and
arguments they might not seek out. Instead, they form bubbles that
filter out opposing views and echo chambers that merely repeat the
views already held by the individuals in them. In contrast, more
traditional media like newspapers and television “expose
people to a range of topics and views at the same time that they
provide shared experiences for a heterogeneous
public.”53 Ultimately, for Sunstein such
exposure and such shared experiences are essential to foster the
kind of deliberative democracy sought by the American founders.54
If people prefer associating with like-minded people on the
internet, Sunstein worries that more than political aspirations may
be harmed. Associating this way creates “a large risk of
group polarization, simply because it makes it so easy for
like-minded people to speak with one another—and ultimately
move toward extreme and sometimes even violent
positions.”55 Obviously violence might follow
polarization. It is even possible, Sunstein notes, that social
stability could be put at risk.56
This would obviously involve public values on par with freedom of
speech. But the real problem seems more prosaic and political: If
diverse groups are seeing and hearing quite different points of
view or focusing on quite different topics, mutual understanding
might be difficult and it might be increasingly hard for people to
solve problems that society faces together.57
By enabling and respecting individual choices, the internet
complicates and even undermines both the diversity and the unity
needed in a deliberative democracy. More diversity of views would temper the homogeneity of internet enclaves, and more unity across enclaves would militate against social and political fragmentation.
Sunstein’s claims about filter bubbles and echo chambers
seem plausible. We can imagine people choosing to avoid unpleasant
people and views while affirming their prior beliefs. Such choices
might be the easiest way forward for them. But the logic of this
position does not entail its empirical accuracy. Communications
researcher Cristian Vaccari notes that
social media users can make choices as to which sources they
follow and engage with. Whether people use these choice affordances
solely to flock to content reinforcing their political preferences
and prejudices, filtering out or avoiding content that espouses
other viewpoints, is, however, an empirical question—not a
destiny inscribed in the way social media and their algorithms work.58
In fact, abundant research casts doubt on Sunstein’s claim
that individual choices on the internet are turning the nation into
a polarized, possibly violent dystopia. For example, several
studies published in 2016 and earlier indicate that people using
the internet and social media are not shielded from news
contravening their prior beliefs or attitudes.59
In 2014, experimental evidence led two scholars to state that
“social media should be expected to increase users’
exposure to a variety of news and politically diverse
information.” They conclude that “the odds of exposure
to counter attitudinal information among partisans and political
news among the disaffected strike us as substantially higher than
interpersonal discussion or traditional media
venues.”60 A 2015 paper found that
“most social media users are embedded in ideologically
diverse networks, and that exposure to political diversity has a
positive effect on political moderation.” Contrary to the
received wisdom, this data “provides evidence that social
media usage reduces mass political polarization.”61
A broad literature review in 2016 found “no empirical
evidence that warrants any strong worries about filter
bubbles.”62 Just before the 2016 election, a
survey of U.S. adults found that social media users perceive more
political disagreement than nonusers, that they perceive more of it
on social media than in other media, and that news use on social
media is positively associated with perceived disagreement on social media.63
Did the 2016 election change these findings? Several studies
suggest doubts about filter bubbles, polarization, and internet
use. Three economists found that polarization has advanced most rapidly among demographic groups least likely to use the internet for political news. The putative cause (internet use) was least present where the effect of interest (increased polarization) was greatest.64
Three communications scholars examined how people used Facebook for news during the 2016 U.S. presidential campaign. They had panel
data and thus could examine how internet usage affected the
attitudes of the same people over time. The results suggest
Sunstein’s concerns are exaggerated. Both internet use and
the attitudes of the panel “remained relatively
stable.” There was also no evidence for a filter bubble. The
people who used Facebook for news were more likely to view news
that both affirmed and contravened their prior beliefs. Indeed,
people exposed themselves more over time to contrary views, which
“was related to a modest … spiral of
depolarization.” In contrast, the researchers found no
evidence of a filter bubble where exposure to news affirming prior
attitudes led to greater polarization.65
Several recent studies have focused either on the United States together with other developed nations or on European nations alone. Perhaps data and conclusions from other developed nations do not transfer
to the United States. However, cultures and borders
notwithstanding, citizens in developed nations are similar in
wealth and education. Even if we put less weight on conclusions
from Europe, such results bear more than modest consideration.
In 2017, Vaccari surveyed citizens in France, Germany, and the
United Kingdom to test the extent of filter bubbles online. He
concluded that “social media users are more likely to
disagree than agree with the political contents they see on these
platforms” and that “citizens are much more likely to
encounter disagreeable views on social media than in face-to-face
conversations.” His evaluation of Sunstein’s thesis is worth quoting at length:
Ideological echo chambers and filter bubbles on social media are
the exception, not the norm. Being the exception does not mean
being non-existent, of course. Based on these estimates, between
one in five and one in eight social media users report being in
ideological echo chambers. However, most social media users
experience a rather balanced combination of views they agree and
disagree with. If anything, the clash of disagreeing opinions is
more common on social media than ideological echo chambers.66
Another recent study of the United Kingdom found that most people interested in politics select diverse sources of information and tend to avoid echo chambers. Only about 8 percent of
their sample were in an echo chamber. The authors urge us to look
more broadly at media and public opinion:
Whatever may be happening on any single social media platform,
when we look at the entire media environment, there is little
apparent echo chamber. People regularly encounter things that they
disagree with. People check multiple sources. People try to confirm
information using search. Possibly most important, people discover
things that change their political opinions. Looking at the entire
multi-media environment, we find little evidence of an echo
chamber.67
Finally, another study of
multiple countries found that social media was also related to
incidental exposure to news, contrary again to Sunstein’s
view that older media promoted such unintended exposure while new
media do not.68
For Sunstein, the aggregate of individual choices about political speech and engagement on the internet does not serve the cause of republicanism well. A proper culture for deliberative
democracy “demands not only a law of free expression but also
a culture of free expression, in which people are eager to listen
to what their fellow citizens have to say.”69
He also believes that “a democratic polity, acting through
democratic organs” may help foster such a culture by creating
“a system of communications that promotes exposure to a wide
range of issues and views.”70
In this regard, Sunstein follows an older view that sees the
First Amendment and the Constitution enabling government to
regulate speech to attain a “richer public
debate.”71 That older view of activist
government called for limits on the autonomy of some speakers to
improve public deliberation.72
In contrast, Sunstein does not believe citizens should be
“forced to read and view materials that they
abhor.”73 Yet he clearly believes that
public officials acting at the behest of majorities should have the
power to expose individuals to materials they would not choose to
see on their own: to nudge, not coerce, Americans for a worthy end
that they would not choose on their own.74
Sunstein’s proposed reforms for the internet seem
restrained in light of his critique of social media. He remarks,
“I will be arguing for the creative use of links on the
Internet, although I will not suggest, and do not believe, that the
government should require any links.”75
He denies that government should mandate linking to a variety of
sites with different opinions to achieve the public good of better
debates.76 Sunstein notes the good
consequences of ending the Fairness Doctrine and does not advocate
restoring it for any media, including the internet, even though he
believes its demise probably increased fragmentation and polarization.77
We have seen that Sunstein’s concerns about filter bubbles
are open to question. Sunstein might disagree, and perhaps a
maturing literature will support regulations to fight such bubbles.
But Sunstein proposes restrained efforts to make internet users
better citizens. If the literature cited in this report is correct,
there is even less reason to regulate social media in the name of
democracy. But there may be an inherent problem implicitly
recognized in Sunstein’s limited proposals for reform.
Forcing people to read and interact with views they dislike or
abhor implicates liberal values such as free speech and individual
liberty. On the one hand, we may wish that people were more and
better informed about politics; on the other hand, we may doubt the
wisdom of forcing people to engage in public matters. If filter
bubbles threatened popular government, the case for public action
might have improved. But studies do not support that claim.
National Security Concerns
Government traditionally protects the homeland from its enemies.
A standard textbook explores the complex meaning of national security:
The term national security refers to the safeguarding of a
people, territory, and way of life. It includes protection from
physical assault and in that sense is similar to the term defense.
… In one definition the phrase is commonly asserted to mean
“physical security, defined as the protection against attack
on the territory and the people of the United States in order to
ensure survival with fundamental values and institutions intact;
promotion of values; and economic prosperity.”78
Our concern here is whether the government should increase its
power over internet speech to achieve national security. That
question inevitably concerns the relationship of speech to
violence. But internet speech may involve other aspects of national security.
Terrorism. The clearest example of a threat to
national security would be attacks on or occupation of the homeland
or its citizens. Terrorism may be defined as “public violence
to advance a political, social, or religious cause or
ideology.”79 Policymakers thus have reason for concern:
Terrorist groups … use the Internet to disseminate their
ideology, to recruit new members, and to take credit for attacks
around the world. In addition, some people who are not members of
these groups may view this content and could begin to sympathize
with or to adhere to the violent philosophies these groups
advocate. They might even act on these beliefs.80
Speech is not violence, but a speaker can “intend to
incite a violent or lawless action” that might be likely
“to imminently occur as a result” of the
speech.81 Such speech is not protected by
the First Amendment, and the government may act to restrict or
punish it. However, the Supreme Court’s leading decision on
incitement to violence also states that “the mere abstract
teaching … of the moral propriety or even moral necessity for a
resort to force and violence is not the same as preparing a group
for violent action.” Such speech cannot be punished by the
government in a manner consistent with the First
Amendment.82 Terrorist speech that seeks to
persuade rather than direct would likely escape censorship.
Some have argued for stricter limits on terrorist speech. For
example, Eric Posner, a law professor at the University of Chicago
Law School, has proposed a law that would make it a crime to visit
“websites that glorify, express support for, or provide
encouragement for ISIS or support recruitment by ISIS; to
distribute links to those websites or videos, images, or text taken
from those websites; or to encourage people to access such websites
by supplying them with links or instructions.”83
(Presumably the same would apply to other terrorist groups.) He
argues that the law would prevent naïve individuals from being
drawn into supporting terrorism and thereby preclude deadly
attacks. Posner concedes that his proposed law violates the First
Amendment under current doctrine. However, he is hopeful that the
current war on terror will permit new restrictions on speech that
would have been held invalid in less demanding times. In the past,
he remarks, war has supported such restrictions.
David Post, another American legal scholar, has noted some
problems with Posner’s proposal. He argues that the history
of suppressing speech during wartime has often later been judged to
be “deeply misguided, counterproductive, and often
shameful.” Post suggests that ambiguous terms such as
“glorify,” “support,” and
“encourage” may be interpreted to suppress legitimate
dissenting speech.84 According to Posner, the work of
noted First Amendment scholar Geoffrey Stone has established that
war and speech suppression go together, but Posner does not mention
that Stone believes the government’s actions were almost always mistaken.
Posner seems concerned about two harms caused by speech that
favors terrorism: the harm done to vulnerable individuals who end
up being punished for materially supporting terrorism and the
mayhem caused by the speech. Liberal governments generally do not
protect people from the consequences of their beliefs; however,
they do protect other people from those consequences if they are
directly related to speech. Hence Posner rightly worries about
violence caused by terrorist speech, a concern that informs the
incitement exception in First Amendment doctrine. But his example,
which is that 300 U.S.-based ISIS sympathizers were
“lured” by Twitter into some affiliation with the
group, does not establish that any harm was perpetrated by anyone
in the group against other people. They might cause harm in the future, but we have no evidence they have done so because of hearing speech. Posner is proposing a revived “bad
tendency” test that weighs free speech against possible harms
carried out by beguiled people. One would think we need at least a
few cases in which speech probably caused harm before changing First Amendment doctrine.
Courts have consistently refused to hold social media platforms
liable for terrorist acts.85
The most plausible of these attempts, Fields v. Twitter,
sought to make use of the civil remedies provision of the
Anti-Terrorism Act (ATA), contending that in failing to prevent
ISIS from using its platform Twitter knowingly and recklessly
provided material support to a terrorist organization, rendering
Twitter the proximate cause of harms suffered by ISIS victims.
Suits brought under the ATA turn on its proximate cause or
“by reason of” requirement. Under this requirement
Fields and similar material-support claims falter. While
ISIS certainly found value in the ability to tweet, it is unlikely
that the organization’s activities would be substantially
hampered without access to Twitter. Furthermore, in Fields
and similar cases, plaintiffs failed to demonstrate that
ISIS’s use of Twitter played an instrumental role in the
attacks that victimized them. The District Court for the Northern
District of California noted the following in Fields:
The allegations … do not support a plausible inference of
proximate causation between Twitter’s provision of accounts
to ISIS and the deaths of Fields and Creach. Plaintiffs allege no
connection between the shooter, Abu Zaid, and Twitter. There are no
facts indicating that Abu Zaid’s attack was in any way
impacted, helped by, or the result of ISIS’s presence on the platform.86
Given the plaintiffs’ failure to establish ISIS’s
Twitter use as the proximate cause of their harms, the Ninth
Circuit rejected Fields’ appeal.
More broadly, any standard of liability that might implicate
Twitter in terrorist attacks would also capture transport
providers, restaurateurs, and cellular networks. All these services
are frequently used by terrorists, though they cannot be seen as
uniquely instrumental in the realization of terrorist plots.
A Twitter account is not an unalloyed boon to terrorists. A
public social media presence provides opportunities for
counterspeech and intelligence gathering. In some cases, state
security services have asked social media platforms to refrain from
removing terrorist accounts, as they provide valuable information
concerning the aims, priorities, and sometimes the locations of terrorists.87
As Posner notes, social media platforms have policies against
terrorist speech. For example, YouTube’s policy on
“terrorist content” states:
We do not permit terrorist organizations to use YouTube for any
purpose, including recruitment. YouTube also strictly prohibits
content related to terrorism, such as content that promotes
terrorist acts, incites violence, or celebrates terrorist attacks.88
Facebook and Twitter have similar policies, though they attempt to limit the subjectivity of the term terrorism by tying it to violence against civilians.89 Twitter’s policy states:
You may not make specific threats of violence… . This
includes, but is not limited to, threatening or promoting
terrorism. You also may not affiliate with organizations
that—whether by their own statements or activity both on and
off the platform—use or promote violence against civilians to
further their causes.90
Social media moderation may be more effective than the increases
in government power desired by Posner. But that effectiveness may
have been acquired by narrowing the kinds of speech heard on those
platforms. However one assesses that narrowing, the case for more
government power here remains at best unproven.
Electoral Integrity. Many believe that
protecting national security also means preventing foreign powers
from influencing American elections. In February 2018, Robert S.
Mueller III, a special counsel to the U.S. Department of Justice,
indicted 13 Russians for intervening in the 2016 U.S. election. The
indictment charges that they intervened by purchasing advertising
that mostly “focused on issues—like civil rights or
immigration—and did not promote specific
candidates.”91 In other words, the ads were
speech about issues discussed during the campaign. Of course, this
was speech by foreign agents, which presumably makes all the
difference. But should it? The First Amendment does not refer to
speakers but rather to speech, which is protected from
government abridgment. It might be assumed that foreign agents seek
to do harm to the United States through speech. Indeed,
Mueller’s indictment and the law underlying it purport to
protect national security from foreign actions during the
election.92 But that harm could be ignored or rejected by internet users, who are expected under the Constitution to act as the censors of dangerous speech. Lastly, social media users may have a right to receive materials from foreign speakers. Such a right would belong to a reader or listener rather than the speaker; the Russian speakers in this case would have no such right.
In 1965, the Supreme Court invalidated a law requiring readers to ask the Post Office in writing to receive communist publications. The act of asking chilled a presumed right to receive a publication from abroad.93 In a concurring opinion, Justice
William Brennan remarked:
It is true that the First Amendment contains no specific
guarantee of access to publications. However, the protection of the
Bill of Rights goes beyond the specific guarantees to protect from
congressional abridgment those equally fundamental personal rights
necessary to make the express guarantees fully meaningful … I
think the right to receive publications is such a fundamental
right. The dissemination of ideas can accomplish nothing if
otherwise willing addressees are not free to receive and consider
them. It would be a barren marketplace of ideas that had only
sellers and no buyers.94
The Peking Review was printed on paper; Russian speech
appeared online. But the difference does not merit protecting a
reader’s right to see the former and not the latter.
In this case, national security seems to have outweighed freedom
of speech. But that conclusion is somewhat misleading. In fact, the
United States both censors some foreign speech and permits other
speech with disclosure of the source. We turn first to the case for censorship.
Why were the Russian efforts indictable? The indicted were
supported by the Russian government and are said to have bought ads
advocating the election or defeat of candidates for federal office.
The relevant law comes from Title 11, Section 110.20(b) of the Code
of Federal Regulations:
A foreign national shall not, directly or indirectly, make any
expenditure, independent expenditure, or disbursement in connection
with any Federal, State, or local election.95
The Federal Election Commission (FEC) concisely explicates the prohibition:
Foreign nationals are prohibited from the following activities:
Making any contribution or donation of money or other thing of
value, or making any expenditure, independent expenditure, or
disbursement in connection with any federal, state or local
election in the United States; … [and] making any disbursement
for an electioneering communication.96
The ads could have been a “thing of value,” an
“independent expenditure,” or “disbursement for
an electioneering communication.”
Yet the law may not proscribe all Russian speech concerning
American elections. The FEC notes that a federal district court has
said that the “foreign national ban ‘does not restrain
foreign nationals from speaking out about issues or spending money
to advocate their views about issues. It restrains them only from a
certain form of expressive activity closely tied to the voting
process—providing money for a candidate or political party or
spending money in order to expressly advocate for or against the
election of a candidate.’”97
Here we discern a descent toward incoherence. On the one hand, a
foreign national is prohibited from spending money on American
elections, including advertising. On the other hand, that ban
extends only to spending on express advocacy for or against a
candidate. As noted earlier, most of the spending by the Russians
in 2016 involved issue advocacy and not express advocacy. What
about internet speech by Russian agents? They were presumably paid
to speak, so the same distinction applies; speech about the issues
would be permitted. Of course, speech about issues by a foreign
national “volunteer” would not be indictable.
The national security interest at stake appears to be the
integrity of a fundamental institution—elections. A foreign
national spending money on speech that advocates the election or
defeat of a candidate apparently threatens the integrity of
elections. A foreign national discussing the issues debated during
an election does not pose the same threat. This distinction may
belie an assumption that using money to support speech would enable
a foreign power to coordinate direct influence over voters and
thereby affect the outcome of an election. Observations by random
foreign nationals would not likely be effective.
That assumption entails that, absent censorship, voters would be swayed by the foreign campaign to the detriment of national security. The assumption is paternalistic and contravenes many of the justifications for freedom of speech found in Supreme Court decisions on the subject. At the same time, voters are assumed to be capable of
dealing with issue advocacy by foreign nationals. So there is a tension here: the law simultaneously distrusts and trusts American voters.
Speech by foreign nationals is not just a threat to
national security. If it were only a threat, that threat would be
countered by banning all foreign speech. But speech by foreign
nationals also offers benefits to Americans, so banning all foreign
speech would involve significant costs. For this reason, foreign
speech is often regulated but not prohibited.
The FEC notes that the ban on spending on ads by foreign
nationals “was first enacted in 1966 as part of the
amendments to the Foreign Agents Registration Act (FARA), an
‘internal security’ statute. The goal of the FARA was
to minimize foreign intervention in U.S. elections by establishing
a series of limitations on foreign nationals.”98
FARA prohibited some speech, but it also permitted speech by
foreigners under certain conditions. FARA required agents of
foreign powers to register with the federal government; in short,
people who are paid by a foreign government must disclose that fact.
This more liberal approach to foreign speech may be seen in
FARA’s statement of purpose:
To protect the national defense, internal security, and foreign
relations of the United States by requiring public disclosure by
persons engaging in propaganda activities and other activities for
or on behalf of foreign governments, foreign political parties, and
other foreign principals so that the Government and the people of
the United States may be informed of the identity of such persons
and may appraise their statements and actions in the light of their
associations and activities.99
The law also requires “that informational materials
(formerly propaganda) be labeled with a conspicuous statement that
the information is disseminated by the agents on behalf of the
foreign principal.” The Department of Justice says disclosure
is required of people “attempting to influence U.S. public
opinion, policy, and laws.”100
Policy and law are likely the most important contexts for the
speech of foreign agents. Foreign governments, acting on behalf of
their citizens, need not represent only the interests of
foreigners. For example, an exporting nation might wish to make the
case against American protectionism. Note that such advocacy might
also favor consumers in the United States. Such speech hardly
threatens U.S. national security; indeed, it may even serve the
general welfare of the nation.101
In other cases, the interests of governments and peoples
diverge, and the speech of foreign agents may run counter to the
interests of the American people. Even though such speech may diverge from American interests and pose a potential security threat, it does not require censorship. Diplomacy requires people employed by foreign powers to
speak with policymakers on behalf of their governments. Public
officials, including members of Congress and the executive branch,
often meet and hear the arguments of foreign agents. Apparently,
registering and thereby disclosing such agents sufficiently
protects American security in those situations.
Censorship is also apparently not necessary to protect public
opinion. The TV channel RT, which is funded by the Russian
government, has been required to register as a foreign
agent.102 RT offers general news and
information to a small number of viewers.103
Presumably RT seeks mostly to influence public opinion, though it
might thereby affect policy or law. The content of the speech on RT
might be similar to or even the same as an advertisement purchased
or speech otherwise uttered by a foreign agent. Even though RT is
funded by the Russian government, it was required to register as a
foreign agent rather than go silent. Apparently, voters can sort
out the propaganda on a television network funded by the Russian
government but not the advertising paid for by it.104
In sum, American law permits some speech by foreign nationals
during an election. The law may permit issue advocacy by foreign
nationals. It does not permit foreign nationals to spend money
directly on elections, especially by buying advertising that
supports or opposes a candidate.
There is little evidence that the Russian efforts had much
effect on American voters in 2016. Reporting by the New
York Times suggests that Russian efforts may have persuaded a
few people to show up at a small anti-Muslim rally in
Texas.105 Speculation about other effects
abounds. But as Brendan Nyhan, a professor of public policy at the
University of Michigan, indicates, political science research shows
how hard it is to change votes even with significant
spending.106 The Russian effort was a
minuscule portion of overall spending in 2016.107
Moreover, as Ross Douthat notes, much of the Russian effort
“did not introduce anything to the American system that
isn’t already present; it just reproduced, often in lousy or
ludicrous counterfeits, the arguments and images and rhetorical
tropes that we already hurl at one another every
day.”108 At the margins, the Russians
added “divisive” speech, but the increment was
relatively trivial. And divisive speech is not illegal for
Americans. In this case, however, the Russian money made the speech illegal.
Federal law seems needlessly incoherent. Allowing foreign
nationals to buy ads with disclosure of their participation would
vindicate freedom of speech. It might be objected that allowing
such spending would permit a hostile foreign power to fund and
coordinate a propaganda campaign capable of affecting the outcome
of an American election. But allowing foreign nationals to fund
lobbying efforts has not subjugated policymaking to foreign
interests. Policymakers are assumed to be capable of sorting out
arguments and interests. Perhaps voters are not as capable of doing
so, although the unpopularity of RT suggests otherwise.
The Mueller indictment may never go to trial because those
indicted are unlikely to come to the United States. But even if you
believe Americans should be protected from Russian ads, there is
little need for federal action.
A Private Alternative
The social media company most affected by the Russian efforts is
regulating itself. Mark Zuckerberg, the founder and CEO of
Facebook, lists “defending against election
interference” as one of the three most important issues
facing his company.109 According to Zuckerberg,
foreign nations set up fake accounts to enable “coordinated
information operations … spreading division and
misinformation.” Facebook is using machine learning to
identify and remove such accounts. Zuckerberg argues that the
accounts are being removed not because of the content of their
speech but because they violate Facebook’s Community
Standards, which require that an account have an authentic user. He
notes that this policy involves both false positives (some accounts
with authentic users are taken down) and false negatives (some fake
accounts stay up).
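Zuckerberg’s description implies a familiar tradeoff in automated classification: wherever the removal threshold is set, some authentic accounts will be swept up and some fake accounts will slip through. The sketch below is purely illustrative; the scoring model, field names, and threshold are invented and say nothing about Facebook’s actual systems.

```python
# Illustrative only: a threshold rule for removing accounts scored as
# "likely inauthentic." Scores, fields, and the 0.75 cutoff are invented.

def remove_inauthentic(accounts, threshold=0.75):
    """Remove accounts whose inauthenticity score exceeds the threshold."""
    return [a for a in accounts if a["score"] > threshold]

accounts = [
    {"id": 1, "score": 0.95, "authentic": False},  # fake, correctly removed
    {"id": 2, "score": 0.80, "authentic": True},   # real, wrongly removed (false positive)
    {"id": 3, "score": 0.40, "authentic": False},  # fake, missed (false negative)
    {"id": 4, "score": 0.10, "authentic": True},   # real, correctly left up
]

removed = remove_inauthentic(accounts)
false_positives = sum(1 for a in removed if a["authentic"])
false_negatives = sum(1 for a in accounts if not a["authentic"] and a not in removed)
print(false_positives, false_negatives)  # 1 1: both kinds of error at this threshold
```

Raising the threshold trades false negatives for false positives, and vice versa; no setting eliminates both, which is the point Zuckerberg concedes.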
Facebook is also enforcing disclosure on ad buyers:
Facebook now has a higher standard of ads transparency than has
ever existed with TV or newspaper ads. You can see all the ads an
advertiser is running, even if they weren’t shown to you. In
addition, all political and issue ads in the US must make clear who
paid for them. And all these ads are put into a public archive
which anyone can search to see how much was spent on an individual
ad and the audience it reached.
The disclosure aims specifically at excluding expenditures by foreign nationals:
We now also require anyone running political or issue ads in the
US to verify their identity and location. This prevents someone in
Russia, for example, from buying political ads in the United
States, and it adds another obstacle for people trying to hide
their identity or location using fake accounts.
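The verification requirement Zuckerberg describes amounts to a gate in the ad-buying flow. A minimal sketch of such a gate follows; the function, fields, and error type are hypothetical and do not describe Facebook’s actual interface.

```python
# Hypothetical sketch of an identity-and-location gate for U.S. political
# or issue ads. Names and fields are invented for illustration.

class AdRejected(Exception):
    """Raised when an ad buy fails the verification gate."""

def check_us_political_ad(buyer: dict, ad: dict) -> None:
    if not ad.get("political_or_issue"):
        return  # the rule applies only to political and issue ads
    if not buyer.get("identity_verified"):
        raise AdRejected("buyer identity not verified")
    if buyer.get("verified_location") != "US":
        raise AdRejected("buyer not verified as located in the United States")

# A buyer verified in Russia is stopped at the gate, as the quote describes.
try:
    check_us_political_ad({"identity_verified": True, "verified_location": "RU"},
                          {"political_or_issue": True})
except AdRejected as reason:
    print("ad rejected:", reason)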
Facebook appears to be offering a private solution to the
perceived threat to the integrity of elections and hence national
security. No doubt some Russian accounts will escape the ban on
fake accounts. But in this regard Facebook seems much better placed
than the federal government to regulate Russian efforts. The
private sector is doing what the public sector cannot and should not do.
What about freedom of speech? Facebook’s efforts do not
suppress speech on the basis of its content. The rule against
accounts that do not identify their owner or location does not
implicate the content of speech. Facebook requires disclosure of
the funders of all political ads, including ads about issues only.
In contrast, federal election law exempts some groups and
individuals from disclosing funding for issue ads.110
Facebook’s disclosure rules are thus less liberal than
federal election law.
This broad sweep of disclosure may be a response to the content
of speech. Zuckerberg writes:
Most of the divisive ads the Internet Research Agency ran in
2016 focused on issues—like civil rights or
immigration—and did not promote specific candidates. To catch
this behavior, we needed a broad definition of what constitutes an issue ad.
So the broad definition is a response to what? The divisiveness
of the speech? Or the source of the speech? To the extent the rule
seeks the source, it is roughly similar to federal law governing
prohibited sources of funding. But if divisiveness leads to
disclosure, Facebook is regulating, though not suppressing, speech
based on its content.
But are Facebook’s efforts truly a private
decision—thus exempt from the strictures of the First
Amendment—or the result of political pressure? Congress has
been concerned about Russian internet efforts during the 2016
election. These concerns have led some members to threaten to
impose regulation on tech companies.111
Mark Zuckerberg has also emphasized that Facebook is working
closely with governments to prepare for elections.112
Whether Facebook’s regulation of speech on its platform is a
public-private undertaking and whether such undertakings should be
constrained by the First Amendment remain open questions.
Preventing Harms Caused by Speech
Social media comprise speech and little else. For that reason,
as noted earlier, social media are largely immune from government
regulation; they benefit from the priority given to private
judgment in these matters. Despite this protection, it might still
be considered valid for the government to manage speech on social
media if such regulations were narrowly tailored to achieve a
compelling government interest; in other words, the regulation
would pass muster under the “strict scrutiny” test the
courts apply to restrictions on fundamental rights.113
Here I examine two potential compelling government interests rooted
in widely held public values: preventing the harms caused by
“fake news” and “hate speech.”
The cases for regulation of both kinds of speech have a common
weakness. If we do not know what a term means, we cannot know how
it applies. Thus, vagueness fosters unconstitutionality, as Nadine Strossen explains:
The Supreme Court has held that any law is “unduly
vague,” and hence unconstitutional, when people “of
common intelligence must necessarily guess at its meaning.”
This violates tenets of “due process” or fairness, as
well as equality, because such a law is inherently susceptible to
arbitrary and discriminatory enforcement. Moreover, when an unduly
vague law regulates speech in particular, the law also violates the
First Amendment because it inevitably deters people from engaging
in constitutionally protected speech for fear that they might run
afoul of the law. The Supreme Court has therefore enforced the
“void for vagueness” doctrine with special strictness
in the context of laws that regulate speech.115
Looked at another way, vagueness would lead government to
suppress both prohibited and permitted speech. The “false
positives,” which are permitted speech wrongly suppressed,
would involve a cost to freedom of speech. Given the importance
attached to free speech in the United States, it is unlikely the
benefits of suppressing speech would outweigh the costs of those
false positives. Such costs would also indicate the chosen means
were poorly tailored to the ends sought by the government;
vagueness would suggest the government regulation of speech could
not pass a strict scrutiny test.
For these reasons, the following analysis pays close attention
to the meanings of fake news and hate speech. So far as we find the
terms vague, we should have less confidence in calls to suppress
such speech, however “fake” and however “hateful.”
What is fake news? The term has been used to refer to
“satirical news, hoaxes, news that’s clumsily framed or
outright wrong, propaganda, lies destined for viral clicks and
advertising dollars, politically motivated half-truths, and
more.”116 The term fake news has come to
public attention relatively recently. The relevant linguistic
community might be working toward a clear definition. A recent
European Commission Working Paper examines several definitions of
fake news.117 Some common elements of these
various definitions might suggest a clear definition of the term:
- “False, often sensational, information disseminated under
the guise of news reporting” (Collins Dictionary).
- “(1) News that is made up or ‘invented’ to
make money or discredit others; (2) news that has a basis in fact,
but is ‘spun’ to suit a particular agenda; and (3) news
that people don’t feel comfortable about or don’t agree
with” (Reuters Institute, “Digital News Report 2017”).
- “All forms of false, inaccurate, or misleading
information designed, presented and promoted to intentionally cause
public harm or for profit” (European Commission, A
Multidimensional Approach to Disinformation).
- “Verifiably false or misleading information that is
created, presented and disseminated for economic gain or to
intentionally deceive the public, and in any event to cause public
harm” (European Commission, “Tackling Online
Disinformation: Commission Proposes An EU-wide Code of Practice”).
- “Perceived and deliberate distortions of news with the
intention to affect the political landscape and to exacerbate
divisions in society” (European Commission, “Joint
Research Centre Digital Economy Working Paper 2018-02”).
Some elements of these definitions clearly could not pass muster
under American constitutional law. Speech that fits a particular
agenda, that makes people uncomfortable, or that affects the
political landscape would all be protected by American courts.
Apart from that, the term fake news appears to comprise three
elements: intentionality, falsity, and a public harm. Each of these
elements poses serious First Amendment problems.
According to those who would regulate fake news, apparently only
speakers who deliberately seek to mislead or divide listeners would
be liable for false or harmful speech.
But the intentionality standard does little work on its own. If
the government may not suppress false speech or speech that causes
a public harm, then whether the speech is intended to cause a
public harm does not matter. So we turn first to whether false and
harmful speech may be sanctioned by the government.
The falsity of speech refers to its content. Generally,
governments in the United States may not prohibit or sanction
speech because of its content.118
However, some exceptions exist to this general rule: incitement,
obscenity, defamation, speech integral to crimes, child
pornography, fraud, true threats, “fighting words,” and
“speech presenting some grave and imminent threat the
government has the power to prevent.”119
These exceptions, the Supreme Court says, have a historical
foundation in the court’s free speech jurisprudence.120
In United States v. Alvarez, the court refused to
recognize a general exception to the First Amendment for false
speech: “The Court has never endorsed the categorical rule
the Government advances: that false statements receive no First Amendment protection.”121
In part, false speech was not a traditionally recognized exception.
Also, giving the government the power to limit speech on behalf of
truth would chill permitted speech, thereby calling into question
“a foundation of our freedom.”122
False speech about politics enjoys significant protection under the First Amendment.
Courts have long recognized defamation as a general exception to
the freedom of speech.123 The government may sanction
speech integral to defamation. Individuals defamed by others on
social media may seek relief for a tort; the state then enforces
the sanction on libelous speech. State sanctions require both harm
to reputation and falsity; the exception is not for false speech
per se.124 Public figures must also show
“actual malice” by defendants to recover
compensation.125 Falsity alone is not enough to
allow punishment for speech. The standard of actual malice is quite
demanding on those seeking relief. Moreover, Section 230 of the
Communications Decency Act of 1996 prevents social media platforms
from being liable for the torts of their users.126
Defamation law thus provides only a limited public response to fake
news.127 It is not broad enough to
underpin a substantial government effort to regulate false speech
on social media.
What other public harms are said to be caused by fake news? The
public believes fake news causes “a great deal of confusion
about the basic facts of current issues and
events.”128 Such confusion might cause a
larger problem. Many consumers now view news online. This change
“has lowered costs and expanded market reach for news
producers and consumers.” But the shift has also separated
news producers (editors) and distributors (curators). Distribution
is now being performed by algorithmic advertising and distribution
platforms such as search engines, news aggregators, and social
media sites. It is argued that editors cared about their reputation
for quality news, but now the new distributors seek maximum traffic
and advertising revenue instead. A European Commission study group
suggests that these developments
may weaken consumer trust and news brand recognition and
facilitate the introduction of disinformation and false news into
the market. This may contribute to news market failure when it
becomes difficult for consumers to distinguish between good quality
news and disinformation or fake news.129
News consumers might consume less news to avoid false or misleading
stories, a market failure akin to the classic “market for lemons.”130
The outcome might be an ill-informed electorate less willing to
grant legitimacy to the government.
This market failure argument for regulating fake news lacks
empirical support. Scholarly literature notes that social media
have offered both costs and benefits to consumers. However,
“the impact of all these changes on the welfare of consumers,
producers and society as a whole is not so clear. There is
very little empirical evidence to date, also because relevant data
are often proprietary and not accessible to independent
researchers” (emphasis added).131
In light of this, we have little reason to believe that by
regulating fake news the government will serve the important
interests said to be at stake.
Private content moderators permit false speech. However, they
manage such speech much more efficiently than the government could.
As Facebook’s Richard Allan explains:
It’s important to note that whether or not a Facebook post
is accurate is not itself a reason to block it. Human rights law
extends the same right to expression to those who wish to claim
that the world is flat as to those who state that it is
round—and so does Facebook.132
Although Facebook does not block false speech, it does make
certain categories of false speech more difficult to find and
points users toward other presumably more accurate articles about a
topic.133 Moreover, sites posting false
speech often violate other Facebook rules (e.g., rules against
spam, hate speech, or fake accounts) and are
suppressed.134 Yet Facebook product manager
Tessa Lyons says “we don’t want to be and are not the
arbiters of truth.”135
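The demotion Facebook describes suppresses distribution without removing content. The toy ranking function below illustrates the mechanism; the demotion multiplier and field names are invented, since Facebook has said only that posts rated false are shown lower in News Feed.

```python
# Illustrative only: demote, rather than delete, posts rated false by
# third-party fact-checkers. The 0.2 multiplier is an invented value.

DEMOTION_FACTOR = 0.2  # rated-false posts keep a fraction of their reach

def ranking_score(post: dict) -> float:
    score = post["base_score"]  # relevance score from ordinary ranking
    if post.get("fact_check_rating") == "false":
        score *= DEMOTION_FACTOR  # demoted for falsity, but never removed
    return score

posts = [
    {"id": "a", "base_score": 10.0, "fact_check_rating": "false"},
    {"id": "b", "base_score": 6.0},
]
feed = sorted(posts, key=ranking_score, reverse=True)
print([p["id"] for p in feed])  # ['b', 'a']: the false-rated post appears, but lower
```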
Yet Facebook has delegated the task of determining the factual
truth of contested content to a network of third-party
fact-checkers.136 While this allows Facebook to
avoid the difficult and politically fraught work of distinguishing
fact from fiction, Facebook is still held responsible for its
selection of fact-checkers and the impact of their decisions. In
September 2018, the Weekly Standard, the sole conservative
organization admitted to Facebook’s fact-checking program,
deemed a ThinkProgress article, or at least its headline, false,
limiting its distribution. ThinkProgress took umbrage at the
decision and criticized Facebook for granting a conservative
publication the ability to downrank its content.137
Facebook appears to want to let a thousand flowers bloom on its
platform, yet it employs fact-checking gardeners that cut the false
ones. The public values truth, and we hope that conspiracy theories
and obvious falsehoods are bad for business. On the other hand, a
tech company deciding between the competing truths offered by blue
and red speakers invites political attacks against its platform
and, over the long term, sows doubt about the fairness of its
content-moderation policies. Tech companies may sanction speech in
circumstances where government must remain passive. Yet that
empowerment has its own problems, not least of which is deciding
between contending armies in an age of culture wars.
Many nations have undertaken regulation of fake news
recently.138 That such illiberal countries
as Belarus, China, Cameroon, or Russia (among others) would impose
government restrictions on posting or spreading misinformation may
not surprise anyone.139 But European nations are more
open to actively regulating speech than the United States. In
November 2018, France gave authorities the power to “remove
fake content spread via social media and even block the sites that
publish it.”140 The European Commission has
issued an initial report on disinformation that will be followed by
a process of oversight and evaluation of online
speech.141 For now, the commission is
supporting principles and policies that would be enacted by
stakeholders including the news media and online
companies.142 Does such nudging of private
actors constitute political pressure to suppress speech? If
disinformation and fake news remain a problem, would the commission
directly manage online speech or encourage national governments to
take stronger measures to suppress such speech?
The United States regulates speech less than Europe does.
Perhaps the European examples about regulating disinformation are
not relevant for this nation.143
Yet the debate over fake news has lasted only a couple of years.
Little has been said during that debate about the limits of
government power over online speech; much has been said about the
dangers to democracy of permitting fake news. Should future
national elections turn out badly, the United States might be
tempted to take a more European attitude and approach to online speech.
We should thus keep in mind that the case for public as opposed
to private regulation of fake news online is weak. Fake news has no
fixed meaning, and regulations would be unconstitutionally vague.
The public values truth, but the search for truth in the United
States must abide by the First Amendment, and the courts have held
that false speech, of which fake news is a part, also has the
protection of the First Amendment. Were
this not true, the combination of vagueness and politics in a
polarized age would mean virtually anything “the other
side” said would be regulated as fake news. But fake news
might not be the most likely reason for suppressing online speech.
Hate speech may be defined as “offensive words, about or
directed toward historically victimized
groups.”144 That definition seems clear
enough. But consider the case of The Bell Curve, a 1994
book by Charles Murray and Richard Herrnstein. Among other things,
the authors state that the average IQ score of African Americans is
one standard deviation below the average score of the population.
Many also thought the book argued that nature was far more
important than nurture in determining the IQ of individuals and
groups, a claim that suggested social reforms would have little
effect on individual and group outcomes.145
The Bell Curve offended many people; “historically
victimized groups” might well have taken offense. Was The
Bell Curve hate speech? If not, where should elected officials
draw the line between permitted and prohibited speech?
The Supreme Court has resisted drawing such lines. Even efforts
to legislate more common-sense bans on group invective have failed;
the court has consistently invalidated laws containing terms such
as “contemptuous,” “insulting,”
“abusive,” and “outrageous.”146
The U.S. government lacks the power to prohibit “hate speech.”
Yet many nations regulate or prohibit speech offensive to
protected groups. They limit freedom of speech to advance other
values such as equal dignity. This balancing of values was first
developed in Germany and has spread to other jurisdictions in the
post-World War II era.147 In Germany, the law punishes
“incitement of the people,” which is understood as
spurring hatred of protected groups, demanding violent or arbitrary
measures against them, or attacking their human dignity. Those
convicted of incitement may be jailed for up to five
years.148 The United Kingdom also
criminalizes the expression of racial hatred.149
In two recent cases, hate speech convictions led to incarceration.150
The United States has debated regulating hate speech for nearly
a century.151 Legal scholar James Weinstein
summarizes the outcome of this debate: “The United States is
an outlier in the strong protection afforded some of the most
noxious forms of extreme speech imaginable.”152
The Supreme Court precludes government from regulating speech
because of the message it conveys; such regulation is “content based.” For
the court, the worst content-based regulation is “viewpoint
discrimination,” which is restrictions based on “the
specific motivating ideology or the opinion or perspective of the
speaker.”153 This constraint on political
power extends to highly offensive speech, which implies, Weinstein
remarks, “a complete suspension of civility norms within the
realm of public discourse.”154
Government may regulate some speech that is outside public
discourse: all unprotected speech involves government activities or settings.155
The Supreme Court has applied this general framework to protect
speech hostile to racial minorities. In its decision in
R.A.V. v. City of St. Paul, the court dealt with a
St. Paul ordinance punishing speech that “one knows or has
reasonable grounds to know arouses anger, alarm or resentment in
others on the basis of race, color, creed, religion or
gender.”156 The trial court ruled that the
ordinance reached protected as well as unprotected speech and thus
was unconstitutionally overbroad. The Minnesota Supreme Court
disagreed, interpreting the ordinance to apply only to
“fighting words,” which have been considered outside the
protections of the First Amendment. A majority of the U.S. Supreme
Court went further, holding that St. Paul had engaged in viewpoint
discrimination by punishing some but not all “fighting
words,” a distinction based on the ideological content of some
speech.
In theory, it is possible for the courts to uphold viewpoint
discrimination. Such distinctions must pass the strict scrutiny
test discussed earlier. To do so, the St. Paul regulation would
have needed to be narrowly drawn to achieve a compelling government
interest. The court recognized the importance of protecting
minorities. Yet the government had other means to achieve that end,
means that were neutral toward the content of the
speech.157 Most experts assume R.A.V.
v. City of St. Paul precludes government suppression of hate
speech. Accordingly, hate speech on social media lies beyond the reach of government.158
In contrast to the government, social media managers may
regulate speech by users that is hostile to some groups. Facebook
does so extensively. Facebook defines hate speech as
“anything that directly attacks people based on what are
known as their ‘protected characteristics’—race,
ethnicity, national origin, religious affiliation, sexual
orientation, sex, gender, gender identity, or serious disability or
disease.”159 Facebook is opposed “to
hate speech in all its forms”; it is not allowed on its
platform as a matter of policy.160
Hate speech is forbidden on Facebook because it causes harm by
creating “an environment of intimidation and exclusion and in
some cases may have dangerous offline
implications.”161 In June 2017, Richard Allan,
vice president for public policy at Facebook, said: “Over the
last two months, on average, we deleted around 66,000 posts
reported as hate speech per week—that’s around 288,000
posts a month globally.”162
However, at that time, Facebook had over two billion active
users.163 The number of removed hate speech posts, though large in
absolute terms, is trivial relative to the volume of content the
platform carries.
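As a quick arithmetic check, Allan’s weekly and monthly figures are consistent with each other, and the monthly total is small next to a user base of two billion:

```python
# Consistency check on the removal figures quoted above.
weekly_removals = 66_000
monthly_removals = weekly_removals * 52 / 12
print(round(monthly_removals))   # ~286,000, matching "around 288,000 a month"

removals_per_user = 288_000 / 2_000_000_000
print(removals_per_user)         # ~0.00014 removals per user per month
```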
Other major platforms have policies that protect people with a
similar list of characteristics from hostile speech. Google has a
general policy against “incitement to hatred” of a list
of groups.164 YouTube, which is owned by
Google, does not permit hate speech directed toward seven
groups.165 This policy led to videos by
Alex Jones being removed.166
Twitter also has a similar policy against hateful conduct.167
In sum, the First Amendment does not permit government to censor
speech to prevent harms to the public apart from known exceptions
such as direct incitement to violence. The government may not
censor fake news or hate speech. Private regulators are doing what
government officials may not do: regulating and suppressing speech
believed to cause harm to citizens generally and protected groups
specifically. Private action thus weakens the case for moving the
United States toward a more European approach to fake news and hate speech.
But such private action presents a mixed picture for supporters
of robust protections for speech. The platforms offer less
protection for speech than the government does. Social media
managers discriminate among speakers according to the content of
their speech and the viewpoints expressed. Tech companies have in
part applied to speech the proportionality test long recognized in
Europe and rejected in this country. Private content governance of
social media poses a quandary, particularly for libertarians and
anyone who recognizes that private property implies a strong right
for social media managers to control what happens on their internet
platforms without government interference. It seems likely that
social media managers choose to limit speech in the short term to
fulfill their larger goal of building a business for the long term.
They may believe that excluding extreme speech is required to
sustain and increase the number of users on their platform.
Moreover, we should ask whether these efforts regarding hate
speech (along with private suppression of Russian speech, terrorist
incitement, or fake news) are truly private decisions and not state
action. If Facebook or other platforms remove content to avoid
government regulation, is such suppression state action or a hybrid
of private choice determined by public threats and offers?
We began with Cloudflare CEO Matthew Prince’s concern
about legitimate governance of speech on the internet.
Prince’s desire to bring government into online speech
controversies is understandable but misplaced. American history and
political culture assign priority to the private in governing
speech online and particularly on social media. The arguments
advanced for a greater scope of government power do not stand up.
Granting such power would gravely threaten free speech and the
independence of the private sector. We have seen that these tech
companies are grappling with many of the problems cited by those
calling for public action. The companies are technically
sophisticated and thus far more capable of dealing with these
issues. Of course, the efforts of the companies may warrant
scrutiny and criticisms, now and in the future. But at the moment,
a reasonable person can see promise in their efforts, particularly
in contrast to the likely dangers posed by government regulation.
Government officials may attempt directly or obliquely to compel
tech companies to suppress disfavored speech. The victims of such
public-private censorship would have little recourse apart from
political struggle. The tech companies, which rank among
America’s most innovative and valuable firms, would then be
drawn into the swamp of a polarized and polarizing politics. To
avoid politicizing tech, it is vital that private content
moderators be able to ignore explicit or implicit threats to their
independence from government officials.
It is Facebook, Medium, and Pinterest—not Congress or
President Trump—that have a presumption of legitimacy to
remove the speech of Stormfront and similar websites. These firms
need to nurture their legitimacy to moderate content. The companies
may have to fend off government officials eager to suppress speech
in the name of the “public good.” The leaders of these
businesses may regret being called to meet this challenge with all
its political and social dangers and complexities. But this task
cannot be avoided. No one else can or should do the job.
1. Remarks at presentation
at the Cato Institute, November 28, 2017. See also Matthew Prince,
“Why We Terminated Daily Stormer,” Cloudflare
(blog), August 16, 2017. “Law enforcement, legislators, and
courts have the political legitimacy and predictability to make
decisions on what content should be restricted. Companies should
not.” Thanks to Alissa Starzak for the reference.
2. Social media firms are
often obligated to follow laws in nations where they operate. In
the future, such laws may create enforceable transnational
obligations. For now, however, the debate concerns national
audiences and policies. See David R. Johnson and David G. Post,
“Law and Borders: The Rise of Law in Cyberspace,”
Stanford Law Review 48, no. 5 (1996): 1367.
3. Tom Standage,
Writing on the Wall: Social Media—The First 2,000
Years (New York: Bloomsbury, 2013), p. 8. See also Sheryl
Sandberg’s definition in testimony before the Senate
Intelligence Committee: “Social media enables you to share
what you want to share, when you want to share it, without asking
permission from anyone, and that’s how we meet our mission,
which is giving people a voice.” Sheryl Sandberg, Facebook
chief operating officer, and Jack Dorsey, Twitter chief executive
officer, Foreign Influence Operations’ Use of Social
Media Platforms, Testimony before the Senate Committee on
Intelligence, 115th Cong., 2nd sess., September 5, 2018.
4. Standage, p. 13.
5. J. A. Obar and S.
Wildman, “Social Media Definition and the Governance
Challenge: An Introduction to the Special Issue,”
Telecommunications Policy 39, no. 9 (2015): 746.
6. “The backbone of
the social media service is the user profile… . The type of
identifying information requested, as well as the options for
identifying oneself vary considerably from service to service, but
often include the option of creating a username, providing contact
information and uploading a picture. The reason the profile serves
this backbone function is to enable social network connections
between user accounts. Without identifying information, finding and
connecting to others would be a challenge.” Obar and Wildman,
“Social Media Definition,” p. 746.
7. Commercial speech plays
a small part in these policy matters. Advertising, as will be
shown, does matter. However, the speech carried by ads is political
and not commercial. See the discussion of Russian
“meddling” in the 2016 U.S. election.
8. Red Lion
Broadcasting Co. v. Federal Communications Commission, 395
U.S. 367 (1969).
9. See the website for the
Heritage Guide to the Constitution for a concise discussion. Eugene
Volokh, “Freedom of Speech and of the Press,” Heritage
Guide to the Constitution (website).
10. Sorrell v. IMS
Health Inc., 564 U.S. 552 (2011); Martin H. Redish,
“Commercial Speech and the Values of Free
Expression,” Cato Institute Policy Analysis no. 813, June 2017.
11. United States v.
Carolene Products Co., 304 U.S. 144 (1938) 152:
“Regulatory legislation affecting ordinary commercial
transactions is not to be pronounced unconstitutional unless, in
the light of the facts made known or generally assumed, it is of
such a character as to preclude the assumption that it rests upon
some rational basis within the knowledge and experience of the legislators.”
12. Other reasons might
counsel not regulating speech on social media. The costs of such
regulation might outweigh the benefits to society. Here I examine
only rights that might preclude or weigh heavily against regulation.
13. See the extended
discussion in Daphne Keller, “Who Do You Sue? State and
Platform Hybrid Power over Online Speech,” Aegis Series Paper
no. 1902, Hoover Institution, Stanford, 2019, pp. 17-21.
14. Miami Herald Pub.
Co. v. Tornillo, 418 U.S. 241 (1974), 258.
15. Monica Bickert, head
of global policy management at Facebook, writes: “First,
[social media] generally do not create or choose the content shared
on their platform; instead, they provide the virtual space for
others to speak.” Lee Bollinger and Geoffrey Stone, eds.,
“Defining the Boundaries of Free Speech on Social
Media,” The Free Speech Century (New York: Oxford
University Press, 2018), p. 254. The important word here is
“generally.” Relatively speaking, very little content is created by the platforms themselves.
16. Eugene Volokh and
Donald M. Falk, “Google: First Amendment Protection for
Search Engine Search Results,” Journal of Law, Economics,
and Policy 8, no. 8.4 (2012): 886-88.
17. See, for example,
their CEO’s discussion of content moderation, which takes a
stand “against polarization and extremism” while
affirming other commitments. Mark Zuckerberg, “A Blueprint
for Content Governance and Enforcement,” Facebook, November 15, 2018.
18. Protection for Private
Blocking and Screening of Offensive Material, 47 U.S.C. § 230.
19. Batzel v. Smith, 333 F.3d 1018
(Court of Appeals, 9th Circuit 2003), 8443; and Electronic Frontier
Foundation (website), “CDA 230: Legislative History.”
20. Batzel v. Smith.
21. Batzel v. Smith.
23. Perry Education
Association v. Perry Local Educators’ Association, 460
U.S. 37 (1983).
24. The courts are first
among equals in the United States on these matters. “The
First Amendment, as interpreted by the courts, provides an anchor
for freedom of the press and thus accentuates the difference
between publishing and the electronic domain. Because of the unique
power of the American courts, the issue in the United States
unfolds largely in judicial decisions.” Ithiel de Sola Pool,
Technologies of Freedom (Cambridge: Harvard University
Press, 1983), p. 8.
25. Marsh v.
Alabama, 326 U.S. 501 (1946), 506, 509.
26. Food Employees v.
Logan Valley Plaza Inc., 391 U.S. 308 (1968), 324.
27. Lloyd Corp. v.
Tanner, 407 U.S. 551 (1972).
28. Hudgens v.
NLRB, 424 U.S. 507 (1976), 513.
29. Pruneyard Shopping
Ctr. v. Robins, 447 U.S. 74 (1980).
30. Pruneyard Shopping
Ctr. v. Robins, 81-82.
31. Dahlia Lithwick,
“Why Can Shopping Malls Limit Free Speech?,” Slate,
March 10, 2003.
32. Communications Act of
1934, 47 U.S.C. § 151, Pub. L. No. 73-416.
33. Thomas Winslow
Hazlett, The Political Spectrum: The Tumultuous Liberation of
Wireless Technology, from Herbert Hoover to the Smartphone
(New Haven: Yale University Press, 2017), p. 146.
34. The monopoly argument
is not limited to one part of the political spectrum. The head of
CNN called for investigation of “these monopolies that are
Google and Facebook.” See Stewart Clarke, “CNN Boss
Jeff Zucker Calls on Regulators to Probe Google, Facebook,”
Variety, February 26, 2018; Tim Wu, The Curse of
Bigness: Antitrust in the New Gilded Age (New York: Columbia
Global Reports, 2018).
35. Trump’s campaign
manager Brad Parscale has written, “Google claims to value
free expression and a free and open internet, but there is
overwhelming evidence that the Big Tech giant wants the internet to
be free and open only to political and social ideas of which it
approves.” Brad Parscale, “Trump Is Right: More than
Facebook and Twitter, Google Threatens Democracy, Online
Freedom,” USA Today, September 10, 2018.
Parscale’s article offers several examples of putative bias
against conservatives. During the Republican primaries in 2016,
Facebook employees admitted they “routinely suppressed news
stories of interest to conservative readers from the social
network’s influential ‘trending’ news
section.” Facebook denied the charge. See Michael Nunez,
“Former Facebook Workers: We Routinely Suppressed
Conservative News,” Gizmodo, May 9, 2016; see also Peter van
Buren, “Extend the First Amendment to Twitter and Google
Now,” The American Conservative, November 7, 2017.
This view appears to be spreading on the right; see Michael M.
Grynbaum and John Herrman, “New Foils for the Right: Google
and Facebook,” New York Times, March 6, 2018.
36. Laura Stevens, Tripp
Mickle, and Jack Nicas, “Tech Giants Power to New
Heights,” Wall Street Journal, February 2, 2018.
37. David S. Evans and
Richard Schmalensee, “Debunking the ‘Network Effects’
Bogeyman: Policymakers Need to March to the Evidence, Not to
Slogans,” Regulation 40
(Winter 2017-18): 36.
38. Evans and Schmalensee, “Debunking the ‘Network Effects’ Bogeyman.”
39. “There are some
important industries where ‘winner takes most’ may
apply. But even there, victory is likely to be more transient than
economists and pundits once thought. In social networking,
Friendster lost to MySpace, which lost to Facebook, and, while
Facebook seems entrenched, there are many other social networks
nipping at its heels.” David S. Evans, Matchmakers: The
New Economics of Multisided Platforms (Cambridge: Harvard
Business Review Press, 2016), Kindle edition.
40. Facebook “was
built on the power of network effects: You joined because everyone
else was joining. But network effects can be just as powerful in
driving people off a platform. Zuckerberg understands this
viscerally.” Nicholas Thompson and Fred Vogelstein,
“Inside the Two Years That Shook Facebook—and the
World,” Wired, February 12, 2018.
41. Amy Mitchell, Elisa
Shearer, Jeffrey Gottfried, and Michael Barthel, “How
Americans Get Their News,” Pew Research Center, July 7, 2016.
42. Daniel Trotta,
“Shunned by Corporations, U.S. Gun Entrepreneurs Launch
Start-Ups,” Reuters, May 6, 2018.
43. Hazlett, The
Political Spectrum, pp. 91-92.
44. Bruce M. Owen, Jack H.
Beebe, and Willard G. Manning, Television Economics
(Lexington, MA: Lexington, 1974), p. 12, quoted in Hazlett, The
Political Spectrum, p. 92.
45. Hazlett, The
Political Spectrum, pp. 20-21.
46. Hazlett, The
Political Spectrum, pp. 14-16.
47. Hazlett, The
Political Spectrum, pp. 143-45.
48. For the public failure
see John Samples, “Broadcast Localism and the Lessons of the Fairness
Doctrine,” Cato Institute Policy Analysis no. 639, May
27, 2009, pp. 7-8; for Hazlett, see The Political
Spectrum, pp. 149-52.
49. “A surprising
number of people it seems dislike being exposed to the processes
endemic to democratic government. People profess a devotion to
democracy in the abstract but have little or no appreciation for
what a practicing democracy invariably brings with it… . People
do not wish to see uncertainty, conflicting options, long debate,
competing interests, confusion, bargaining, and compromised,
imperfect solutions.” John R. Hibbing and Elizabeth
Theiss-Morse, Congress as Public Enemy: Public Attitudes Toward
American Political Institutions (Cambridge: Cambridge
University Press, 1995), p. 147.
50. Cass R. Sunstein,
#Republic: Divided Democracy in the Age of Social Media
(Princeton: Princeton University Press, 2018), p. 157.
51. One way to deal with
this conflict between “our” aspirations and the
revealed preferences of individuals has been to insist that
revealed preferences actually contravene the true interests of
individuals, whereas our aspirations represent a truth to be
honored by government action. Equating revealed preferences with
false consciousness is one version of this argument. As we shall
see, Sunstein does not go far down the path toward imposing our
aspirations.
52. Sunstein, #Republic, p. 260.
53. Sunstein, #Republic, p. 43. He appeals to the spirit if not the
letter of constitutional doctrine: “On the speakers’
side, the public forum doctrine thus creates a right of general
access to heterogeneous citizens. On the listeners’ side, the
public forum creates not exactly a right but rather an opportunity,
if perhaps an unwelcome one: shared exposure to diverse speakers
with diverse views and complaints.” Sunstein,
#Republic, p. 38.
54. Sunstein, #Republic, p. 49.
55. Sunstein, #Republic, p. 71, 259. Former president Barack Obama has
said that “essentially we now have entirely different
realities that are being created with not just different opinions,
but now different facts. And this isn’t just by the way
Russian inspired bots and fake news. This is Fox News vs. The
New York Times editorial page. If you look at these different
sources of information, they do not describe the same thing. In
some cases, they don’t even talk about the same thing. And so
it is very difficult to figure out how democracy works over the
long term in those circumstances.” He added that government
should put “basic rules of the road in place that create
level playing fields.” Robby Soave, “5 Things Barack
Obama Said in His Weirdly Off-the-Record MIT Speech,” Hit
and Run (blog), Reason, February 26, 2018.
56. Sunstein, #Republic, p. 255.
57. Sunstein, #Republic, p. 67.
58. Cristian Vaccari,
“How Prevalent Are Filter Bubbles and Echo Chambers on Social
Media? Not as Much as Conventional Wisdom Has It,”
Cristian Vaccari (blog), February 13, 2018.
59. See the studies cited
in Michael A. Beam, Myiah J. Hutchens, and Jay D. Hmielowski,
“Facebook News and (de)Polarization: Reinforcing Spirals in
the 2016 US Election,” Information, Communication and
Society 21, no. 7 (July 3, 2018): 4.
60. Solomon Messing and
Sean J. Westwood, “Selective Exposure in the Age of Social
Media: Endorsements Trump Partisan Source Affiliation When
Selecting News Online,” Communication Research 41,
no. 8 (December 2014): 1042-63.
61. Pablo Barberá,
“How Social Media Reduces Mass Political Polarization.
Evidence from Germany, Spain, and the U.S.,” working paper,
New York University, 2014. Paper prepared for the 2015 APSA
62. Frederik J. Zuiderveen
Borgesius, Damian Trilling, Judith Möller, Balázs Bodó, Claes H. de
Vreese, and Natali Helberger, “Should We Worry about Filter
Bubbles?,” Internet Policy Review 5, no. 1 (2016):
63. Matthew Barnidge,
“Exposure to Political Disagreement in Social Media Versus
Face-to-Face and Anonymous Online Settings,” Political
Communication 34, no. 2 (2016): 302-21.
64. Levi Boxell, Matthew
Gentzkow, and Jesse M. Shapiro, “The Internet, Political Polarization, and the 2016
Election,” Cato Institute Research Brief in Economic
Policy no. 88, November 1, 2017.
65. Beam et al.,
“Facebook News and (de)Polarization,” 1.
66. Cristian Vaccari, “How Prevalent Are
Filter Bubbles and Echo Chambers on Social Media? Not as Much as
Conventional Wisdom Has It,” Cristian Vaccari
(blog), February 13, 2018.
67. Elizabeth Dubois and
Grant Blank, “The Echo Chamber Is Overstated: The Moderating
Effect of Political Interest and Diverse Media,”
Information, Communication and Society 21, no. 5 (2018):
68. Richard Fletcher and
Rasmus Kleis Nielsen, “Are People Incidentally Exposed to
News on Social Media? A Comparative Analysis,” New Media
and Society 20, no. 7 (July 2018): 2450-68.
69. Sunstein, #Republic, p. 262.
70. Sunstein, #Republic, p. 260; see also p. 158.
71. Owen Fiss, “Free
Speech and Social Structure,” Iowa Law Review 71 (1986): 1405.
72. Owen Fiss, The
Irony of Free Speech (Cambridge, MA: Harvard University Press, 1996).
73. Sunstein, #Republic, p. 260.
74. Sunstein has proposed
“nudging” people to make better decisions by altering
their choice architecture; see Richard H. Thaler and Cass R.
Sunstein, Nudge: Improving Decisions about Health, Wealth, and
Happiness (New Haven: Yale University Press, 2008).
75. Sunstein, #Republic, p. 215.
76. Sunstein, #Republic, p. 226. “I certainly do not suggest or
believe that government should require anything of this kind (i.e.,
mandatory linking to opposing views). Some constitutional questions
are hard, but this one is easy: any such requirements would violate
the First Amendment.” Sunstein, p. 231.
77. Sunstein, #Republic, pp. 84-85, 221.
78. Amos A. Jordan et al.,
American National Security (Baltimore: Johns Hopkins
University Press, 2011), pp. 3-4.
79. J. M. Berger,
“The Difference between a Killer and a Terrorist,”
The Atlantic, April 26, 2018.
80. Kathleen Ann Ruane,
“The Advocacy of Terrorism on the Internet: Freedom of Speech
Issues and the Material Support Statutes,” Congressional
Research Service, September 8, 2016, p. 1.
81. Brandenburg v.
Ohio, summarized in Ruane, “The Advocacy of Terrorism on
the Internet,” p. 5.
82. Brandenburg v.
Ohio, 395 U.S. 444 (1969), 448; and Ruane, “The Advocacy of
Terrorism on the Internet,” p. 4.
83. Eric Posner,
“ISIS Gives Us No Choice but to Consider Limits on
Speech,” Slate, December 15, 2015.
84. David G. Post,
“Protecting the First Amendment in the Internet Age,”
Washington Post, December 21, 2015.
85. See Pennie v.
Twitter Inc., 2017 WL 5992143 (N.D. Cal. Dec. 4, 2017);
Force v. Facebook Inc., 2018 WL 472807 (E.D.N.Y. Jan. 18,
2018); Crosby v. Twitter Inc., 303 F. Supp. 3d 564 (E.D.
Mich. April 2, 2018); Gonzalez v. Google Inc., 2018 WL
3872781 (N.D. Cal. Aug. 15, 2018); Cain v. Twitter Inc.,
2018 WL 4657275 (N.D. Cal. Sept. 24, 2018); Cohen v. Facebook
Inc., 2017 WL 2192621 (E.D.N.Y. May 18, 2017); and Fields
v. Twitter Inc., 2018 WL 626800 (9th Cir. Jan. 31, 2018).
86. Fields v. Twitter
Inc., 2018 WL 626800 (9th Cir. Jan. 31, 2018).
87. Zann Isacson,
“Combating Terrorism Online: Possible Actors and Their
Roles,” Lawfare, September 2, 2018; Matt Egan,
“Does Twitter Have a Terrorism Problem?,” Fox Business
(website), October 9, 2013.
88. “Violent or
Graphic Content Policies,” YouTube Help.
89. “Dangerous Individuals and Organizations,” Facebook Community Standards.
90. “The Twitter
Rules,” Twitter Help Center.
91. Mark Zuckerberg,
“Preparing for Elections,” Facebook, September 13, 2018.
92. Ithiel de Sola Pool
believed national security issues would become more important in
the electronics age: “Censorship is often imposed for reasons
of national security, cultural protection, and trade advantage.
These issues, which have not been central in past First Amendment
controversies, are likely to be of growing importance in the
electronic era.” Ithiel de Sola Pool, Technologies of
Freedom (Cambridge, MA: Harvard University Press, 1983).
93. Lamont v.
Postmaster General, 381 U.S. 301 (1965).
94. Lamont v.
Postmaster General, 307.
95. 11 CFR 110.20(f).
96. FEC.gov, FEC Record:
Outreach, “Foreign Nationals.”
97. “Foreign Nationals,” citing Bluman v. FEC, 800
F. Supp. 2d 281, 290 (D.D.C. 2011), affirmed, 132 S. Ct. 1087 (2012).
98. “Foreign Nationals Brochure,” Federal Election Commission.
99. 22 U.S.C. § 611. The
Mueller indictment notes that this disclosure informs “the
people of the United States … of the source of information and
the identity of persons attempting to influence U.S. public
opinion, policy, and law.” This information in turn allows
Americans to “evaluate the statements and activities of such
persons in light of their function as foreign agents.” Indictment at 11, U.S. v. Viktor
Borisovich Netyksho et al., Case 1:18-cr-00032-DLF (D.D.C.
filed Feb. 16, 2018).
100. “General FARA
Frequently Asked Questions,” Department of Justice.
101. Lobbying on behalf of
commercial interests makes up a significant part of foreign
lobbying of the U.S. government; see Holly Brasher, Vital
Statistics on Interest Groups and Lobbying (Thousand Oaks, CA:
SAGE Publications, 2014), pp. 136-44.
102. Jack Stubbs and
Ginger Gibson, “Russia’s RT America Registers as
‘Foreign Agent’ in U.S.,” Reuters, November 13,
2017; James Kirchik, “Why Russia’s RT Should Register
as an Agent of a Foreign Government,” Brookings
(blog), September 22, 2017.
103. RT claims to have
eight million weekly U.S. viewers, though the real numbers are
likely far smaller. See Amanda Erickson, “If Russia Today Is
Moscow’s Propaganda Arm, It’s Not Very Good at Its
Job,” Washington Post, January 12, 2017.
104. The Russian ads would
have still been illegal even if the funder had been disclosed.
105. Scott Shane,
“How Unwitting Americans Encountered Russian Operatives
Online,” New York Times, February 19, 2018.
106. Brendan Nyhan,
“Fake News and Bots May Be Worrisome, but Their Political
Power Is Overblown,” New York Times, February 19, 2018.
107. Byron York, “A
Non-alarmist Reading of the Mueller Russia Indictment,”
Washington Examiner, February 18, 2018.
108. Ross Douthat,
“The Trolling of the American Mind,” New York
Times, February 21, 2018.
109. Mark Zuckerberg,
“Preparing for Elections,” Facebook, September 13,
2018. Subsequent quotations from Zuckerberg refer to this post.
110. “What Super
Pacs, Non-profits, and Other Groups Spending Outside Money Must
Disclose about the Source and Use of Their Funds,” OpenSecrets.org.
111. Speaking during a
2017 congressional investigation of Russian efforts during the 2016
election, Sen. Diane Feinstein (D-CA) said to tech leaders:
“You’ve created these platforms, and now they are being
misused, and you have to be the ones to do something about it. Or
we will.” Byron Tau, Georgia Wells, and Deepa Seetharaman,
“Lawmakers Warn Tech Executives More Regulation May Be Coming
for Social Media,” Wall Street Journal, November 1, 2017.
112. Zuckerberg, “Preparing for Elections.”
113. Stephen A. Siegel,
“The Origin of the Compelling State Interest Test and Strict
Scrutiny,” American Journal of Legal History 48, no.
4 (2006): 355-407.
114. We treat here
“misinformation” or “disinformation” as a
subset of fake news and more generally as a kind of false speech.
Misinformation may be speech that is intentionally false. For First
Amendment purposes, it would be difficult to distinguish
unintentionally false speech from intentionally false speech. If
that distinction cannot be made, the analysis that applies to false
speech also includes misinformation, keeping in mind the focus here
will be on public values.
115. Nadine Strossen,
HATE: Why We Should Resist It with Free Speech, Not
Censorship (New York: Oxford University Press, 2018).
116. Brooke Borel,
“Fact-Checking Won’t Save Us from Fake News,”
FiveThirtyEight, January 4, 2017.
117. Bertin Martens, Luis Aguiar, Estrella
Gomez-Herrera, and Frank Mueller-Langer, “The Digital
Transformation of News Media and the Rise of Disinformation and
Fake News—An Economic Perspective,” Digital Economy
Working Paper 2018-02, JRC Technical Reports, pp. 8-11.
118. See Alvarez, p. 4,
quoting Ashcroft, “The First Amendment means that government
has no power to restrict expression because of its message, its
ideas, its subject matter, or its content.” United States
v. Alvarez, 567 U.S. 709 (2012); and Ashcroft v. American
Civil Liberties Union, 535 U.S. 564, 573 (2002).
119. Alvarez.
120. Alvarez.
121. Alvarez.
122. Alvarez.
123. See New York
Times Co. v. Sullivan, 376 U.S. 254 (1964).
124. United States v.
Alvarez, 567 U.S. 709 (2012).
125. New York Times
Co. v. Sullivan, 376 U.S. 254 (1964).
126. Protection for Private Blocking and
Screening of Offensive Material, 47 U.S. Code § 230.
127. David French,
“A Better Way to Ban Alex Jones,” New York
Times, August 7, 2018.
128. Michael Barthel, Amy
Mitchell, and Jesse Holcomb, “Many Americans Believe Fake
News Is Sowing Confusion,” Pew Research Center, December 15, 2016.
129. Bertin Martens et
al., “The Digital Transformation of News Media and the Rise
of Disinformation and Fake News—An Economic
Perspective,” Digital Economy Working Paper 2018-02, JRC
Technical Reports, pp. 40-47.
130. George A. Akerlof,
“The Market for ‘Lemons’: Quality Uncertainty and
the Market Mechanism,” Quarterly Journal of
Economics 84, no. 3 (1970): 488-500.
131. Martens et al.,
“Digital Transformation,” pp. 51-52.
132. Richard Allan,
“Hard Questions: Where Do We Draw the Line on Free
Expression?,” Facebook Newsroom, August 9, 2018.
133. Allan, “Where
Do We Draw,” which states, “And rather than blocking
content for being untrue, we demote posts in the News Feed when
rated false by fact-checkers and also point people to accurate
articles on the same subject.”
134. Tessa Lyons, Facebook
product manager, “Hard Questions: What’s
Facebook’s Strategy for Stopping False News?,” Facebook
Newsroom, May 23, 2018.
135. Laura Hazard Owen,
“Facebook Is Paying Its Fact-Checking Partners Now (and
Giving Them a Lot More Work to Do),” Nieman Lab.
136. “How Is
Facebook Addressing False News through Third-Party
Fact-Checkers?,” Facebook Help Center, Facebook.
137. Casey Newton,
“A Partisan War over Fact-Checking Is Putting Pressure on
Facebook,” The Verge (website), September 12, 2018.
138. See the updated
database about fake news regulation throughout the world prepared
by the Poynter Institute. Daniel Funke, “A Guide to
Anti-misinformation Actions around the World,” Poynter.org.
139. The reputations of
China, Russia, and Belarus are well known in this regard. Cameroon
is less infamous, but its problems are summarized by a recent
headline in The Guardian, “Cameroon Arrests
Opposition Leader Who Claims He Won 2018 Election.”
140. Funke, “Guide,” Poynter.org. See the entry for France.
141. European Union
(website), Publications Office of the EU, A Multidimensional
Approach to Disinformation: Report of the Independent High Level
Group on Fake News and Online Disinformation, March 30, 2018.
For the process going forward, see p. 33.
142. European Commission,
Multidimensional, pp. 35-38.
143. According to the
Poynter Institute, neither Congress nor the states have tried to
suppress fake news. The California legislature did pass a bill
setting up an advisory commission “to monitor information
posted and spread on social media.” The governor vetoed the
bill. See “Governor Brown Vetoes Fake News Advisory Group
Bill, Calls It ‘Not Necessary’,” CBS Sacramento
(website), September 27, 2018.
144. Samuel Walker,
Hate Speech: The History of an American Controversy
(Lincoln: University of Nebraska Press, 1994), p. 1.
145. The authors appear
somewhat skeptical about the effects of nature vs. nurture on IQ.
See Richard J. Herrnstein and Charles Murray, The Bell Curve:
Intelligence and Class Structure in American Life (New York:
Free Press, 1996), p. 131.
146. Strossen, HATE, p. 71.
147. Dieter Grimm,
“Freedom of Speech in a Globalized World,” in
Extreme Speech and Democracy, ed. Ivan Hare and James
Weinstein (New York: Oxford University Press, 2009), p. 13.
148. Strafgesetzbuch (StGB), § 130 Volksverhetzung, 1-2.
149. Public Order Act,
1986, Part III.
150. Britain First is a
“nationalistic, authoritarian, … nativist, ethnocentric
and xenophobic” group hostile to Muslim immigrants in the
United Kingdom. They are active online with significant
consequences for their leaders if not for British elections. The
leading and perhaps only scholarly study of the group is Chris
Allen, “Britain First: The ‘Frontline Resistance’
to the Islamification of Britain,” Political
Quarterly 85, no. 3 (July-September 2014): 354-61. See also
the report by the organization Hope not Hate, “Britain First:
Army of the Right,” November 2017. The leaders of Britain
First, Paul Golding and Jayda Fransen, were incarcerated for
distributing leaflets and posting online videos that reflected
their extreme antipathy to Muslims. Fransen received a 36-week
sentence and Golding an 18-week sentence. Kevin Rawlinson,
“Britain First Leaders Jailed over Anti-Muslim Hate
Crimes,” The Guardian, March 7, 2018.
151. For the origins of
the debate, see Walker, Hate Speech, pp. 17-37.
152. James Weinstein,
“An Overview of American Free Speech Doctrine and Its
Application to Extreme Speech,” in Extreme Speech and
Democracy, eds. Ivan Hare and James Weinstein (New York:
Oxford University Press, 2009), p. 81.
153. Weinstein, “Overview,” pp. 81-82, quoting Rosenberger v.
Rector and Visitors of University of Virginia, 515 U.S. 819, 829 (1995).
154. Weinstein, “Overview,” p. 82.
155. Weinstein gives
examples of such settings: government workplace, state school
classroom, and the courtroom. See “Overview.”
156. Weinstein, p. 85.
157. R.A.V. v. City of
St. Paul, 505 U.S. 377 (1992). In the past, the Supreme Court
upheld a group libel law, Beauharnais v. Illinois 343 U.S.
250 (1952). It is generally assumed that while the court has not
formally overruled the precedent, it would not validate a group
libel law today. See Weinstein, “Overview,” p. 88.
158. This assumes hate
speech does not fall into a category of speech recognized as
unprotected by the First Amendment (e.g., a “true threat”).
159. Richard Allan,
“Hard Questions: Who Should Decide What Is Hate Speech in an
Online Global Community?,” Facebook Newsroom, June 27, 2017.
Allan is currently Facebook’s vice president for policy.
160. Allan, “Who Should Decide.”
161. Richard Allan,
“Hard Questions: Where Do We Draw the Line on Free
Expression?,” Facebook Newsroom, August 9, 2018.
162. Allan, “Who Should Decide.”
163. Maddy Osman,
“28 Powerful Facebook Stats Your Brand Can’t Ignore in
2018,” Sprout Social (website).
164. “Prohibited Content,” AdSense Help, Google Support; “Hate Speech |
Inappropriate Content | Restricted Content,” Developer Policy
Center, Google Play.
165. “Violent or
Graphic Content Policies,” YouTube Help, Google Support.
166. Catherine Shu,
“YouTube Punishes Alex Jones’ Channel for Breaking
Policies against Hate Speech and Child Endangerment,”
TechCrunch, July 2018.
167. “Hateful Conduct Policy,” Twitter.