My thanks to the chair, ranking member, and all members of this subcommittee for the opportunity to speak to you today.

As a firm believer in the principle of comparative advantage, I don’t intend to delve too deeply into the technical details of automated content filtering, which my copanelists are far better suited than I to address. Instead I want to focus on legal and policy considerations, and above all to urge Congress to resist the temptation to intervene in the highly complex — and admittedly highly imperfect — processes by which private online platforms seek to moderate both content related to terrorism and “hateful” or otherwise objectionable speech more broadly. (My colleague at the Cato Institute, John Samples, recently published a policy paper dealing still more broadly with issues surrounding regulation of content moderation policies, which I can enthusiastically recommend to the committee’s attention.1)

The major social media platforms all engage, to varying degrees, in extensive monitoring of user-posted content via a combination of human and automated review, with the aim of restricting a wide array of speech those platforms deem objectionable, typically including nudity, individual harassment, and — more germane to our subject today — the promotion of extremist violence and, more broadly, hateful speech directed at specific groups on the basis of race, gender, religion, or sexuality. In response to public criticism, these platforms have in recent years taken steps to crack down more aggressively on hateful and extremist speech, investing in larger teams of human moderators and more sophisticated algorithmic tools designed to automatically flag such content.2

Elected officials and users of these platforms are often dissatisfied with these efforts — both with the speed and efficacy of content removal and with the scope of individual platforms’ policies. Yet it is clear that all the major platforms’ policies go much further in restricting speech than would be permissible under our Constitution if imposed through state action. The First Amendment protects hate speech. The Supreme Court has ruled in favor of the constitutional right of American neo-Nazis to march in public brandishing swastikas,3 and of a hate group to picket outside the funerals of veterans while displaying incredibly vile homophobic and anti-military slogans.4

While direct threats and speech that is both intended and likely to incite “imminent” violence fall outside the ambit of the First Amendment, Supreme Court precedent distinguishes such speech from “the mere abstract teaching … of the moral propriety or even moral necessity for a resort to force and violence,”5 which remains protected. Unsurprisingly, in light of this case law, a recent Congressional Research Service report found that “laws that criminalize the dissemination of the pure advocacy of terrorism, without more, would likely be deemed unconstitutional.”6

Happily — at least, as far as most users of social media are concerned — the First Amendment does not bind private firms like YouTube, Twitter, or Facebook, leaving them with a much freer hand to restrict offensive content that our Constitution forbids the law from reaching. The Supreme Court reaffirmed that principle just this month, in a case involving a public access cable channel in New York. Yet as the Court noted in that decision, this applies only when private determinations to restrict content are truly private. They may be subject to First Amendment challenge if the private entity in question is functioning as a “state actor” — which can occur “when the government compels the private entity to take a particular action” or “when the government acts jointly with the private entity.”7

Perversely, then, legislative efforts to compel more aggressive removal of hateful or extremist content risk producing the opposite of the intended result. Content moderation decisions that are clearly lawful as an exercise of purely private discretion could be recast as government censorship, opening the door to legal challenge. Should the courts determine that legislative mandates had rendered First Amendment standards applicable to online platforms, the ultimate result would almost certainly be more hateful and extremist speech on those platforms.

Bracketing legal considerations for the moment, it is also important to recognize that the ability of algorithmic tools to accurately identify hateful or extremist content is not as great as is commonly supposed. Last year, Facebook boasted that its automated filter detected 99.5 percent of the terror-related content the company removed before it was posted, with the remainder flagged by users.8 Many press reports subtly misconstrued this claim. The New York Times, for example, wrote that Facebook’s “A.I. found 99.5 percent of terrorist content on the site.”9 That, of course, is a very different proposition: Facebook’s claim concerned the share of removed content that was first flagged as terror-related by automated tools rather than by human reporting, which should be unsurprising given that software can process vast amounts of content far more quickly than human brains can. It is not the claim that software filters successfully detected 99.5 percent of all terror-related content uploaded to the site — a figure that would be impossible to measure since, by definition, content detected by neither mechanism is omitted from the calculus. Nor does it tell us much about the false-positive rate: How much content was misidentified as terror-related, or how often such content appeared in posts reporting on or condemning terrorist activities.
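The distinction is easier to see with a small worked example. The figures below are purely hypothetical, chosen only to show how the same underlying data yield three very different statistics; none of them are drawn from Facebook’s report:

```python
# Illustrative only: hypothetical counts, not figures reported by any platform.
removed_flagged_by_ai = 1990      # removed items first flagged by automated tools
removed_flagged_by_users = 10     # removed items first flagged by human reports
undetected = 500                  # terror-related items missed by both (unknowable in practice)
false_positives = 300             # benign items wrongly flagged as terror-related

total_removed = removed_flagged_by_ai + removed_flagged_by_users

# The statistic the company reported: share of *removed* content that automation caught first.
share_of_removals_from_ai = removed_flagged_by_ai / total_removed          # 0.995

# The statistic the claim is often mistaken for: detection rate over *all* terror content.
# This depends on the undetected count, which no one can actually observe.
all_terror_content = total_removed + undetected
detection_rate = total_removed / all_terror_content                        # 0.8

# Precision (how much flagged content was actually terror-related) is yet another question.
precision = removed_flagged_by_ai / (removed_flagged_by_ai + false_positives)

print(f"Share of removals flagged by AI: {share_of_removals_from_ai:.1%}")  # 99.5%
print(f"Detection rate over all terror content: {detection_rate:.1%}")      # 80.0%
print(f"Precision of automated flags: {precision:.1%}")                     # 86.9%
```

In this illustration, automation accounts for 99.5 percent of removals even though the system misses a fifth of all terror-related uploads and wrongly flags a substantial amount of benign content, which are precisely the quantities the headline figure does not capture.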

There is ample reason to believe that such false positives impose genuine social cost. Algorithms may be able to determine that a post contains images of extremist content, but they are far less adept at reading contextual cues to determine whether the purpose of the post is to glorify violence, condemn it, or merely document it — something that may in certain cases be ambiguous even to a human observer. Journalists and human rights activists, for example, have complained that tech company crackdowns on violent extremist videos have inadvertently frustrated efforts to document human rights violations,10 and erased evidence of war crimes in Syria.11

Just this month, a YouTube crackdown on white supremacist content resulted in the removal of a large number of historical videos posted by educational institutions, and by anti-racist activist groups dedicated to documenting and condemning hate speech.12

Of course, such errors are often reversed by human reviewers — at least when the groups affected have enough know-how and public prestige to compel a reconsideration. Government mandates, however, alter the calculus. As three United Nations special rapporteurs wrote in objecting to a European Union proposal to require automated filtering, the threat of legal penalties was “likely to incentivize platforms to err on the side of caution and remove content that is legitimate or lawful.”13 If failing to filter to the government’s satisfaction risks stiff fines, any cost-benefit analysis for platforms will favor significant overfiltering: Better to pull down ten benign posts than risk leaving up one that might expose them to penalties. For precisely this reason, the EU proposal has been roundly condemned by human rights activists14 and fiercely opposed by a wide array of civil society groups.15
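The incentive problem can be stated in simple expected-cost terms. The figures below are purely hypothetical assumptions, not estimates of any actual fine or regulatory regime; the point is only that once the penalty for leaving one unlawful post up dwarfs the cost of wrongly removing a lawful one, removal becomes the rational default even for content the filter judges almost certainly legitimate:

```python
# Hypothetical numbers chosen only to illustrate the asymmetry; not real costs or fines.
fine_per_missed_post = 50_000.0    # assumed regulatory penalty for one unlawful post left up
cost_per_wrong_removal = 5.0       # assumed cost (appeals handling, user goodwill) of one false takedown

def expected_cost(p_unlawful: float, remove: bool) -> float:
    """Expected cost to the platform of one borderline post, given a removal decision."""
    if remove:
        # If removed, the only cost arises when the post was actually lawful.
        return (1 - p_unlawful) * cost_per_wrong_removal
    # If left up, the platform risks the fine when the post was actually unlawful.
    return p_unlawful * fine_per_missed_post

# Even a post that is almost certainly lawful is cheaper to remove than to leave up.
p = 0.001  # the filter estimates only a 0.1% chance the post is unlawful
print(f"Expected cost if removed: ${expected_cost(p, remove=True):.2f}")    # about $5
print(f"Expected cost if left up: ${expected_cost(p, remove=False):.2f}")   # about $50
```

Under these assumed numbers, a post the system judges 99.9 percent likely to be lawful is still roughly ten times cheaper to remove than to leave up, which is exactly the dynamic the special rapporteurs warned about.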

A recent high-profile case illustrates the challenges platforms face: their efforts to restrict circulation of video depicting the brutal mass shooting of worshippers at a mosque in Christchurch, New Zealand. Legal scholar Kate Klonick documented the efforts of Facebook’s content moderation team for The New Yorker,16 while reporters Elizabeth Dwoskin and Craig Timberg wrote about the parallel struggles of YouTube’s team for The Washington Post17 — both accounts are illuminating and well worth reading.

Though both companies were subject to vigorous condemnation by elected officials for failing to limit the video quickly or comprehensively enough, the published accounts make clear this was not for want of trying. Teams of engineers and moderators at both platforms worked around the clock to stop the spread of the video, by increasingly aggressive means. Automated detection tools, however, were often frustrated by countermeasures employed by uploaders, who continuously modified the video until it could pass through the filters. This serves as a reminder that even when automated detection proves relatively effective at any given moment, such tools are in a perennial arms race with determined humans probing for algorithmic blind spots.18 There was also the problem of users who had — perhaps misguidedly — uploaded parts of the video in order to condemn the savagery of the attack and evoke sympathy for the victims. Here, the platforms made a difficult real-time judgment that the balance of equities favored an aggressive posture: Categorical prohibition of the content regardless of context or intent, coupled with tight restrictions on searching and sharing of recently uploaded video.
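To illustrate why this arms race so favors the evaders, consider a deliberately simplified sketch of the most basic form of automated blocking, an exact-hash blocklist. This is an illustration only; the platforms’ actual systems rely on more sophisticated perceptual matching and machine-learning classifiers, which face analogous, if subtler, evasion problems. Because changing even a single byte of a file yields a completely different cryptographic hash, a trivially modified copy of a banned video passes an exact-match filter untouched:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for the raw bytes of a banned video.
original = b"...banned video bytes..."
blocklist = {sha256_hex(original)}

# An uploader alters the file slightly (here, a single trailing byte).
modified = original[:-1] + b"X"

print(sha256_hex(original) in blocklist)   # True: an exact copy is caught
print(sha256_hex(modified) in blocklist)   # False: the altered copy slips through
```

Perceptual hashes that tolerate small alterations raise the bar, but they do not end the contest: uploaders simply make larger or more creative edits, such as re-cropping, re-recording, or overlaying the footage, until the match score falls below the filter’s threshold.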

Both the decisions the firms made and the speed and adequacy with which they implemented them in a difficult circumstance will be — and should be — subject to debate and criticism. But it would be a grave error to imagine that broad legislative mandates are likely to produce better results than such context-sensitive judgments, or that smart software will somehow obviate the need for a difficult and delicate balancing of competing values.

I thank the committee again for the opportunity to testify, and look forward to your questions.

Notes

1 John Samples, “Why the Government Should Not Regulate Content Moderation of Social Media” (Cato Institute), https://www.cato.org/publications/policy-analysis/why-government-should-not-regulate-content-moderation-social-media#full

2 See, e.g., Kent Walker, “Four steps we’re taking today to fight terrorism online,” Google (June 18, 2017), https://www.blog.google/around-the-globe/google-europe/four-steps-were-taking-today-fight-online-terror/; Monika Bickert and Brian Fishman, “Hard Questions: What Are We Doing to Stay Ahead of Terrorists?” Facebook (November 8, 2018), https://newsroom.fb.com/news/2018/11/staying-ahead-of-terrorists/; “Terrorism and violent extremism policy,” Twitter (March 2019), https://help.twitter.com/en/rules-and-policies/violent-groups

3 National Socialist Party of America v. Village of Skokie, 432 U.S. 43 (1977)

4 Snyder v. Phelps, 562 U.S. 443 (2011)

5 Brandenburg v. Ohio, 395 U.S. 444 (1969)

6 Kathleen Anne Ruane, “The Advocacy of Terrorism on the Internet: Freedom of Speech Issues and the Material Support Statutes,” Congressional Research Service Report R44626 (September 8, 2016), https://fas.org/sgp/crs/terror/R44626.pdf

7 Manhattan Community Access Corp. v. Halleck, 17–1702 (2019)

8 Alex Schultz and Guy Rosen, “Understanding the Facebook Community Standards Enforcement Report,” https://fbnewsroomus.files.wordpress.com/2018/05/understanding_the_community_standards_enforcement_report.pdf

9 Sheera Frenkel, “Facebook Says It Deleted 865 Million Posts, Mostly Spam,” New York Times (May 15, 2018), https://www.nytimes.com/2018/05/15/technology/facebook-removal-posts-fake-accounts.html

10 Dia Kayyali and Raja Althaibani, “Vital Human Rights Evidence in Syria is Disappearing from YouTube,” https://blog.witness.org/2017/08/vital-human-rights-evidence-syria-disappearing-youtube/

11 Bernhard Warner, “Tech Companies Are Deleting Evidence of War Crimes,” The Atlantic (May 8, 2019), https://www.theatlantic.com/ideas/archive/2019/05/facebook-algorithms-ar…

12 Elizabeth Dwoskin, “How YouTube Erased History in Its Battle Against White Supremacy,” Washington Post (June 13, 2019), https://www.washingtonpost.com/technology/2019/06/13/how-youtube-erased-history-its-battle-against-white-supremacy/?utm_term=.e5391be45aa2

13 David Kaye, Joseph Cannataci, and Fionnuala Ní Aoláin, “Mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism,” https://spcommreports.ohchr.org/TMResultsBase/DownLoadPublicCommunicationFile?gId=24234

14 Faiza Patel, “EU ‘Terrorist Content’ Proposal Sets Dire Example for Free Speech Online,” Just Security, https://www.justsecurity.org/62857/eu-terrorist-content-proposal-sets-dire-free-speech-online/

15 “Letter to Ministers of Justice and Home Affairs on the Proposed Regulation on Terrorist Content Online,” https://cdt.org/files/2018/12/4-Dec-2018-CDT-Joint-Letter-Terrorist-Content-Regulation.pdf

16 Kate Klonick, “Inside the Team at Facebook That Dealt With the Christchurch Shooting,” The New Yorker (April 25, 2019), https://www.newyorker.com/news/news-desk/inside-the-team-at-facebook-that-dealt-with-the-christchurch-shooting

17 Elizabeth Dwoskin and Craig Timberg, “Inside YouTube’s Struggles to Shut Down Video of the New Zealand Shooting — and the Humans Who Outsmarted Its Systems,” Washington Post (March 18, 2019), https://www.washingtonpost.com/technology/2019/03/18/inside-youtubes-struggles-shut-down-video-new-zealand-shooting-humans-who-outsmarted-its-systems/?utm_term=.6a5916ba26c1

18 See, e.g., Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran, “Deceiving Google’s Perspective API Built for Detecting Toxic Comments,” arXiv (February 2017), https://arxiv.org/abs/1702.08138