Does Section 230 Enable Bad Behavior?

October 27, 2020 • Testimony

Committee on Commerce, Science and Transportation
United States Senate

Dear Chairman Wicker, Ranking Member Cantwell, and Members of the Committee:

My name is Will Duffield, and I am a policy analyst with the Cato Institute’s Center for Representative Government. I would like to thank the Committee on Commerce, Science, and Transportation for convening this hearing on Section 230 on October 28, 2020, and for providing the opportunity to express my views regarding this topic.

Today you will have the opportunity to ask questions of Mark Zuckerberg, Sundar Pichai, and Jack Dorsey, the respective heads of Facebook, Google, and Twitter. These firms benefit tremendously from Section 230’s protections. However, they are far from its only beneficiaries. Section 230 protects an internet ecosystem home to hundreds of thousands of platforms and services. The purpose of this internet ecosystem as explicated in Section 230’s congressional findings is to “offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.” 1 Whatever the failings of specific platforms, the contemporary internet continues to fulfill this promise.

Crucially, this expectation was made of “the internet” as a whole. Specific platforms have always been expected to filter speech they considered offensive or off‐​topic. Section 230(c)(2) explicitly shields platforms from lawsuits over their moderation decisions. However, because 230(c)(1) forecloses most platform liability for user speech, new platforms may always be established to host legal speech unwanted elsewhere. The resultant ecosystem of large, more restrictive platforms and smaller services with niche focuses or speech policies too liberal to maintain at scale provides a home for almost all speech. Some speakers may not have the billing they feel they deserve, but all can reach willing listeners. Even those banned from major platforms have greater reach than they might have before the advent of the internet – to some, this, not overbroad content moderation, is the real danger.

Many proposed amendments, such as last week’s “Protecting Americans from Dangerous Algorithms Act,” which would hold platforms liable for algorithmically processed extreme speech, would result in less speech, not more. Section 230(c)(1) protects a diverse array of websites, from Ravelry, a community platform for knitters, to Armslist, a classified‐​ads section for firearms. Without it, smaller sites like these simply would not be able to operate. While larger platforms might be able to invest in algorithmic filtering or fight meritless lawsuits in court, smaller platforms don’t have the resources to fight off constant claims treating them as the speaker of their users’ speech. Likewise, without Section 230(c)(2), they would be overrun by off‐​topic submissions, spam, and scams. Even if well intentioned, or motivated by legitimate concerns about overbroad moderation, amending Section 230 is likely to make the internet less hospitable to speech.
