
The Senate Commerce Committee recently held a hearing to discuss Section 230, commonly referred to as the “26 words that created the internet” as we know it today. There was some good discussion during the hearing, but many lawmakers were consumed with the belief that Section 230 endangers children and other users, or that it is the cause of biased moderation and jawboning, or both.

But Section 230 is responsible for neither of those harms, as I explained in my recent policy analysis on Section 230. In fact, repealing or limiting Section 230 would radically reduce both free expression and the online safety tools available to users.

Unfortunately, lawmakers continue to fundamentally misunderstand the importance of Section 230. Sen. Marsha Blackburn used the hearing to announce a proposal to repeal Section 230 within her new TRUMP AMERICA AI Act. This provision is just one in a suite of some of the most speech-destructive policies now receiving serious consideration, and the repeal of Section 230 is the most dangerous among them.

What Is Section 230?

Section 230 gets a bad rap, but few people ever pause to consider what it actually does. Section 230 says that an interactive computer service, such as a website or social media platform, is not to be treated as the publisher or speaker of content provided by another person. In other words, Section 230 says that websites and platforms are not to be held liable or responsible for the words of a third party posting on their services.

The reason for this is clear: if websites were directly responsible for everything posted by others, then no website could afford to allow users the freedom to post content. Comment sections, user reviews, social media and video-sharing platforms, online marketplaces, live-streaming services, communication applications, user-created encyclopedias, and more would all be in jeopardy, because every piece of content a platform hosts is a potential legal risk. For today’s internet, that’s billions of pieces of content every day, and each one is a risk.

Combined with a US legal system that makes it easy to litigate, this liability would mean platforms could not continue to exist in any form resembling what they are today. News organizations could communicate their views to their followers, but they couldn’t allow any engagement or comments. Users could request that a comment or opinion be published, like a letter to the editor, but that would be a tiny fraction of today’s volume. Perhaps the larger platforms could shift to heavily moderated message boards that strictly limit interactions and immediately remove any content reported for nearly any reason by any heckler or troll. Small and new platforms or websites would simply lack the resources to survive such liability. The internet that empowered average users to communicate with one another would disappear.

What Section 230’s Critics Get Wrong 

Despite Section 230’s essential role in creating the internet we enjoy today, one that has vastly enriched and empowered millions of Americans, many policymakers and some of the witnesses repeatedly argued that Section 230 has done great harm to the nation. Some called for drastic reductions in Section 230’s protections, while others, like Senator Blackburn, called for doing away with it entirely. They predicated their viewpoints on three highly flawed arguments: that Section 230 allows companies to spread harmful content; that Section 230 allows companies to be biased in their moderation; and that a platform’s algorithms and design features fall outside its protection.

Lack of Moderation 

A common argument is that platforms have not done enough to moderate away harmful content and therefore should not enjoy the protections of Section 230. This argument sometimes covers categories of illicit content like drug sales or non-consensual intimate imagery. Other times, it covers speech that some may find objectionable but that is not illegal, like gun sales, hate speech, or misinformation.

For speech that is illegal, Section 230 does not remove all liability; it simply places that liability on the speaker of the content. For example, a user may make a defamatory statement that can be definitively proven to be false and the result of actual malice. In addition to the defamed individual using their own platform to rebut the false claims, the person who made the defamatory statement can be held liable. Moreover, most social media companies have content policies to remove various types of illicit content, either because it is unlawful or simply because the company doesn’t want that kind of speech on its platform. But as is the case with crimes in the physical world, criminals act in evasive and adversarial ways to avoid detection and punishment. Platforms are responsive to appropriate law enforcement requests when some illicit harm is being perpetrated by or on their users.

Beyond illegal content, however, there are whole swaths of content that policymakers may wish companies would moderate but that are protected under the First Amendment. When a user shares “hate speech” or “misinformation,” speaks ill of the dead, posts content about perceived vices, or speaks in a multitude of ways that someone else may find offensive, that is protected speech. Some may blame Section 230 for allowing such “harmful” speech to exist on a platform, but the problem there isn’t Section 230. The problem policymakers face is that the First Amendment forbids them from compelling platforms to infringe on others’ expressive rights.

Biased Moderation 

Another argument is that Section 230 has allowed platforms to be biased in their moderation. By providing liability protection for engaging in content moderation, this argument asserts, Section 230 has allowed platforms to consistently bias their moderation in favor of various left-wing viewpoints to the detriment of right-wing viewpoints. In the eyes of those who make this argument, such disparate moderation is a form of censorship.

But this gets censorship and free expression backwards. Content moderation and curation represent a platform’s editorial decisions about what kind of speech it wants to host and how it wants to organize that speech. A bookstore is allowed to choose which books to stock, remove, promote, or restrict because that is its editorial right. If a Christian bookstore chooses not to carry pornographic content, the bookstore has not censored anyone; it has exercised its own expressive rights by determining what speech it will carry. In the same way, while Section 230 protects platforms from the ruinous liability that would arise from moderating user content, the First Amendment protects the right of platforms to moderate and curate content in the first place.

This argument also fails to account for the diverse reality of social media today. As the major social media platforms grew larger, they systematically added more policies. These new policies often did penalize more right-wing speech than left-wing speech, including on issues of great political importance such as COVID or gender identity. But as platforms expanded their policies in ways that inhibited certain viewpoints, holders of those viewpoints began searching for alternative ways to communicate. Thus we saw the purchase of Twitter by Elon Musk and the growth of Rumble and Truth Social as a market reaction, while left-wing users responded to these and other changes by moving to Threads or Bluesky. The point is that Section 230 protects a wide range of content moderation strategies and viewpoints. In fact, the companies most likely to be harmed by limiting Section 230 are the many smaller, right-wing platforms that lack the resources to manage vastly increased moderation and liability costs.

Design vs. Speech 

A final and somewhat overlapping argument is that social media platforms should not receive Section 230 protection for their product design. Under this argument, platform algorithms, autoplay features, and engagement tools such as likes and shares are not protected because, proponents claim, regulating them would hold companies liable not for speech but for their products.

This argument completely misses the point that those features ultimately deliver the speech of other users. Social media algorithms are the tools platforms use to imperfectly curate third-party content. Autoplay, like and share features, and notifications let users see and engage with the content of other users. Furthermore, when opponents of social media design talk about their concerns, they inevitably mix their criticism of the features with criticism of the content being provided. For example, in this Senate hearing, Section 230 critic Matthew Bergman repeatedly spoke about the design features of platforms but at various points couldn’t avoid talking about the specific pieces of content that were so disturbing or harmful.

In other words, critics of Section 230 are upset that social media design features are surfacing potentially harmful content. But if kids were only using social media to watch gardening videos and learn how to do better in school, no one would be upset about these design features. So this criticism of design is really a criticism of potentially problematic content created by other users on the internet, and of a platform’s First Amendment right to curate its platform. Attacking the way platforms curate content is no different from suing a bookstore for stocking books with sexual, violent, or suicidal content without placing them on a shelf high enough to be out of children’s reach.

Removing Section 230 protections for anything that a platform’s algorithms curate or moderate would make it impossible for vast parts of the internet to function, as any effort to personalize a news feed or suggest helpful reviews would create liability. Platforms could never recommend content or help users sort through information in any meaningful way. Platforms might merely offer users an uncurated chronological feed, but that would just mean that trolls and bots posting spammy or borderline abusive content with great frequency would have their content appear in feeds more regularly. Similarly, child-safe settings would likely cease to be offered, because those settings rely on algorithms and design features to limit certain types of content or interactions with children. By making such safety tools a liability, repealing Section 230 would make kids far less safe on the internet.

Section 230 Remains Essential 

Section 230 was the cornerstone that allowed the internet to become a massive marketplace of ideas, products, and services. It remains essential today: without it, companies would stagnate and die under ruinous liability, and the great freedom and power the internet offers ordinary people would disappear with them. Policymakers should understand that proposals to limit or repeal Section 230 would have vast unintended consequences, leaving Americans less prosperous, less safe, and less free.