Section 230 of the Communications Decency Act is the provision at the heart of this controversy. Passed in 1996, this law makes it possible to host social media and user-generated content without fear of liability. Simply put, you can sue somebody for defaming you on Twitter or Facebook, but you can't sue Twitter or Facebook because they failed to block or delete this content. Without this shield, no company could take on the liability risk of allowing user-generated content. Crucially, the law also provides that companies do not lose this protection when they take steps to moderate content under their own private standards, an activity Congress intended to encourage.
Recently, however, this law has become the target of politicized complaints, with President Trump and former vice president Joe Biden both advocating a greater role for the government in policing online speech.
In March, Cato senior fellow Julian Sanchez organized an all-day conference, "Return of the Gatekeepers: Section 230 and the Future of Online Speech," which brought together experts on law and technology to discuss this new threat and what can be done about it. Nor is this new territory for Cato, which has long sought to defend Section 230.
John Samples, vice president at Cato, has also been directly involved in helping social media companies adopt effective content moderation policies, with the goal of keeping that task firmly in private hands rather than outsourcing it to government censors. In May, Samples was picked as one of 20 notable figures and free speech scholars to join Facebook's new Oversight Board, an independent limited liability company created to hear appeals on content moderation for Facebook and its subsidiary Instagram.
Companies like Facebook and Twitter face a difficult challenge in striking the right balance between providing open access for all points of view and holding true to their own policies against truly objectionable content that would drive customers away. As private companies, they are free to experiment and find that balance, and should be able to do so without legal threats or government coercion. Because their websites are ultimately private property, they can set their own standards of behavior just like any other private club or organization. Section 230 makes that possible, whether the person whose content is being moderated is a hateful internet troll or the president of the United States.