Deadly misinformation spread across social media long before COVID-19 emerged, but amid the ongoing pandemic attempts to tackle such content are once again in the limelight. These efforts provide an opportunity for classical liberals to emphasize the importance of freedom of association and to prepare for discussions about how private institutions handle misinformation amid a crisis.
Too often we think of the freedom of speech as a freedom that protects speakers from government censorship. And while the freedom to speak is a necessary condition for a functioning liberal society, it is not the only freedom implicated in what people refer to as “the freedom of speech.” The freedom of speech also entails a freedom for publishers and platforms to associate with whomever they want. That The Wall Street Journal is free to reject an op‐ed submission written by the leader of the American Nazi Party is as important a freedom as the freedom of the leader of the American Nazi Party to write the op‐ed in the first place.
The Internet has prompted a revolution unlike anything seen since the invention of the movable type printing press. Billions of people are able not only to express themselves but also to form communities of like‐minded people across national boundaries. Fortunately, the widespread availability of venues for online speech has not been accompanied by obligations on the part of Internet companies to host speech they find repellent or dangerous. The online publishing site Medium, for example, removed a controversial essay by Aaron Ginn, apparently because it did not wish to be associated with it. In the U.S., Internet companies are shielded under Section 230 of the Communications Decency Act from liability for actions associated with removing content.
The freedom of private companies to disassociate from speech that they consider harmful is especially important during the current crisis. Social media companies have implemented a variety of policies aimed at dealing with COVID-19 misinformation. Twitter has expanded its definition of “harm” to include content that contradicts health information provided by global and local authorities. Facebook established its COVID-19 Information Center and committed to removing content that could “contribute to imminent physical harm.” Twitter and Facebook joined Google, YouTube, Reddit, Microsoft, and LinkedIn to issue a statement on COVID-19 misinformation, stating that they are “combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world.”
These policies have affected heads of state, publications, and individuals. Facebook and Twitter removed videos of Brazilian President Jair Bolsonaro falsely claiming that the anti‐malaria drug hydroxychloroquine was an effective remedy. Twitter also removed a tweet posted by Venezuelan President Nicolás Maduro that claimed a homemade brew could be effective against the coronavirus.
In the U.S., publications and pundits have seen their content removed. The Federalist published an article calling for intentional infection gatherings akin to “chickenpox parties,” and Twitter locked The Federalist’s account in response. President Trump’s personal attorney Rudy Giuliani, like President Bolsonaro, promoted the hydroxychloroquine remedy in a tweet quoting a young conservative activist; Twitter removed the tweet. Conservative pundit Laura Ingraham had to delete a similar tweet in order to avoid having her Twitter account suspended.
Social media companies are not the only Internet‐based firms attempting to stop dangerous information from hurting customers. Amazon is working to remove scams associated with the ongoing pandemic, having taken down more than one million products so far.
Social media companies are often relying on other organizations such as government agencies or the World Health Organization as proxies for content moderation and fact‐checking. While there are certainly advantages to such an approach, it is not without risks, as my colleague Will Duffield has explained.
Popular Internet companies ought to be free to take steps to tackle COVID-19 misinformation. The spread of bogus claims about cures can result in death. But at a time when official organizations have reversed recommendations on the wearing of face masks, we should be prepared for erosion of these organizations’ reputations to affect the perceived legitimacy of Internet companies’ content moderation decisions. Amid misguided calls to break up so‐called “Big Tech” and to amend the law that allows social media companies to moderate content without fear of liability, we should be especially wary of such an outcome.
Special thanks to Cato Institute intern Stephanie Reed and Cato Institute Research Associate Rachel Chiu for their research for this post.