Twitter has expanded its content moderation policies in an effort to halt the spread of misinformation about Covid‐19, broadening its definition of harm “to address content that goes directly against guidance from authoritative sources of global and local public health information.”
Twitter wants its platform to offer useful information about the virus while excluding harmful misinformation. A noble mission, but because Twitter has either little competence in identifying Coronavirus misinformation or little confidence in its ability to do so, it has chosen to rely on external institutions to guide its content moderation. Twitter both privileges messages from the WHO on the platform “to ensure that when you come to the service for information about Covid‐19, you are met with credible, authoritative content at the top of your search experience” and pledges to remove the “denial of global or local health authority recommendations,” which is understood as a proxy for misinformation.
If this understanding holds true, the policy makes a great deal of sense. It would be costly and difficult for Twitter to maintain its own standard of Coronavirus misinformation, constantly updating it as our understanding of the disease improves. Provided the health authorities’ recommendations are correct, Twitter’s enforcement efforts benefit from the authorities’ legitimacy while their advice improves the quality of Coronavirus information on Twitter.
However, external experts and institutions are still fallible. Their advice may be sound in most instances, but when they get it wrong, Twitter’s reliance on their authority magnifies the damage done by their inadvertent misinformation. If Twitter, acting on an officially endorsed but incorrect understanding of best practices for mitigating the spread of the virus, removes information that later proves correct, both Twitter’s moderation and the official authority will suffer a blow to their legitimacy. These false positives may also cause real harm to people denied the correct advice.
This is not a hypothetical problem. The WHO and many Western authorities and experts have discouraged the use of facemasks by the general population, simultaneously claiming that they do little to prevent the spread of the virus and must be conserved for use by healthcare workers. This inherently contradictory stance, further undermined by the experience of Asian countries where mask‐wearing is the norm, amounts to official misinformation. Whether this assertion is the product of a noble lie or simple misjudgment, it has already done tremendous harm. Because Twitter has pre‐committed itself to the advice of health authorities, official misinformation of this sort puts it in a bind.
If Twitter wants to counter misinformation on its platform, it should remove tweets by public health authorities discouraging general mask use. However, Twitter has tied its efforts to combat Coronavirus misinformation to the very authorities it must now rebuke. Without an independent justification for its misinformation determinations, Twitter’s moderators are relegated to implementing the determinations of external institutions, right or wrong.
Thankfully, despite its promise to remove denials of health authority recommendations, Twitter does not seem to have suppressed tweets calling for general mask wearing. While this restraint is laudable, Twitter still hosts official misinformation about mask‐wearing and purports to remove information at odds with official advice. Twitter’s moderation practices are out of step with its stated policies. While this may be a better state of affairs than if Twitter zealously removed correct information that conflicts with official advice, it renders Twitter’s moderation more opaque, less predictable, and seemingly less legitimate.
Contrast this with the evolution of Twitter’s policy on misinformation by state actors. Governments and government officials have long been treated with kid gloves by content moderators. In mid‐March, Lijian Zhao, Director‐General of the Information Department of the CCP’s Foreign Ministry, repeatedly suggested that Covid‐19 had originated in the United States, tweeting links to conspiracy theories about the source of the virus.
This deliberate disinformation was met with calls for Twitter to remove Zhao’s tweets. Twitter refrained from removing the tweets but posted an update to its misinformation policies, stating: “Official government accounts engaging in conversation about the origins of the virus and global public conversation about potential emergent treatments will be permitted, unless the content contains clear incitement to take a harmful physical action.” While Zhao’s tweets are grossly wrong, they are unlikely to do any immediate harm and, in time, might prove a reputational albatross for Zhao and the CCP.
Whether or not you agree with the decision, Twitter’s implicit appeal to imminence gave this rule an independent legitimacy that reliance on WHO advice lacks. A line drawn between speech likely to cause immediate harm and speech that might cause harm down the road doesn’t rely on an appeal to expert authority. Twitter made a decision on its own with reference to a universally appreciable value and followed up by removing two tweets from Brazilian President Jair Bolsonaro in which he disputed his government’s social distancing guidance. Unlike Zhao’s tweets, Bolsonaro’s advice risked causing immediate harm rather than long‐term international animus, and by actually enforcing the policy against covered behavior, Twitter demonstrated that its policy update was not merely a way to avoid upsetting the CCP.
The difficulty of the situation notwithstanding, what might Twitter and other social media platforms learn from Covid‐19? Like other platforms, Twitter outsourced “truth” and “the facts” to external authorities. Yet when some users advocated wearing masks, contrary to those authorities’ advice, Twitter did not remove their posts. Someone at Twitter decided, its outsourcing notwithstanding, to tolerate speech favoring mask‐wearing. Similarly, Twitter tolerated a falsehood from a Chinese government official because it was unlikely to cause immediate harm. Both decisions to tolerate posts that contradicted expert beliefs came from Twitter itself rather than from external authorities.
Our lesson? External expertise is not enough to legitimate content moderation on social media. Twitter needs to recognize and justify the values that inform its use of expert knowledge. Absent such transparency and justification, Twitter’s moderation will lack the last full measure of legitimacy.