Next week, on June 24th, the House Energy and Commerce Committee will hold a joint subcommittee hearing about coronavirus misinformation. The hearing is seemingly intended to highlight social media platforms’ role in hosting false user speech about the ongoing pandemic. However, unless the hearing also addresses the impact of misinformation from official sources, Congress will achieve only a limited understanding of how misinformation hindered our nation’s response to COVID-19.
At the onset of the pandemic, prominent platforms provided millions of dollars in free ad credits to the World Health Organization (WHO) and government health agencies. They also gave these organizations top billing within their products, adding prominent links to Centers for Disease Control and Prevention (CDC) resources. While this civic‐minded act was intended to increase the reach of trustworthy health advice, early official advice regarding mask use was dangerously wrong.
On February 29th, U.S. Surgeon General Jerome Adams tweeted: “Seriously people- STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk!”
The CDC also discouraged mask use, writing on Facebook that “CDC does not recommend that people who are well wear facemasks to protect themselves from COVID-19 while traveling.” They also tweeted, “CDC does not currently recommend the use of facemasks to help prevent novel #coronavirus.”
On March 8th, in an interview with 60 Minutes’ Jonathan LaPook, Director of the National Institute of Allergy and Infectious Diseases Dr. Anthony Fauci discouraged the general use of face masks, saying: “The masks are important for someone who is infected to prevent them from infecting someone else, now when you see people, and look at the films in China and South Korea or whatever where everyone is wearing a mask, right now in the United States, people should not be walking around with masks.”
Given that the CDC justified a quarantine of Wuhan evacuees in late January by citing the risk of asymptomatic transmission, the suggestion that only those known to be ill should wear masks made little sense. Nevertheless, YouTube users have viewed the section of the interview addressing mask use more than a million times. Although the page now includes a link to current CDC guidance supportive of mask use, the video’s content doubtlessly misled many.
Whether the result of expert incompetence or a misguided “noble lie” intended to preserve masks for others, the effect was the same: Americans were told by their government not only that they should not buy masks, but that masks didn’t work. Individuals responsibly following the official advice refrained from wearing masks, greatly increasing their chances of transmitting COVID-19 to others. In concert with poor advice from other government officials, such as New York City Mayor Bill de Blasio’s tweet on March 2nd encouraging New Yorkers to attend the cinema, the results were lethal.
Unlike conspiratorial publications like Infowars or Zero Hedge or the musings of armchair epidemiologists on Facebook, official misinformation is likely to be widely believed, making it more dangerous. Its endorsement by government gives it a perceived legitimacy that most sources of misinformation lack. When revealed as erroneous, official misinformation gives cover to conspiracy by eroding public trust in expert advice.
Democrats have criticized platform efforts to limit the spread of misinformation as anemic, condemning firms’ failure to fully enforce their community standards. This push poses two dangers. First, platforms cannot simply choose to enforce their standards more effectively; they review millions of pieces of content per day, often with the aid of sorting algorithms. Mistakes are unavoidable. As such, platforms must make trade‐offs between false positives and false negatives. If they attempt to catch more violative speech, they will inevitably sweep up more innocent speech along with it. Politicians risk silencing their constituents when they put a thumb on this scale.
Second, and perhaps more concerning, most platforms have ostensibly committed to removing coronavirus‐related speech at odds with official health guidance. If the official guidance is wrong, the effective removal of conflicting information leaves little opportunity for the eventual correction of official mistakes.
At a George Washington University Conference on Tuesday, Consumer Protection and Commerce Subcommittee Chair Rep. Jan Schakowsky (D-IL) suggested that our national conversation would improve if platforms were “forced to follow their community standards, and if the enforcement agencies would get to work,” portending a future in which platforms are required to inflexibly apply their stated rules, regardless of the costs. Imagine if, rather than simply paying lip service to the official guidance, platforms had diligently removed calls for general mask wearing because they conflicted with erroneous government advice discouraging mask use. How much longer might it have taken to correct the official line? If #MasksForAll had been removed for contravening official health advice, it would have been harder for tireless advocates of general mask use to share their vital message. We should all appreciate platforms’ restraint in this area.
While the presence of health misinformation on platforms is concerning, bottom‐up misinformation is often unbelievable and frequently removed, and more effective removals are not without trade‐offs. Official misinformation presents a greater threat, both because of its perceived legitimacy and because of the extent to which platforms take their cues from public health officials. If Congress wishes to examine the spread of coronavirus misinformation, before blaming platforms, it must first grapple with the outsize role of official misinformation in delaying the widespread use of face masks.