Public health officials should
- communicate that policy decisions by their very nature cannot be made solely on the basis of scientific evidence; they will always involve normative questions and tradeoffs of values;
- communicate that the “science” is rarely so clear that the wise policy decision is self‐recommending and that even when science is clear and decisions seem straightforward, scientific knowledge can change because of new evidence; and
- communicate that the first two points are especially true with the COVID-19 pandemic, given how little we know and how much of the evidence is in flux.
In medical and environmental policy, scientists play prominent roles in decisions. Agencies such as the U.S. Department of Health and Human Services and the Environmental Protection Agency have scientific advisory councils that review the relevant scientific literature and advise policy decisionmakers about pollution exposure standards and pharmaceutical and medical device safety. When the decisions of governmental officials do not follow scientific recommendations, critical news coverage follows. The implication is that “science” is sufficient for policy decisions and that “politics” should not play a role.
The discussion about science and politics is occurring during the COVID-19 pandemic. A recent New Yorker article lauded Iceland’s response to the pandemic because the prime minister deferred to scientists in her decisions: “It was very clear from the beginning that this was something that should be led by experts—by scientific and medical experts.” In the United States, 57 former scientists and public health officials issued a statement calling for a science‐based approach to the pandemic. The signatories said, “Sidelining science has already cost lives, imperiled the safety of our loved ones, compromised our ability to safely reopen our businesses, schools, and places of worship, and endangered the health of our democracy itself.”
But scientific findings, by themselves, are rarely sufficient for individual or policy decisions. Such findings can tell us about the causes of outcomes but nothing more. The question of how we should evaluate those outcomes in our own decisionmaking requires other considerations, such as costs, benefits, and philosophical or religious values. And the relative importance of those considerations will vary across individuals. Finally, aggregating those individual differences into collective policy decisions involves even further nonscientific choices about the relative importance of different individual preferences about outcomes. So even under the best circumstances, policy decisions involve more than science, and in the case of COVID-19, our scientific knowledge is very limited.
Even Science Is Not Just Science
What is science? Science is an ongoing conversation about hypotheses of cause and effect: testing them through experiments and weighing the results against the possibility that random variation alone produced the same outcome.
Researchers can be either too cautious or too cavalier about concluding that the results of inquiry reflect real cause and effect rather than random variation. Confidence that results reflect real cause and effect increases with replication by other experimenters and the magnitude of the result relative to the number of data points under study.
How cavalier or cautious should scientists be about their results? Ironically and importantly, there is no scientific answer to this question; there are only adopted conventions. Scientists are usually reluctant to say a result is “real” rather than normal variation around zero effect unless they are 95 percent confident that the result did not arise simply because random variation happened to produce an outcome that appears to be a “real” effect. But even at 95 percent confidence, 5 percent of the time the observed result arises through sampling variation rather than a “real” effect. And 95 percent confidence is itself a convention.
How confident one should be in hypothesis testing is a value choice and not scientifically determined. Experimental physicists, for example, take the concern for avoiding “false positive” results to the extreme. They use what is referred to as the five‐sigma rule, which allows a result to be considered “real” only if the probability of a false positive result is less than 1-in-3.5 million, which translates to 99.99997 percent confidence. If medical science adhered to such a rule, there would be no accepted results.
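The gap between these two conventions can be made concrete by converting a sigma threshold into a false-positive probability. The sketch below uses only the Python standard library; the 1.645-sigma threshold for one-sided 95 percent confidence is a standard statistical value, not a figure from this article:

```python
import math

def one_sided_false_positive(sigma):
    """Upper-tail probability that a standard normal exceeds `sigma`,
    i.e., the chance a pure-noise result clears the threshold."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Five-sigma rule used in experimental physics:
p_physics = one_sided_false_positive(5.0)      # roughly 2.9e-7, about 1 in 3.5 million

# Conventional 95 percent (one-sided) confidence in medical research:
p_medicine = one_sided_false_positive(1.645)   # roughly 0.05, i.e., 1 in 20
```

The two thresholds differ by five orders of magnitude, which is the quantitative content of the claim that physics' rule would leave medicine with no accepted results.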
Regardless of the false‐positive acceptance rate a researcher chooses, the number of observations dictates our ability to differentiate a small result from no result. In the context of any medical treatment, including vaccines or antiviral medication for COVID-19, our ability to declare the treatment “safe” after clinical trial results depends on the number of subjects in the trial (as well as the representativeness of the participants). Side effects that affect only small percentages of the population will manifest themselves with 95 percent confidence only after the medication is widely used because clinical trials have thousands rather than millions of participants.
Table 1 shows how much larger (in percentage) the harmful effect of a medication or vaccine would have to be in the experimental group relative to the control group to allow us to state with 95 percent confidence that the negative effect is the result of the medication rather than random variation. If scientists or policymakers wanted to ensure that the negative side effects of a vaccine affected only 0.1 percent of those who received it, the trial would require 2.9 million people. While 0.1 percent might seem small, 0.1 percent of the U.S. population is 330,000 people. Thus, a vaccine trial with almost 3 million people that passed a clinical trial test with 95 percent confidence would not preclude the possibility that universal administration of the vaccine would have negative side effects on 330,000 people. Whether that is acceptable is not a scientific question.
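The way detectable effect size depends on trial size can be sketched with two textbook formulas. This is not a reconstruction of Table 1 (whose underlying assumptions are not restated here); the event rates, 80 percent power, and one-sided 95 percent confidence below are illustrative assumptions:

```python
import math

def n_to_see_one_event(p, conf=0.95):
    # Smallest trial size with probability `conf` of observing at least one
    # adverse event whose true rate is p (the "rule of three" when conf = 0.95).
    return math.ceil(math.log(1 - conf) / math.log(1 - p))

def n_per_arm(p_control, p_treat, z_alpha=1.645, z_beta=0.842):
    # Normal-approximation sample size per arm to distinguish two event rates
    # (one-sided test; defaults give 95 percent confidence and 80 percent power).
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p_treat - p_control) ** 2)

# Merely observing a 1-in-1,000 side effect at least once takes about
# 3,000 subjects...
n_observe = n_to_see_one_event(0.001)

# ...but statistically distinguishing a hypothetical 0.2 percent treatment
# rate from a 0.1 percent background rate takes far more people per arm.
n_arm = n_per_arm(0.001, 0.002)
```

The required sample grows with the inverse square of the rate difference, which is why ruling out very small excess side-effect rates pushes trial sizes into the millions.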
Sometimes Science Is Much More Than Science
Scientists occasionally interject values into their recommendations—not in the pervasive, subtle, and unavoidable manner previously described but in obviously avoidable ways that undermine their role as neutral providers of information. A prominent example occurred when 1,300 public health officials signed a May 30, 2020, letter of support for the public protests in the wake of the May 25 death of George Floyd in police custody in Minneapolis. But the same officials had earlier condemned public protests against mandatory business closures.
Why the difference? A New York Times article asked, “Was public health advice in a pandemic dependent on whether people approved of the mass gathering in question?” According to the article, “To many, the answer seemed to be ‘yes.’” Mark Lurie, a professor of epidemiology at Brown University said, “Instinctively, many of us in public health feel a strong desire to act against accumulated generations of racial injustice. But we have to be honest: A few weeks before, we were criticizing protesters for arguing to open up the economy and saying that was dangerous behavior. I am still grappling with that.”
To his credit, Lurie recognized the failure to separate his role as a scientist from his role as a citizen with views about public policy after he took his daughter to a protest early in June. “We felt afterward that the risk we incurred probably exceeded the entire risk in the previous two months,” he said. “We undid some very hard work, and I don’t see how actions like that can help in battling this epidemic, honestly.”
Luckily, it appears little damage was done by this crossing of the boundaries between science and politics. A recent paper using cellphone tracking data shows that cities with protests saw increased social distancing compared to cities that did not have protests—presumably because nonprotesters changed their behavior. And net COVID-19 case growth did not differentially increase in those cities that experienced protests.
Some Scientific Results Lead Easily to Decisions
Though by itself “science” cannot dictate our personal or policy choices, in some cases the information it provides can make those choices rather clear with few additional considerations. If the benefits or harms of a medical decision (such as taking a medicine or vaccine) are large and discontinuous (an abrupt change in outcome with respect to exposure or dose), even with the confidence‐interval and sample‐size qualifications previously described, then “science” leads fairly easily to decisions. If the harm from a medication or vaccine increases abruptly with the dose and the benefits do not decrease abruptly below that dose, then the appropriate dose is below the threshold at which side effects appear.
In the COVID-19 pandemic, the wearing of masks seemed initially to fall into the category of decisions that follow simply and directly from the science. The problem is that new evidence led scientists to change their understanding of viral transmission. At the start of 2020, scientists thought that coronavirus transmission occurred only from people exhibiting symptoms, as does its genetic cousin Severe Acute Respiratory Syndrome (SARS). Thus, general mask wearing was a waste of resources and reduced the supplies available to those dealing with active infections. Hence, the early universal public health advice was not to wear masks.
But evidence accumulated that asymptomatic COVID-19 transmission was real and large; 35–60 percent of infections cause no symptoms. Thus, the advice to stay home if you’re sick may have been insufficient. More aggressive measures, such as ordering healthy people to wear masks, may have been necessary.
The transition from “masks are not required and not helpful” to “masks are required and you are irresponsible if you do not wear one” was not easy, even for scientists. European public health scientists resisted the claims of asymptomatic transmission. As the New York Times reported:
Sweden’s public health agency declared that [the original journal article reporting asymptomatic transmission] had contained major errors. The agency’s website said, unequivocally, that “there is no evidence that people are infectious during the incubation period”—an assertion that would remain online in some form for months. French health officials, too, left no room for debate: “A person is contagious only when symptoms appear,” a government flyer read. “No symptoms = no risk of being contagious.”
Science is always a conversation about current knowledge and new results. And science is by its nature conservative in that it worries greatly about accepting new or unusual results because they may be the result of random error or mistakes. But sometimes that conservatism is wrong. And that appears to have been the case with asymptomatic coronavirus transmission and the utility of generalized mask wearing. Researchers have not conducted trials, but when people wore masks in a seafood plant and on a cruise ship, the proportion of severe cases decreased dramatically, reducing hospitalizations and deaths.
Nonscientists have noticed, however, that the surgeon general of the United States dramatically reversed course. In late February he tweeted, “Seriously people‐ STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk!” By early July he told NBC’s Today, “As we talk about Fourth of July and independence, it’s important to understand that if we all wear these, we will actually have more independence and more freedom because more places will be able to stay open. We’ll have less spread of the disease.”
This reversal, along with the contradictory advice of epidemiologists toward demonstrations, has left many people skeptical of experts and expertise.
Considerations Other Than Scientific Results
Absent an abrupt change in benefits or harms, recommended behaviors do not depend on the science alone; other considerations and their importance are relevant, including economic costs and benefits as well as religious or philosophical values. Policy disagreements are largely about the differing weight that people assign to these other considerations. Science does not tell us how to weigh incrementally increased risk of COVID-19 transmission against other values, such as freedom to operate a business or pursue one’s livelihood. This statement doesn’t refer only to the daily fight between supporters and opponents of President Trump regarding the value of returning to a “normal” economy. John Jenkins, president of the University of Notre Dame, made an argument about the role of values in the context of deciding to reopen the university during the COVID-19 pandemic.
If we gave the first principle [to protect the health of our students, faculty, staff and their loved ones] absolute priority, our decision about reopening would be easy. We would keep everyone away until an effective vaccine was universally available.
However, were we to take that course, we would risk failing to provide the next generation of leaders the education they need and to do the research and scholarship so valuable to our society. How ought these competing risks be weighed? No science, simply as science, can answer that question. It is a moral question in which principles to which we are committed are in tension.
Even under the best of circumstances, “science” can lead easily to policy choices only if the relevant research results are large and discontinuous and if there are no other competing considerations. In all other circumstances, decisions follow from “science” only with the addition of values that allow us to assess the relative importance of outcomes. And because individuals weigh these other considerations differently, collective choices involve conflict about how to aggregate individual differences in the evaluation of those outcomes.
Knowledge about Infectious Respiratory Diseases Is Limited
The use of “science” as a guide to COVID-19 policy is even more complicated because our scientific knowledge about infectious respiratory diseases is limited. The data analytic site FiveThirtyEight put it more bluntly: “Why It’s So Freaking Hard to Make a Good COVID-19 Model.” In the 1918 influenza outbreak, why did the spring wave go away, and why did it come back in the fall? Michael Osterholm, an epidemiologist at the University of Minnesota, says, “We don’t know.” In 2003, the World Health Organization (WHO) feared that SARS would return in a devastating wave that fall, but instead it was extinguished. In 2009, experts worried that the H1N1 flu would be severe, but it was not. “You’ve got to have a lot of humility with these viruses,” Osterholm said. “I know less about viruses than I did 10 years ago.”
Dr. Peter Piot, the director of the London School of Hygiene and Tropical Medicine, is a legend in the battles against Ebola and AIDS. But he misjudged the coronavirus: “I underestimated this one—how fast it would spread. My mistake was to think it was like SARS, which was pretty limited in scope. Or that it was like influenza. But it’s neither.” If he didn’t know how to react to the coronavirus in real time, who would?
On February 20, when the WHO was reporting the existence of 79,748 cases of COVID-19 worldwide, Philip Tetlock’s superforecasters (people with alleged expertise) thought that the probability of more than 200,000 cases of COVID-19 being reported by the WHO only one month later on March 20 was about 3 percent. The actual number that the WHO reported on March 20 was over 266,000.
Not only is our scientific understanding of respiratory viruses limited, but the coronavirus has characteristics that make the use of scientific inferences about it as a guide to policy even more difficult. First, given the prevalence of asymptomatic transmission, stopping transmission by confining only those who are sick is not effective because so many people do not know they are infected. Second, the false negative rate for the polymerase chain reaction active virus tests is about 33 percent, and perhaps more among the asymptomatic. Thus, even among those who are tested, a third of those who are infected are declared not to have the virus and can transmit it to others. Third, active virus tests also generate false positive results because of the presence of genetic fragments from the virus but no actual infection. Taia Wang, a viral immunologist at Stanford University, told the New York Times, “We really need to know, how long does it take the body to clear the virus? How long are people contagious? We don’t know the answer to that.” Such persistence is common with viruses: genetic material from the measles virus can show up in tests six months after the illness, and genetic fragments of Ebola and Zika viruses are known to persist even longer in the body. The implication of these three characteristics of COVID-19 is that figuring out in real time which people to isolate from whom to stop viral transmission is very difficult.
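The consequence of a roughly 33 percent false-negative rate can be quantified with Bayes' rule. The sensitivity of 0.67 mirrors the false-negative rate cited above; the prevalence and specificity figures are hypothetical assumptions chosen for illustration, not numbers from this article:

```python
def p_infected_given_negative(prevalence, sensitivity, specificity):
    # Bayes' rule: probability that a person who tests negative
    # is nonetheless infected.
    neg_and_infected = (1 - sensitivity) * prevalence   # infected, missed by test
    neg_and_healthy = specificity * (1 - prevalence)    # healthy, correctly negative
    return neg_and_infected / (neg_and_infected + neg_and_healthy)

# Sensitivity 0.67 reflects the ~33 percent false-negative rate;
# 5 percent prevalence and 98 percent specificity are illustrative guesses.
risk = p_infected_given_negative(prevalence=0.05, sensitivity=0.67, specificity=0.98)
```

Under these assumptions a negative test still leaves roughly a 1.7 percent chance of infection, which is why a negative result alone cannot certify that someone is safe to skip isolation.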
Even if we could identify all infected individuals in real time and resources existed to follow them and their contacts (and U.S. attempts at contact tracing seem extremely ineffective), many contact tracing exercises would find no infections. During the Middle East Respiratory Syndrome (MERS) outbreak in South Korea in 2015, 89 percent of patients did not appear to transmit the disease. In the COVID-19 pandemic this year, a husband and wife in Illinois both became gravely ill and were hospitalized. Both recovered. State public health officials traced their contacts—372 people, including 195 health care workers. Not a single contact became infected. A new study suggests that the coronavirus arrived in places around the world more than once without starting runaway outbreaks. In these cases, there was little or no transmission, and the virus simply died out.
So the belief that expertise in infectious viruses provides obvious and clear answers for policy is misguided. And that is why we observe so much variation worldwide in the modeling recommendations of epidemiologists. Should decisionmakers rely on the model developed at the University of Pennsylvania or the model developed at the University of Washington? The former suggested that the District of Columbia would need 1,453 ventilators; the latter suggested it would need 107. Neither England nor Sweden initially implemented lockdown orders. But on March 16, 2020, Imperial College London published a model that predicted over 500,000 deaths in the United Kingdom; the United Kingdom embraced a lockdown while Sweden did not. Japan limited testing to only the most severe cases even though it has the oldest population of any country in the world (unless you count Monaco), and the government never forced businesses to close. Yet Japan’s reported COVID-19 mortality rate is low. If science is the answer, which scientists do you listen to when they disagree?
So Why Does Everyone Invoke Science as the Answer?
Perfect science is rare; even when it exists, it rarely translates directly into decisions; and knowledge of infectious disease in general, and of the coronavirus in particular, is strikingly incomplete. Why, then, is science invoked so often as the answer? In the language of economics, the invocation of science is the equilibrium outcome of the interaction of public officials, scientists, and the public.
First, science serves the interests of public officials. Science legitimizes policy decisions and reduces discussion about them. Most people don’t understand science; most people don’t understand statistics; most people don’t understand what scientists do, how they argue, or what the scientific method is. In some ways, the “scientific community” is akin to a modern version of the priesthood.
Scientists wear lab coats instead of vestments, but like clerics, they have the authority that comes with access to knowledge unavailable to laypersons. Insisting that we yield to their judgment can be very useful in policy debate because it elevates some policy preferences relative to others. Instead of having to say that you would like the policy outcome to reflect your preferences rather than someone else’s, you invoke science: “‘Science’ has concluded that my preferences are legitimate and that your preferences are out of bounds.” Robert Pindyck describes this in the context of climate science: “The use of a complex IAM [Integrated Assessment Model] or related model throws a curtain over our lack of knowledge, and creates a veneer of scientific legitimacy that suggests we know more than we do.”
Second, for some scientists, expertise is power. For many, science is simply the investigation of how the world works and nothing else. Stanley W. Trimble, professor of geography emeritus at the University of California, Los Angeles, said, “I learned early on to avoid academic bandwagons of any sort and that true scholarship can scarcely abide political causes.” For others, scientific expertise allows you to lead and others to follow, analogous to the role that priests played for rulers in pre‐modern civilizations. In the words of Roger Pielke, who studies the role of science in public policy, “For those with scientific expertise, it consequently makes perfect sense to wage political battles through science because it necessarily confers to scientists a privileged position in political debate.”
Third, for the public, following expertise is a rational response to the complex division of labor that exists in the modern world. Each of us knows a tiny slice of the world very well and very little about anything else. We all rely on others’ knowledge except for the very few things we have mastered. In addition, when decisions involve important matters such as health, relying on experts can relieve the anxiety associated with important decisions.
Finally, in the current pandemic, favoring “science” has become shorthand for saying, “I worry about COVID-19 more than I do about the economy or freedom or anything else.” It has also become shorthand for saying, “I don’t like any other approach to the pandemic including President Trump’s.” But it seems no one criticized the president of Notre Dame as being anti‐science because he decided to reopen the university even though he wouldn’t have if minimizing the risk of infection had been his only objective.
At its best, science explains relationships between cause and effect: no more and no less. No normative conclusions about individual or collective decisions follow directly from science. Instead costs, benefits, and other values properly enter both individual and collective decisions.
Sadly, science about infectious respiratory diseases, in general, and the coronavirus, in particular, is limited, creating more difficulties in the use of scientific understanding to inform decisions. Leading virologists have been very open about how little we know about the 1918 flu pandemic as well as SARS and MERS and how their initial views about coronavirus transmission were anchored in their understanding of SARS and MERS, which turned out to be incorrect for COVID-19.
For this coronavirus, about 35–60 percent of infections cause no symptoms. The conventional wisdom at the start of 2020 was that this could not possibly be true. And thus, experts advised against masks except for those in obvious infection settings. But now asymptomatic infection has quickly become the new conventional wisdom, and wearing a mask is a wise choice. To scientists this is just good updating given new information, but this abrupt change has caused many nonscientists to give up on the idea of expertise.
In addition to asymptomatic transmission, the test for the virus has large false negative rates. And molecular evidence suggests that the virus arrived more than once in the past year with little or no transmission. The combined result of all the characteristics of COVID-19 is that in real time, identifying whom to isolate to reduce viral transmission is very difficult.
The essential problem with the role of science in public policy is that some scientists, most politicians, and the public want science to do more than it can. Some scientists want to tell others what to do. Expertise is not only information; for some it is power. Most politicians want to hide behind the veil of science so that they do not have to openly discuss why they favor some outcomes over others. It is much easier to say, “The experts made me do it.” And finally, the public and journalists often want “answers” particularly about “safe” and “unsafe” rather than nuanced statements about likely changes in health outcomes given various behaviors or exposures. Science actually can tell us more when the pressure for it to answer all our questions is less.