COVID Has Brought with It Another Fierce Malady: The Scourge of Motivated Reasoning

February 4, 2021 • Commentary
This article appeared on UK Telegraph on February 4, 2021.

How well can people detach themselves from their personal interests or prior views when evaluating evidence? In the Nineties, the economists Linda Babcock and George Loewenstein set out to find out, running an experiment in which they gave subjects evidence from a real motorcycle-accident legal case.

The economists divided participants into two groups. Half played lawyers for the plaintiff, and half acted as lawyers for the defence. Participants were given a financial incentive to argue their case well and win a favourable financial settlement. Separately, they were offered a monetary reward if they could accurately guess the value of the real-world settlement. In theory, which side people represented in the experiment should have had no bearing on their guesses. But the economists noticed something interesting: the role-playing plaintiff lawyers’ settlement estimates were significantly higher than the defence lawyers’. All had seen the same evidence, but the groups came to different conclusions based on their interests and the view of the case they’d rehearsed.

This is a good example of “motivated reasoning”, a psychological phenomenon that often pollutes policy debates. There are certain things we want to be true given how we think the world works or what’s in our interest. This can lead us to treat evidence that counters our beliefs with extreme scepticism, while embracing dodgy claims that chime with our priors.

Academic studies even show we become more resistant to contrary evidence the more knowledgeable we are. It’s a pernicious form of reasoning, both because it is difficult to catch yourself doing it and because it has never been easier to engage in. With so much information online, there is always some factoid you can find to support your case, even when it is objectively absurd.

The pandemic has seen motivated reasoning flourish, particularly among a small subset of the more radical “lockdown sceptics” in the UK. Lockdown scepticism initially arose as a principled objection to the state’s emergency pandemic powers, or from the belief that the costs of early lockdowns exceeded any benefits.

One can reasonably disagree about the role of government in emergencies. Debate about the very real policy trade-offs that Covid-19 and the response to it have brought is healthy. But a small core of sceptics have gone much further. They have embraced any speculative theory or wishful thinking that downplays the pandemic’s seriousness, because those “truths” assist the anti-lockdown policy cause.

As deaths spiralled last spring, these hardcore sceptics suggested there was a big discrepancy between the number of people dying “with” or “of” Covid-19 (there wasn’t). In April 2020, they latched on to Oxford academics’ modelling suggesting that the UK might already have been close to achieving herd immunity (it wasn’t).

When we experienced the summer reprieve, they declared the pandemic over, arguing a second wave was a figment of Boris Johnson’s imagination (it wasn’t). As cases picked up, they claimed the uptick was being driven by “false positive” test results (it wasn’t). Ignoring evidence of delays between infections and deaths, they then asked “where are the deaths?” (deaths then rose dramatically). Now, some of them claim the high excess deaths we’ve seen recently are primarily caused by lockdowns (they are not).

The concern about “false positives” highlights the motivated reasoning best. It’s trivially true that if nobody had Covid-19 and we tested people for the disease, 100pc of any positives would be false positives. Using similar logic, former Pfizer employee Mike Yeadon suggested that if the true prevalence of the disease were similarly close to zero, even a test with 99pc specificity (one that correctly told 99 in 100 uninfected people they were negative) would mean 90pc or more of any positives we found being “false”. There’s nothing wrong with that hypothetical. But it was completely irrelevant to our situation. Office for National Statistics data show the diagnostic tests are more accurate than that, correctly identifying “negatives” at least 99.9pc of the time, not 99pc. That’s corroborated by evidence from New South Wales, Australia, where mass testing of a population in which little Covid-19 was suspected found tiny numbers of positive cases (0.069pc).
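A back-of-the-envelope Bayes calculation makes the arithmetic concrete. The sketch below is illustrative rather than taken from the article: the 80pc sensitivity figure is an assumed value, while the 99pc and 99.9pc specificities and the near-zero versus meaningful prevalence scenarios mirror the numbers discussed above.

```python
def false_positive_share(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are false positives (Bayes' rule).

    prevalence:  share of the tested population actually infected
    sensitivity: P(test positive | infected) -- assumed illustrative value below
    specificity: P(test negative | not infected)
    """
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

# Yeadon's hypothetical: near-zero prevalence, 99pc specificity
print(false_positive_share(0.001, 0.80, 0.990))  # ~0.93: most positives false

# Closer to last autumn's reality: meaningful prevalence, 99.9pc specificity
print(false_positive_share(0.02, 0.80, 0.999))   # ~0.06: false positives rare
```

With near-zero prevalence and 99pc specificity, most positives would indeed be false, which is the hypothetical. With the higher specificity the ONS data suggest and the prevalence actually seen as the disease took off, false positives become a marginal concern.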

As the disease took off here in late autumn, positive test results increased at a faster rate than testing. Illness and death, of course, were evident everywhere, showing that the prevalence of the disease was not low. So false positives were and are a non-issue. Yet this small minority of extreme sceptics embraced the hypothetical as truth, pushing the 90pc “false positive” narrative because it was a convenient factoid.

In his excellent book, How to Make the World Add Up, economist Tim Harford sets out 10 lessons to help us better understand new statistics or data. His very first lesson is to consider how a new fact makes you feel. The ease with which they glide from one factoid or theory to the next suggests that some pandemic sceptics continually feel “vindication”. This alone should ring alarm bells.

In a free society, the best we can do is have others supply the explanations and context for statistics that Harford’s book teaches us to be able to provide for ourselves.

Some people, unfortunately, never want to accept that they got it wrong. When confronted with their past statements, they claim they were “just asking questions” or that republishing their own words while explaining their errors amounts to attempts to “silence” them.

But quoting people’s words back to them is a healthy feedback mechanism for error correction in a free society — the very essence of free speech. The claim that taking on dodgy statistics and theories is evidence of “silencing” people is, again, motivated reasoning.

About the Author
Ryan Bourne

R. Evan Scharf Chair for the Public Understanding of Economics