Misinformation is widely viewed as one of the most serious challenges facing modern societies. Nearly every present-day issue now involves claims of misinformation or of the related phenomena of “disinformation” and “malinformation.” As a result, a host of actors have dedicated significant resources to the problem of incorrect or misleading information, especially online.

Such efforts, however, have raised serious issues that this paper will address. The key findings are as follows:

  1. Misinformation is poorly defined and commonly used to refer to claims that are not objectively false but subjectively considered misleading.
  2. The prevalence and impact of online misinformation are vastly misunderstood and overstated. Meaningful misinformation still makes up a relatively small share of online content and is concentrated in communities already predisposed to believe it.
  3. The panic over technology-powered misinformation is little different from moral panics throughout history in which elite institutions and interest groups feared giving greater expression to average people.
  4. Rather than top-down approaches to these contentious topics, tech companies may be better off embracing greater user control, more decentralized moderation, and tools that prioritize intellectual humility.
  5. Americans should renew their belief in free expression as the most powerful tool to discover truth, debate hard topics, and counter bad speech with good speech.

Introduction

Misinformation is persistently mentioned as one of the major threats in the world today. It was Dictionary.com’s word of the year in 2018, and in 2024 the World Economic Forum’s annual Global Risks Report found misinformation powered by artificial intelligence (AI) to be the most severe threat facing the world over the next two years—greater than even active wars or threats of war, climate and weather events, or economic volatility. Thousands of books and research papers are published every year discussing the challenge of misinformation. In a 2023 Pew Research Center poll, 55 percent of Americans felt that the federal government “should take steps to restrict false information online, even if it limits freedom of information,” an increase of 16 percentage points in just five years.1 The discussion around misinformation is truly ubiquitous.

And the concerns about misinformation, disinformation, and malinformation (MDM) extend to nearly every domain of human knowledge.2 Is misinformation regarding climate change stopping proper policies? To what degree is misinformation or disinformation responsible for ethnic and racial conflict? How did MDM affect the health and well-being of individuals during the COVID-19 pandemic? What role does disinformation play in debates over foreign policy decisions? Is MDM undermining elections and democracy itself? How does the average person know what is true and false in their daily life?

It is notable that the perceived crisis posed by misinformation first spiked after the June 2016 Brexit referendum and the November 2016 election of Donald Trump, times when many people challenged the conventional wisdom. The use of the term “misinformation” in English-language books tripled from 2015 to 2022 after two centuries of slow, gradual increases and decreases.3 Academic articles on misinformation spiked in the past fifteen years, with one study finding around 100 articles published per year across three major databases prior to 2011 but around 4,000 articles in 2021 alone.4 A similar review found that prior to 2017, there were only 73 academic articles available on Google Scholar with “fake news” in the title; from 2017 to 2019, there were 2,210.5 Google searches for “misinformation” spiked and remained high starting in February 2020 as the COVID-19 pandemic spread across the world.6

It should be unsurprising, then, that the growth in research and interest regarding misinformation has been accompanied by a similar growth in organizations focused on combatting it. Within academia, many universities set up significant institutes and projects to research misinformation, notably the University of Washington’s Center for an Informed Public, Brown University’s Information Futures Lab, Harvard’s Misinformation Review, and the now-closed Stanford Internet Observatory. Think tanks and civil society organizations coordinated responses, elevated the problem to policymakers, and proposed solutions.

In addition, fact-checking emerged as a subspecialty of journalism. While all good journalists engage in fact-checking to ensure accuracy, convey complex stories, and build trust with their readers, fact-checkers emerged as a specific kind of journalist dedicated exclusively to acting as an arbiter of truth. These new experts were embraced, funded, or hired by academia, legacy media, and social media, while governments recommended or even mandated the use of fact-checkers.7 And of course, governments themselves dedicated significant portions of their power and budgets, including security, health, communications, and various other agencies, to countering the presumed scourge of misinformation.

Many experts refer to the threat posed by misinformation in existential terms. The Global Risks Report, noted above, describes the threat of AI-powered misinformation and disinformation as “rapidly accelerating,” adding that it “could seriously destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes.”8 The term “infodemic” has been popularized, referring to the wide spread of false or misleading information that results in harmful outcomes around the world, especially during conflicts, disasters, or other emergencies.9 Similarly, there has been wide usage of terms such as “post-truth society” or “epistemological crisis.”10 Even when not described in civilization-threatening terms, misinformation is still commonly considered a harm that can be objectively measured and addressed.

But despite the weighty and scientific language used to describe misinformation, various parts of society commonly use these terms in ways that indicate a lack of definitional precision or scientific rigor. It is to this difficulty of understanding and defining misinformation and its affiliated terms that this paper will turn first.

What Is Misinformation?

Misinformation and its related terms are defined in a variety of ways by a variety of actors. Governments and international organizations frequently define the different elements of MDM as part of their efforts to counter or stop it. Table 1 shows some of the ways that such authorities have defined these terms.

Many government and international bodies explicitly recognize that there are significant differences among the MDM definitions. This variance is similarly reflected in academic research on misinformation, as shown in Table 2.

Most definitions generally distinguish misinformation from disinformation by saying that disinformation includes an intention to deceive or cause harm. That is relatively uncontroversial and straightforward in theory, though determining when false content was meant to deceive is not always so simple in practice.

Take for example the prominent counter-disinformation dashboard Hamilton 68, which claimed in 2017 to have found a network of over 600 Russian-linked Twitter accounts connected to influence operations in the United States. These accounts were allegedly Russian bots, trolls, agents, or accounts heavily influenced by Russian messaging, and they were supposedly spreading that messaging and attempting to shape American political and social debates on Twitter. The dashboard was run by and affiliated with major national security leaders and experts and was cited in hundreds of major news stories. However, when Twitter employees investigated Hamilton 68’s claims, the accounts in question were “neither strongly Russian nor strongly bots.”11 Twitter’s head of trust and safety, Yoel Roth, wanted to challenge Hamilton 68, saying that “real people need to know they’ve been unilaterally labeled Russian stooges without evidence or recourse.”12 While Twitter never ended up confronting Hamilton 68, the episode shows how a major disinformation research program inadvertently showcased the challenge of effectively differentiating among foreign disinformation, domestic misinformation, and simply disfavored views.13

Perhaps the most central challenge in defining misinformation and disinformation, though, is whether to cover out-of-context, disputed, incomplete, or otherwise misleading information that is not demonstrably false. The veracity of information might be thought to lie on a spectrum. On one end is information that is provably true because of scientific experimentation, sufficient evidence, and valid, well-caveated logical arguments. On the other end is provably false information, where the evidence, data, and logic can show anyone why a statement is false. In between these two extremes lies a great deal of information that is neither demonstrably false nor demonstrably true and about which people disagree. Such information may present an incomplete set of data or facts, stripped of proper context, as the truth. It may be framed by narratives that present it in a certain light. It may be partially right and partially wrong. It may not be provable or disprovable at all, either because the issue is still developing or because it is a matter of opinion. Some experts would also include hyperpartisan news, rumors, clickbait, or even parody and satire as potentially misleading misinformation, in ways that run significantly counter to long-established norms of free expression.14

Some definitions of misinformation and disinformation limit themselves to provably or demonstrably false and inaccurate information, but many others include the spectrum of misleading but not false information. Indeed, this seems to be the most common way most people understand and use the term. Including the spectrum of misleading information within a definition of misinformation, however, dramatically increases both the quantity and the types of information that can be considered misinformation. Under the narrow definition of misinformation, we are dealing only with statements we can prove false or true—the Earth is flat, COVID-19 vaccines have microchips in them, over 50 percent of those who get COVID-19 will need to be hospitalized, supporters of Donald Trump did unlawfully enter the Capitol on January 6, a would-be assassin did shoot at and injure Trump, and so on. Once the definition is broadened, however, nearly every opinion, statement, viewpoint, and piece of reporting may fall under the MDM umbrella. Just take the following statements about COVID-19:

  • COVID-19 started in a lab in Wuhan.
  • New study finds that COVID-19 vaccines provide little long-term protection.
  • COVID-19 vaccines present a higher-than-necessary risk of myocarditis, especially in young men.
  • I had a bad reaction to the COVID-19 vaccine and want to warn others before they get it.
  • Podcaster takes ivermectin, horse dewormer, to treat COVID-19.
  • COVID-19 justified the prolonged shutdown of schools.
  • He got the COVID-19 vaccine then died 48 hours later—coincidence?
  • I got COVID-19 and it was no worse than the flu.
  • Natural immunity is more [or less] valuable than a vaccine.
  • Vulnerable teachers and students are at risk if schools reopen.

Each of these statements has been or could be considered misinformation for being misleading but not objectively false. They present policy preferences, opinions, values, personal experiences, unverifiable claims, incomplete information, an issue framed from only one perspective, a lack of context, and so on. Regardless of how you view each of these claims, they all suffer from the same set of imperfections that is common across all discussions and debates on every issue. Nearly every op-ed, news article, blog, social media post, in-person discussion, or other form of communication involves some of these misleading elements. Under a broad definition of misinformation, most human communication could be classified as misinformation. Inevitably, humans misspeak, selectively communicate their information, or choose to frame it in a way that benefits their argument. Scholars and journalists must choose how best to reach an audience with facts; policymakers often selectively choose the facts that support their position. On an average day, we may create or encounter rather benign misinformation by misstating the time or date, accidentally giving wrong directions to a coffee shop, or confusing the names of who was at a party.

Unlike verifiable facts, misleading information is largely in the eye of the beholder. Do you believe that a given statement or report left out important context or facts or overemphasized certain facts? Then to you, it is misleading. But there are an unlimited number of facts and pieces of context that could be brought to bear on each topic and an equally unlimited number of ways to frame, prioritize, and weight those pieces of context. The whole point of discussion, debate, journalism, and critical thinking is to identify which pieces of information and context are most relevant. So, when we find ourselves arguing over misleading content, we are sparring about what details are the most important, what makes a compelling argument, what pieces of context should be omitted as unimportant or irrelevant—in other words, we disagree about vague, subjective, and value-loaded terms.

This is not to say that there is no difference in quality between media organizations or thinkers or that we are unable to distinguish between entirely misleading content and content that omits minor or superfluous details. The core issue is that, as an area of law and study, adopting the broader definition fundamentally undermines the usefulness of misinformation as a term. When political leaders or researchers decry the spread of misinformation and claim to be able to scientifically identify it through its “DNA” or “fingerprints” and show how impactful it is, they are usually worrying about inherently subjective topics that are incompatible with objective study.15 They are often decrying political or social stances with which they disagree while presenting their own views or research as definitive and scientific.

A final element that some definitions make explicit is whether misinformation concerns only truth and falsity or whether the information must also cause harm. Most definitions of disinformation and malinformation explicitly state that such information is used to cause harm. This makes sense: disinformation, as mentioned earlier, is purposely false information designed to accomplish some malicious purpose. Similarly, definitions of malinformation are often distinguished from misinformation by noting that malinformation is not clearly false and may even be true but is used for malicious reasons. The definitions of misinformation, on the other hand, are not so uniform about whether it causes harm.

Regardless of whether the definition explicitly includes a requirement that information result in harm, the focus on harm in some definitions reflects the reality that the concern over misinformation is fundamentally driven by the concern over the harmful impact it may have. The reason that governments, NGOs, academics, activists, fact-checkers, and others are so concerned with the various types of MDM is not merely that some information may be false or misleading, but that they believe it is often harmful. It is worth taking a closer look, then, at exactly what the impact of misinformation is.

The Impact of Misinformation

A variety of beliefs and assertions circulate regarding the power of misinformation to significantly shape the actions and beliefs of many in society.

Perhaps the most prominent assertion of misinformation’s power is the contention that Russian disinformation handed the presidency to Donald Trump in 2016. If true, it means that another nation could spend a relatively small sum of money to create and amplify false, misleading, or otherwise foreign-aligned information and change the trajectory of a democratic country. Other assertions of the power of misinformation include claims that COVID-19 misinformation is widespread on social media and leads large numbers of people to harm themselves by doing things like drinking bleach. Another account of successful misinformation is the #Pizzagate conspiracy, which led an armed man to show up at a Washington, DC, pizza shop with loose ties to Democratic politicians because he believed online rumors that the shop was trafficking children for sex.

But these narratives assert and rely on several assumptions: that misinformation is ubiquitous and viral online, that people are gullible, and that misinformation is powerful enough to change people’s behavior and make them do what they otherwise would not have done. These assumptions are themselves the topic of significant research, aptly summarized in “Misinformation on Misinformation: Conceptual and Methodological Challenges” by researchers Sacha Altay, Manon Berriche, and Alberto Acerbi. So, with their work as a starting point, it is worth investigating each of these assumptions.

Misinformation’s Unique Online Ubiquity and Virality

The first common assumption about misinformation is that social media in particular and the internet more generally are uniquely fertile environments for the spread of misinformation. The argument goes that since the barriers to posting speech online are so low and it is so easy to access a whole range of content, it is very easy for misinformation to spread and proliferate. But these arguments fall short for several reasons.

First, while social media is convenient to study due to its concentrated data, it is not necessarily representative of the general population or of where misinformation spreads. Traditional media, for example, can just as easily spread misinformation, but it is harder to study. Yellow and tabloid journalism are known for their sensational and exaggerated claims: historically, yellow journalism was blamed for the push toward the Spanish-American War, and modern tabloids use similarly sensational and misleading claims, often regarding celebrities.16

Second, the evidence indicates that individuals spend relatively little of their time online looking for news and do not generally turn to social media as a news source. Instead, the average TV and online media diet is heavily weighted toward non-news media such as entertainment or social connection. But because of the sheer number of online users and pieces of content, even if a very small portion of media consumption is misinformation—by some estimates, fake news makes up only about 0.15 percent of the average American’s media diet, and its consumption is concentrated among a relatively small share of users—the absolute number of online interactions with misinformation is bound to be quite large. Such research suggests that the online media environment most users engage with is not filled with misinformation or false news sites, and it highlights instead the counterintuitively limited reach of misinformation online.17

A related assumption is that true news and information is overwhelmed by the speed with which falsehoods spread. The argument goes that false information is more exciting and more likely to go viral than the truth, which might be less sensational, highly caveated, and less controversial. This assumption largely flows from an influential paper that didn’t study true and false information broadly but rather looked at “contested news” that had been subjectively fact-checked as either true or false.18 The authors of the study acknowledged the narrow subset of content they were examining and, when they looked more broadly, found that true news regularly goes incredibly viral. Other studies have found that science-based evidence and fact-checks were often reshared more than mainstream news and that hyperpartisan news did not spread any faster than mainstream news.19

People Easily Believe What They Read Online, Resulting in Many Holding False Beliefs

The prior assumptions about misinformation—that it is more likely to occur in an online environment and to spread more rapidly there—would be even more troubling if people were in fact likely to believe whatever they see online. Yet the assumption that people are so gullible does not fit with the evidence. For example, there is a common belief that if someone engages with a piece of information, they must believe it’s true. But this isn’t necessarily the case, and it presents a far too simple view of human behavior. People may engage with misinformation to “socialize, to express skepticism, outrage or anger, to signal group membership, or simply to have a good laugh.”20

Evidence indicates that people are actually fairly skeptical of what they see online and aren’t “passive receptacles of information.”21 Misinformation is not like a virus that involuntarily infects someone; rather, individuals engage with information in various ways. And since trust in the media, especially social media, is low, people use techniques to figure out what is true and false, including looking for different or original sources or otherwise doing their own research into a topic. Furthermore, research shows that those who believe in the gullibility of others (but not themselves) are most likely to view misinformation as a major threat. In that sense, this assumption of the gullibility of others is an attempt to rationalize the fears and concerns of the dominant misinformation narrative.22

A related assumption is that many people have come to believe large amounts of misinformation, and various studies have supposedly confirmed this. However, these studies are generally predicated on some weak indicator of belief. For example, some studies treat the sharing of misinformation on social media as a metric of belief, but further studies reveal that these shares often reflect disbelief in the misinformation rather than “tak[ing] it at face value.”23 Other studies point out that sharing misinformation may indicate not genuine belief but so-called partisan cheerleading.

When studies look at beliefs in misinformation and not just the sharing of it, the certainty of those beliefs appears to be very weak. Such beliefs may be little more than a misperception or a blind guess from an uninformed person. One study reviewed 180 surveys on misinformation and found that over 90 percent of them lacked a clear “I don’t know” or “Not sure” option, pushing people to guess about a common misinformation topic about which they had no serious knowledge.24 And for those who express strong belief in misinformation, research indicates that these beliefs are not deeply or consistently held, suggesting a need for more rigorous study and polling techniques to identify truly held beliefs in misinformation.25

Misinformation Changes People’s Behavior

All the prior assumptions, however, lead up to the ultimate assertion that many people are actually changing their behavior and acting in harmful ways because of their belief in misinformation. This, after all, is the misinformation problem that leads to calls for policy change.

But this key assertion is flawed for at least two major reasons. First, and as noted above, those who believe in misinformation don’t necessarily do so because they saw some new content and were easily misled, but because they already believed in that misinformation or were already predisposed to believe it—a phenomenon also known as congenial misinformation. For example, imagine someone already has a very negative view of a political candidate. When they see a negative claim or assertion about that candidate that fits with what they already think of the candidate, they may just assume the claim is true—so-called “too good to check.” Or consider avid conspiracy theorists, who are already incredibly distrustful of the government. When faced with a major event such as the COVID-19 pandemic, these conspiracy theorists must decide whether to believe official government narratives or to embrace all sorts of potentially dubious claims and theories. It should come as no surprise that such individuals buy into conspiracy theories, not merely because they read about them online, but because they are already predisposed to disbelieve the claims made by authorities. So, in this sense, the misinformation may be believed, but it has not significantly changed the beliefs of the person. Such misinformation is also concentrated within populations that already have strong opinions, further narrowing its impact.26

Second, misinformation, even when it does change beliefs, does not clearly cause a significant change in behavior. Take for example the Mandela effect, the phenomenon of widely shared false memories, named after the false belief that South African leader Nelson Mandela died in prison. It is also common for people to hold false memories of films or books, even iconic ones—such as the exact wording of Darth Vader’s revelation to Luke Skywalker (“No, I am your father”) or the spelling of the popular Berenstain Bears children’s book series.27 These are false beliefs, but they rarely cause significant changes in behavior. Or, to return to the conspiracy theorists, the 9/11 truthers believe that the US government was behind the 9/11 attacks in order to justify the expansion of government surveillance powers and engage in wars that would enrich defense contractors. Many such truthers publicly discussed their theory online and even hosted public meetings and conferences—despite believing that their government would go to any length (including killing thousands of Americans) to advance its clandestine agenda.28

The research shows that “in the real world, it is difficult to measure how much attitude change misinformation causes, and it is a daunting task to assess its impact on people’s behavior.”29 Biases, problematic responses, and imperfect or flawed methodologies all make it nearly impossible to show that misinformation is the cause of widespread harmful actions. Whether it is social media algorithms or political campaigns using advertising to mobilize activists, donations, or votes, the research is clear that generating real impact is very difficult. Belief in the power and impact of misinformation is instead predicated on the previously described view that misinformation spreads like a virus in an infodemic or like a pathogen injected into our political bloodstream. However, it’s clear from decades of data that individuals are not mindlessly infected by misinformation but rather engage with it in complex ways, including research, humor, skepticism, and belief.30

In sum, the dominant narrative around misinformation in the internet age is that it is uniquely widespread, pernicious, deceptive, and powerful. Yet these assertions are largely inaccurate and have contributed to beliefs and policies that are eerily reminiscent of historical restrictions on speech.

Elite Panics over Misinformation, Past and Present

If so much of what people believe about misinformation is wrong, how did those beliefs become so widely accepted? Many politicians, academics, journalists, and other thought leaders consider misinformation one of the greatest threats to modern society, and yet this view is based on vague definitions and poorly backed assumptions. The modern fear of bad information is nothing new, though, and is best understood as a continuation of an age-old, essential human tension between egalitarian freedom and elite authority.

Misinformation Throughout History

In Free Speech: A History from Socrates to Social Media, free speech advocate Jacob Mchangama traces this struggle, beginning with conflicting views of free expression in the ancient cities of Athens and Rome. On one hand, Athens embraced concepts of free expression that empowered all male citizens in both political and broader social speech. Pericles called the Athenian system a “democracy because power is in the hands not of a minority but of the whole people.”31 While these Athenian commitments were not individual rights in our modern sense, they were radically egalitarian for their time.

On the other hand, we have the ancient Roman approach to expression. Unlike in Athens, Roman citizens did not have a right to speak in the assemblies, where speech was limited to political elites. While certain political rights were granted to Roman citizens, these rights lived in the shadow of the greater rights claimed by Roman elites. Members of the Roman Senate, which dominated Rome’s republican politics and society, were free to speak as they saw fit and did indeed use such speech ruthlessly; for citizens, speaking out of place and against their betters was condemned. This difference between Rome and Athens was described by one of Rome’s most famous statesmen, Cicero, who wrote:

That ancient country [Athens], which once flourished with riches, and power, and glory, fell owing to that one evil, the immoderate liberty and licentiousness of the popular assemblies. When inexperienced men, ignorant and uninstructed in any description of business whatever, took their seats in the theatre, then they undertook inexpedient wars; then they appointed seditious men to the government of the republic; then they banished from the city the citizens who had deserved best of the state.32

As Mchangama notes, the conflict between egalitarian and elitist conceptions of expression “has never fully been resolved” and echoes through history and into our modern debates over expression.33 Indeed, these conflicts are examples of “elite panics”—the response of leading institutions during periods of social upheaval in which these institutions, fearing a panic spreading within the broader society and the resulting civil disorder, justify calls for greater command and control over the general population.34 We can also see this phenomenon of elites and powerful interests responding to significant technological change by warning of and spurring moral panics within society in order to censor or control disruptive ideas or groups.35 New technologies and forms of media that expand the ability of people to speak and consume information inevitably challenge the beliefs of certain groups, who are quick to claim that these technologies are a threat to our society.

The threat of misinformation has been called different things throughout history: ignorance, heresy, seditious libel, counterrevolutionary sedition, propaganda, malicious gossip, and many other dangerous-sounding synonyms. And whether it was Plato’s philosopher-king, the Roman Senate, the Catholic Church, monarchs, aristocrats, enlightened thinkers, colonial legislatures, or revolutionary movements, history is littered with elites who worried about “misinformation” upsetting their power. Today, this narrative is usually propagated by politicians, journalists, academics, those in the tech industry, and other educated, powerful elite groups. Based on the history of expression, this should come as no surprise.

The panic today over misinformation, largely found online, is thus the ideological successor to historical fears: that the rapid spread of information would cause “vast injury,” that allowing people to debate social and political issues and think for themselves would bring a loss of “authority and decency,” that a “misguided people” would be “perswaded to such a horrid Rebellion,” that new technologies would be misused for “pernicious” reasons, and that “seditious men” would be appointed “to the government of the republic.”36 Expression is a powerful tool for change, and such massive changes are rarely without discomfort and unrest. When the democratizing force of new technologies takes that power out of the hands of the current elites, it becomes a tool of the people to determine how to think and govern themselves.

The Conflict over Misinformation Today

Of course, this does not mean there are no real concerns amid the creation and rapid dominance of the internet, social media, and AI. Polls indicate a general sense among many Americans that the truth is hard to establish and that misinformation is a big problem. A Kaiser Family Foundation poll found that about 80 percent of Americans believe the spread of false information is a “major problem.”37 Looking at significant issues facing our society, the poll found that 32 percent of Americans were always or mostly uncertain about the accuracy of information concerning the conflict in Gaza and Israel; the same was true for 31 percent of those polled about the accuracy of information on the upcoming 2024 election, and for 27 percent of those polled about COVID-19 issues.38 This is backed up by similar polling from Free Press that found that almost 80 percent of Americans “are concerned that the information they see online is fake, false or a deliberate attempt to confuse people.”39 This poll also found that 47 percent of Americans “often or sometimes encounter news stories they believe contain false information.”40 An AP poll found that 53 percent of Americans are “extremely or very concerned about news organizations reporting inaccurate information.”41 Misinformation, as well as the concern about it, can breed uncertainty and broad skepticism that may contribute to a loss of trust in institutions, which is at or near all-time lows according to Gallup polling.42

On top of these broad worries about misinformation are bad actors that are purposefully trying to spread disinformation and cause harm. There are online trolls who for money, personal satisfaction, politics, or other reasons simply want to spread information they know to be false and harmful. Similarly, there are bot networks that are frequently used for fraudulent reasons. But perhaps the most commonly discussed bad actors are the hostile nations that seek to spread discord and influence our politics.43 A great deal has been made of Russian efforts to influence the 2016 US elections, with accusations centering on claims that the hacking of Trump’s adversaries and Russian social media ads and posts helped elect President Trump.44 And it is true that these efforts continue, with two Russia Today employees being indicted in 2024 for funneling almost $10 million into right-wing media.45 On the other side of the political aisle, Iranian operatives were charged with hacking the Trump campaign in an attempt to “weaponize” the stolen campaign material by feeding it to his political opponents.46

But while people may be sincerely concerned and there are certainly real cases of MDM being spread, the threat is overstated, in both abstract (as discussed earlier) and practical terms. To return to the foreign disinformation operations, Russia’s 2016 efforts were ultimately minimally impactful. By various estimates, Russia is suspected to have spent around $150,000 through various Facebook channels in the lead-up to the 2016 election.47 Facebook’s investigation into the Russian operations found that 126 million people may have come into contact with Russian posts or ads, and an estimated 32 million Twitter users may have seen Russian propaganda.48 But such statistics overstate the actual influence of these operations. Research into the Russian operations on Twitter has found that the Russian-backed content was shown to a small, heavily concentrated group of users, most of them strongly partisan Republicans, and—in keeping with the broader research discussed earlier finding that contact with misinformation does not clearly lead to changes in behavior—there is “no evidence” that those who were exposed to Russian influence operations changed their “attitudes, polarization, or voting behavior.”49 And compared with the approximately $81 million the Hillary Clinton and Trump campaigns spent on Facebook ads, roughly $150,000 is a mere drop in the bucket, amounting to less than 0.2 percent of that figure. Similarly, in 2024, the Kamala Harris and Trump campaigns spent $130 million on Facebook and Instagram ads in the three months leading up to the election, itself a small percentage of the estimated $10.5 billion spent on all political ads that year.50 And as has consistently happened throughout history, the concerns and attention paid to the Russian information narratives were driven by intellectual and political elites.

The supposed threat of misinformation led to significant counter-misinformation efforts. One cautious analysis found that the US government spent $267 million on misinformation-focused grants between fiscal year (FY) 2021 and FY 2024.51 Other grants targeted related terms such as disinformation, fake news, and infodemic. Looking more closely at this universe of misinformation-related grants, we can see significant and troubling examples of the government’s influence or interference in how we determine what is true and what is false, on topics ranging from important social speech to niche interests.

  • The National Science Foundation (NSF) gave Meedan, a US nonprofit, about $5.75 million for its Co-Insights project to develop tools to counter “common misinformation narratives,” including “fearmongering and anti-Black narratives; undermining trust in mainstream media; glorifying vigilantism; and weakening political participation.” The NSF and Meedan also discussed how Co-Insights’ tools could be used for content moderation of misinformation by other tech companies.52
  • The NSF also gave George Washington University a grant to study how “populist politicians distorted COVID-19 pandemic health communication,” leading to various harms.53 One of the academics who conducted the study concluded that “it’s obvious that governments should allow the experts to have the main say in how society should respond to [a] public health crisis.”54
  • Another NSF grant was given to Kent State University to study purveyors of misinformation that switched “between spreading medical misinformation to spreading political misinformation.” This study would use a machine-learning tool to identify and label those who were “potential agents of disinformation.”55
  • The Department of Health and Human Services gave a grant to Georgia Tech to study the nexus between COVID-19 misinformation, anti-Asian and anti-Black hate speech, and violence. Partnering with the Anti-Defamation League, the study examined not just violence but also “online aggression” and “violence-provoking” misinformation.56
  • The Department of Commerce funded a program to educate consumers about “the sustainable management of Bering Sea Crab Fisheries.… It will combat misinformation that negatively impacts public perception of crabbing and the commercial fishing industry as a whole.”57

To be clear, private organizations should be free to research these topics if they choose. Government funding specifically focused on misinformation, though, will inevitably advance a biased approach that often seeks to discredit ideologically opposed Americans or even support the silencing of their speech.

Beyond specific government grants, broader efforts to counter misinformation are also often counterproductive or harmful. The COVID-19 pandemic was the source of many pieces of misinformation, but as described earlier, many claims were not demonstrably true or false yet were considered misleading and dangerous by some. Unfortunately, the response of large segments of the knowledge industry and government was to treat many potentially misleading claims as clearly false and worthy of suppression. The most obvious example is the COVID-19 lab-leak theory, which the government and world health officials sought to “take down” or discredit, with experts and journalists openly calling it a “fringe” conspiracy theory, “impossible,” or even related to prejudice or racism, while some social media companies moderated or even banned discussion of the theory.58 The source of the severe acute respiratory syndrome coronavirus 2—the virus that causes COVID-19—is, of course, now treated as an open question. But before that happened, the elite impulse to suppress and reject a disfavored view had hardened into a widely held narrative, and when credible information conflicting with that narrative emerged, the reversal dealt significant damage to key public health and knowledge institutions. Add in other mistaken claims and policies—such as that the COVID-19 vaccine would prevent you from getting COVID-19 or that locking down schools was unquestionably worth the cost imposed on families and children—and our leading institutions lost the trust of the American people rather than actually improving knowledge or saving lives.

Unfortunately, there are plenty of examples of government agencies using their power to target the political speech of their opponents because it was supposedly misinformation. And both major parties have shown that they are not immune to the siren call of censorship in the name of stopping so-called false information.

In 2024, Florida held a ballot initiative on whether abortion should be considered a right in the state’s constitution. A pro–abortion rights group, Floridians Defending Freedom, created several ads to promote the ballot question, including one claiming that, without abortion rights, a woman whose life is at risk would not be allowed to have an abortion. In response, Florida’s Department of Health threatened local TV stations in an effort to stop them from running the ads, arguing that the ads were a “‘sanitary nuisance’” and that such “false advertisements … would likely have a detrimental effect on the lives and health of pregnant women in Florida.”59 A judge quashed this argument, finding that “[t]he government cannot excuse its indirect censorship of political speech simply by declaring the disfavored speech is ‘false.’”60 He then cited another ruling that held “‘[t]he very purpose of the First Amendment is to foreclose public authority from assuming a guardianship of the public mind through regulating the press, speech, and religion’” and added, “To keep it simple for the State of Florida: it’s the First Amendment, stupid.”61

On the other side of the political aisle, California’s AB 2839 sought to prohibit deceptive or manipulated election-related media, such as content generated by AI technologies.62 One of the events precipitating California Governor Gavin Newsom’s push for this legislation was a viral satirical deepfake of Kamala Harris’s voice in which she seemingly mocks herself, saying she is the “ultimate diversity hire” and a “deep state puppet” hiding her “total incompetence.”63 California justified its censorship by arguing that “deepfakes threaten the integrity of our elections.”64 However, a federal judge enjoined enforcement of AB 2839 pending resolution of a lawsuit challenging its constitutionality, finding that “[t]he political context is one such setting that would be especially ‘perilous’ for the government to be an arbiter of truth in. AB 2839 attempts to sterilize electoral content and would ‘open … the door for the state to use its power for political ends.’”65 The judge concluded, “most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”66

And so, the focus on countering misinformation can be seen charitably as a good-faith attempt to mitigate the side effects that come along with speech unleashed by new disruptive technologies, especially when dealing with malign foreign actors. But in practice, these efforts carry on the spirit of the Inquisition, the Star Chamber, and countless other historical measures taken by those in power to respond to the threat posed by new expression and new expressive technologies. The intent and severity of different efforts to counter misinformation vary, but, as has been the case throughout history, such efforts often end up serving power and politics rather than freedom and truth. While censorial appetites and methods in modern democracies are thankfully less brutal and extreme than ancient and authoritarian techniques, the impetus for social and political control is the same, even if those calling for the suppression of speech genuinely believe they are helping and protecting democratic society.

A Better Way to Knowledge and Truth

We as individuals and as a society need a better system for sorting through information and ideas to identify those that are the most truthful and valuable.

As much as the modern misinformation narrative recycles a millennia-old, elitist view of the dangers of expression, it is also true that our society is genuinely grappling with how to know what is true, what is false, and what is muddied and contested. History has also shown us that new expressive technologies are disruptive and often result in major changes to the way we communicate, learn, and engage with the world around us.

Knowledge Through Liberal Inquiry

The problem of truth is as old as mankind. Countless cultures, religions, and thinkers have wrestled with this problem and how to manage conflicts of beliefs. Humans are bound to disagree about what is true, good, noble, and just. In his prescient book Kindly Inquisitors: The New Attacks on Free Thought, originally published in 1993, Jonathan Rauch describes the critical and liberal approach to developing and understanding truth. “Liberalism,” he writes, “holds that knowledge comes only from a public process of critical exchange, in which the wise and unwise alike participate.”67 Recounting the history of the Greeks, the medieval church and its Protestants, and the Enlightenment, Rauch identifies what he calls “two of the most successful social conventions which the human species has evolved”:

  • “No one gets the final say: you may claim that a statement is established as knowledge only if it can be debunked, in principle, and only insofar as it withstands attempts to debunk it;
  • “No one has personal authority: you may claim that a statement has been established as knowledge only insofar as the method used to check it gives the same result regardless of the identity of the checker, and regardless of the source of the statement.”68

These two rules define the system of “liberal science” or “liberal inquiry.” Any serious area of scientific research abides by these principles. Claims that a treatment cures cancer are subject to testing and challenge. It doesn’t matter if a scientist is the world’s most renowned researcher on a topic if their results, when tested by other scientists, turn out to be wrong. Similarly, if the researcher claims to have the cure for cancer but won’t allow anyone to independently verify their results, or claims that contradictory results by their scientific rivals should be ignored because only they have access to the truth, then they aren’t a scientist but something closer to a faith healer or charlatan. Liberal inquiry allows you to continue to believe in things that have been disproven or can’t be checked, but then you are in the realm of faith and opinion rather than truth and knowledge.

Critically, this approach also applies outside the scientific sphere. Any piece of knowledge, whether it is the efficacy of a pharmaceutical or the efficacy of a policy of drug criminalization, is subject to challenge by anyone. Statements of fact, policy prescriptions, and any other contentions or claims can and should be open to criticism, debate, and discussion. By insisting that anyone—the wise, the foolish, the expert, the novice—can be wrong and that differing ideas are subject to challenge, knowledge is created by a “rolling critical consensus of a decentralized community of checkers.”69 This decentralized approach, concludes Rauch, has laid “the groundwork for a social system whose ability to energize and organize human creativity has never been surpassed.”70

Liberalism Versus Authoritarianism

It should be apparent that the elitist view of speech, the moral panic surrounding internet-based communication, and the misinformation narrative are largely incompatible with the decentralized vision of liberal inquiry. The idea that free expression and debate over issues of public importance should be reserved for our political and social betters is anathema to the idea that anyone may challenge established knowledge; the view that greater debate and discussion of social and political issues by more people is dangerous and harmful to truth is directly opposed to the view that no one has a final say but that everything should be open to criticism and challenge. Indeed, a system that enshrines certain classes of experts as the arbiters of truth in order to stop or stifle further discussion follows the same principle as Plato’s totalitarian philosopher-king or the medieval dependence on the authority of the pope.

The debate over misinformation, then, offers a stark choice between two directions for our society: liberalism or authoritarianism. Liberalism offers an expansive vision of society with laws and cultural norms that are as supportive of free expression as possible. Universities, media, and other knowledge-producing enterprises are skeptical and inquisitive, valuing diverse criticism and arguments to produce, refine, or even reject knowledge. Fact-checking is not a unique discipline or responsibility of certain experts but part and parcel of what it means to engage in debate and discussion, as every assertion should be checked and challenged by everyone.71 Knowledge is developed through constant challenge and debate in forums ranging from private conversations with neighbors to public political debates and contests. Only those ideas that survive scrutiny from a broad set of viewpoints are widely accepted as knowledge rather than opinion or belief. Liberal society grants the average person, the academic, the activist, the journalist, and the political leader the same authority over and access to the truth, as well as the same capacity to be flawed and biased. Experts have studied various topics and have developed skills that are valuable to society, but only insofar as their expertise is subject to challenge by others in this liberal system. Appeals to authority that cannot survive such scrutiny are actively rejected and frowned upon. Thus, debates over contentious issues are settled not through restricted speech but through more speech. And new expressive technologies and greater political freedoms are not feared but embraced as empowering more expression, debate, and scrutiny regarding the decisions being made by our governments and in our societies.

The alternative is the authoritarian view that norms and laws should limit free expression so that the “right information”—as defined by one political or ideological viewpoint—prevails. In an authoritarian society, we would need experts to tell us when speech is misleading or misinforming. Those experts, we’d be told, are not biased but possess a unique understanding of the truth and of the consequences of allowing misinformation to spread. Rather than welcoming ongoing challenge and critique, such a society would suppress and cancel alternative views on the grounds that they question the truth and spread more misinformation. Technologies and advancements that expand expression would be considered worrisome because of the harms they would unleash by empowering the ignorant masses and troublesome challengers to authorities and established “truth.”

In sum, liberalism offers the principles of broad, egalitarian speech rights; debate and critical inquiry that identify the knowledge and ideas best able to withstand criticism and yield order without authority; and further innovation and progress. Authoritarianism, on the other hand, offers limited or stratified rights to expression; power and identity as the sources of truth; and order and safety through blind obedience to supposed experts. The choice is clear, and liberal inquiry is more than capable of addressing the challenge of misinformation, but only if we as a society—including citizens, experts, markets, and government—choose to embrace it. Based on this paper’s discussion, the following recommendations might be useful for companies and policymakers, when appropriate, to address concerns about MDM.

Best Practices for Technology Companies

Many technology companies find themselves stuck amid several competing pressures. From one direction, they face significant academic, activist, and government pressure to crack down on misinformation, an employee base and trust-and-safety community that is generally ideologically sympathetic to these demands, and the need to provide a product that attracts users and advertisers by limiting the actual or perceived amount of misinformation. From the other direction, they face a significant backlash to actual or perceived censorship and suppression of speech, and punishing users for misinformation may also drive away users and businesses.

As such, different companies with various products will make different decisions about how to navigate questions of misinformation on the platforms and in their products. But some principles stand out.

First, companies should consider ways to build trust rather than increase skepticism. There are a variety of misinformation interventions, starting with the most notable examples of removing false content, fact-checking, and adding informational labels. These efforts try to “debunk” false information by telling users that content may be or is false or misleading. If done correctly, such an approach can potentially help users avoid sharing viral false information and hold users, including elites, accountable. But this approach also faces significant challenges. As this paper has discussed, what is considered misleading is generally subjective, and users may disagree with decisions made by tech companies and feel that they lack the ability to appeal or argue against these decisions.72 And in a low-trust society like ours, fact-checking is further undermined as an effective tool because users fundamentally do not trust the technology companies or the fact-checkers they rely on to be accurate and unbiased. Furthermore, operating such systems on a wide scale is another serious challenge given the number of claims made online.

Some companies have used other interventions that, rather than making users directly aware of the falsity of a claim, instead seek to make users more generally vigilant about the information they see and consume; these interventions include accuracy nudges, gamified “pre-bunking,” and digital literacy training. Since these approaches do not pass judgment on the accuracy or misleadingness of a given claim, they avoid accusations of biased fact-checking. Such efforts can support better information habits, either by motivating users to be on the lookout for misinformation or by improving their ability to discern it. But without an emphasis on intellectual humility, such interventions can also turn healthy vigilance into an unhealthy skepticism toward everything, including trustworthy sources and objective truths. Such skepticism is correlated with a greater tendency to engage with misinformation.73

A second important principle for technology companies to consider is how to implement bottom-up debate in place of top-down expertise and orthodoxy. The power and symbolic capital of elites are often so divisive and mistrusted that their efforts to dictate truth by citing their own authority are not only counter to the ideal of liberal inquiry but may also be counterproductive. Expertise is valuable, but not when it is viewed as favoring one political ideology and insulating itself from critique by citing its own expertise.74 Truth claims must be proved through rigorous checking from various perspectives, and it is far more appropriate and effective to counter misinformation by engaging a diverse set of users to find, test, and surface accurate information. This is the general approach taken by X’s Community Notes program and Meta’s adoption of that system, Reddit’s up-vote and down-vote mechanisms, and Wikipedia’s process of contributor debate.75
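To illustrate what decentralized checking can mean in concrete terms, the following is a minimal, hypothetical Python sketch of a “bridging”-style rating rule, under which a note is surfaced only when raters who usually disagree with one another both tend to find it helpful. This is not X’s actual Community Notes algorithm (which relies on matrix factorization over rater and note data); the Rating class, the should_surface function, and the thresholds here are illustrative assumptions only.

```python
# Toy sketch of a "bridging"-style community rating rule (hypothetical).
# A note is surfaced only when raters from across the viewpoint spectrum,
# not just one side, independently tend to rate it as helpful.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    viewpoint: float   # crude stand-in for a rater's typical leaning, in [-1, 1]
    helpful: bool      # did this rater mark the note as helpful?

def should_surface(ratings: list[Rating], min_ratings: int = 5,
                   min_side_support: float = 0.6) -> bool:
    """Surface a note only if both sides of the viewpoint spectrum support it."""
    if len(ratings) < min_ratings:
        return False
    left = [r for r in ratings if r.viewpoint < 0]
    right = [r for r in ratings if r.viewpoint >= 0]
    if not left or not right:
        return False  # no cross-viewpoint agreement is possible yet
    left_support = sum(r.helpful for r in left) / len(left)
    right_support = sum(r.helpful for r in right) / len(right)
    return left_support >= min_side_support and right_support >= min_side_support

# A note rated helpful by only one side is not surfaced; one with
# cross-viewpoint support is.
partisan_note = [Rating(f"a{i}", -0.8, True) for i in range(6)]
bridging_note = [Rating("b1", -0.9, True), Rating("b2", -0.5, True),
                 Rating("b3", 0.4, True), Rating("b4", 0.7, True),
                 Rating("b5", 0.9, False), Rating("b6", 0.2, True)]
print(should_surface(partisan_note))  # False
print(should_surface(bridging_note))  # True
```

The design idea being illustrated is that no single authority decides what gets surfaced; agreement among ordinary users who otherwise disagree does.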

A third principle for companies is to empower user choice. Users have many different perspectives and levels of tolerance. Some want to engage with fringe or controversial beliefs, others want an online environment completely free from anything they view as harmful misinformation, and others still may be open to only certain types of controversial content. Companies can choose to create their products in such a way that they cater to only one type of user, or they can ensure greater user choice. Some forms of social media are inherently based on user choice, such as the decentralized Bluesky or Nostr, while others such as Reddit give users the ability to opt into groups with different tolerances for potential misinformation. Meta gave users greater choice within Instagram about whether to see more or less sensitive or political content, as well as how to treat fact-checked content, and is allowing Threads users significant choice and control over the curation of their feeds. Users, platforms, and advertisers can all benefit from giving people greater choice and control over their experiences online.
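As a rough sketch of what such user controls could look like in product terms, consider the following hypothetical Python example. The setting names, the ContentPrefs and Post classes, and the apply_prefs function are assumptions made for illustration and do not reflect Instagram’s, Threads’, or any other platform’s actual settings or APIs.

```python
# Hypothetical sketch of user-controlled content preferences. The point is
# that the user, not the platform, chooses how sensitive, political, or
# fact-checked content is handled in their own feed.

from dataclasses import dataclass
from typing import Literal

Treatment = Literal["show", "label", "hide"]

@dataclass
class ContentPrefs:
    sensitive_content: Treatment = "label"     # e.g., graphic or shocking posts
    political_content: Treatment = "show"      # more or less political content
    fact_checked_content: Treatment = "label"  # how to treat fact-checked posts

@dataclass
class Post:
    text: str
    sensitive: bool = False
    political: bool = False
    fact_checked: bool = False

def apply_prefs(post: Post, prefs: ContentPrefs) -> Treatment:
    """Return the strictest treatment the user has chosen for this post."""
    order = {"show": 0, "label": 1, "hide": 2}
    treatments = ["show"]
    if post.sensitive:
        treatments.append(prefs.sensitive_content)
    if post.political:
        treatments.append(prefs.political_content)
    if post.fact_checked:
        treatments.append(prefs.fact_checked_content)
    return max(treatments, key=order.__getitem__)

# A user who wants a stricter feed simply changes their own settings.
strict_user = ContentPrefs(sensitive_content="hide", political_content="label")
print(apply_prefs(Post("election take", political=True), strict_user))  # "label"
```

The design choice sketched here is that feed strictness is a per-user setting rather than a single platform-wide rule imposed on everyone.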

Finally, the market will ultimately determine who succeeds. Users want a better experience online, whether that means greater privacy, more control, more or less removal of content, and so on. The technology industry as we know it is only a few decades old and has undergone significant turnover. Yes, there are leading titans in Meta, Google, Apple, and beyond, but in the early 2000s, AOL, Nokia, Yahoo, and Myspace were the biggest players. As customer demands and technologies change, the most powerful players have no choice but to respond or become obsolete themselves.

Limiting interventions to those that build intellectual humility, using decentralized checking to provide users with trustworthy information, and empowering users with greater control and choice are likely to support healthy vigilance, build trust, and create positive user experiences that attract users and advertisers alike.

Public Policy

For policymakers, there are legitimate policy concerns as well as self-serving political incentives at play around misinformation. Policymakers may sincerely worry about the harm that misinformation may be doing to society, even if the purported harms are misunderstood or overstated. But on top of those policy concerns, each political party has an incentive to brand its opposition as liars and purveyors of dangerous misinformation. Taken to its extreme, each party has an incentive to use the power of the state to try to silence its opponents. While the First Amendment largely prevents the suppression of speech, this has not stopped policymakers from attacking oppositional speech in ways that are constitutionally dubious or secretive.

As such, there is a need to establish significant restrictions on what the government does to counter misinformation and bolster the protections offered by the First Amendment. Fundamentally, the policy of the US government should be maximally protective of speech and hostile toward any policies or practices that view speech as harmful to our democracy or ability to determine what is truthful.

To start, policymakers should defund government support of misinformation research and resources. The different forms of MDM can be harmful, but their impact and prevalence are generally misunderstood and overstated. Furthermore, the definitions of misinformation are often vague and invite subjective perspectives on what is or is not false or misleading. As such, government funding and support of organizations and research designed to counter misinformation is inherently picking a side in what are often political and ideological debates over what is true and good or false and harmful. Funding activities to suppress protected expression may be unconstitutional and illegal, as “a government official cannot do indirectly what she is barred from doing directly.”76 But even if it’s not that explicit, the spending of taxpayer dollars to increase the concern around misinformation or generally support the suppression of First Amendment–protected speech should not be tolerated. Private actors can and should be left to research whatever MDM issues they wish, but not with government support and money. Of course, government officials can use their own voices to express what they believe, but the public would benefit from their doing so in a more judicious and nuanced way than what we have often witnessed.

Therefore, Congress should generally prohibit contracts and grants supporting counter-MDM research, rating, labeling, reporting, or similar programs. As discussed earlier, the US government has issued many misinformation-focused grants. The State Department’s Global Engagement Center was recently closed by Congress for supporting censorious counter-MDM efforts through its grants. The National Science Foundation, especially its Track F project, also has a record of significantly funding counter-MDM activities, and that program similarly deserves to have its funding eliminated to prevent future abuse. But as noted earlier, such grants extend well beyond a few select programs and can be found across the government.

In addition to cutting funding for MDM research, policymakers should also limit the impact that the government’s efforts to counter foreign disinformation have on American speech. The issue of foreign disinformation is a complex one. Our government has a legitimate interest in addressing deliberate and malicious efforts by foreign governments to spread lies and disinformation, especially when they seek to interfere with or influence our elections. But, as noted above, it must do so carefully. Americans have the right to consume foreign propaganda and to hold beliefs that either directly or indirectly align with those of a foreign government. Prominent counter-disinformation efforts and experts have blurred the line between unlawful foreign malign influence and lawful speech consumed or spoken by Americans. And our government’s and media’s responses to allegations of Trump’s collusion with Russia in the 2016 election, as well as to allegations regarding the provenance of the Hunter Biden laptop in the 2020 election, similarly reflect a significant failure to distinguish between foreign efforts at political manipulation and the speech or views of Americans. In a more recent example, the Global Engagement Center attributed comments made by Indiana Republican Jim Banks—then a member of the House of Representatives, now a senator—to a Russian state media outlet rather than to the US press, where the comments originated.77

The failure of experts in both the private sector and government to clearly articulate the difference between foreign information operations and domestic speech has led to the perception that elite public and private institutions are suppressing speech under the guise of national security. And while failures by the media or technology companies can be handled by the market, with consumers going elsewhere for their news or online platforms, when the government fails to properly identify foreign influence operations, it can suppress Americans’ protected speech or effectively label them as disloyal supporters of foreign adversaries. The public has come to appreciate those risks, resulting in a major backlash to government disinformation efforts. Perhaps the most notable example was the Department of Homeland Security’s Disinformation Governance Board, which was supposed to counter foreign disinformation but was put on hold within one month of its formal announcement because of public criticism and accusations of censorship. Even if the board was meant to have a legitimate purpose, the subjectivity of misinformation, concerns about bias, and the blurred line often drawn between foreign disinformation and American speech led to the board’s rapid shuttering.

An important question for policymakers, then, is whether the harms of foreign disinformation are worse than the harms done to our society by our government’s counter-disinformation efforts. Over the past decade, foreign efforts to meddle in and sway our politics appear to have had little impact on voters, while our response has sown significant discord, degraded and polarized trust in government, and facilitated the spread of all sorts of additional misinformation from both parties.78 A frank assessment of the United States’ counter-disinformation efforts must conclude that the cure has proven worse than the disease. Policymakers should significantly restrict the funding and operation of counter-disinformation efforts to only the most dangerous and blatant actions taken by foreign adversaries, marshaling evidence in a way that educates rather than silences Americans.

Another area where government can abuse its powers under the guise of countering MDM is the problem of jawboning. By threatening, coercing, or inducing a private actor to silence others’ speech, the government attempts to censor via proxy. For example, the Supreme Court recently held in National Rifle Association of America v. Vullo that the National Rifle Association (NRA) plausibly alleged that a New York financial regulator violated its expressive rights by inducing the NRA’s insurance providers to stop serving the organization on account of what the regulator viewed as the NRA’s objectionable and harmful views. But in a related case, Murthy v. Missouri, the Court declined to stop a host of government communications with social media companies, ranging from mere FYIs to aggressive berating, finding that the plaintiffs lacked standing because they could not prove that the government’s actions directly caused a social media company to suppress their views. The decision is troubling for a variety of reasons, not least because it sets a very high bar for who can claim that jawboning has occurred.

This case leaves policymakers with two approaches to dealing with jawboning. The first is a transparency-based approach that would require all government employees to report any request they make of a private service provider to suppress speech or to stop providing services (or, conversely, to reinstate or continue providing services) to someone based on that person’s First Amendment–protected speech. Regardless of whether such communications are coercive, aggressive, persuasive, or merely informative, a record of them would be compiled by the government and made available for disclosure in line with the Freedom of Information Act and the Privacy Act. This reporting requirement would better allow jawboned companies and the victims of government pressure to prove when the government was punishing them for their speech. Sunlight is a powerful disinfectant, and such a requirement would immediately discourage government agents from pressuring private actors to censor via proxy.79

The second approach is to prohibit government actors from engaging in certain kinds of communications with technology companies. This approach seems intuitive—after all, if we don’t want censorship by proxy, just prohibit the government from doing it. The problem is that prohibition laws are hard to write well. If a bill prohibits nearly all government communications with private actors, then even an FYI to private companies about objectively harmful content could be interpreted as an attempt to influence content moderation. Such a bill could prohibit a government agency from asking a company to delay publishing certain material because publication could undermine an ongoing criminal investigation or put an undercover agent’s life at risk. It could even prevent the government from answering questions that technology companies want to ask of it. On the other hand, if the prohibition is written too loosely or with too many exceptions, the government could justify its actions as not being overtly censorial; a broad public safety exception, for example, would have permitted much of the government jawboning that occurred around COVID-19. One potential strength of this approach, however, is that it could grant standing to individuals who, under the current precedent set by Murthy, do not have legal standing.80 There may be an approach here that strikes a reasonable balance, but a transparency-and-reporting regime is likely still necessary to even know when potential government jawboning is occurring.

Beyond specific acts of jawboning or research funding, policymakers must resist using misinformation as a justification for regulation of all sorts, as has often happened both at home and abroad. The EU’s Digital Services Act (DSA) is often praised as a meaningful and procedurally fair regulation of technology companies, even though it has quickly been weaponized to threaten social media companies for hosting supposed disinformation.81

Fears over AI are also largely driven by the concern that AI tools will create or manipulate text, imagery, and audio to confuse or mislead others, and many legislative proposals have been introduced to address these concerns. Some laws—California’s AB 2839, to name one example—attempt to prohibit speech based on the government’s view of what kinds of AI-powered speech are too harmful or misleading to allow. Other laws demand algorithmic audits that will supposedly ensure fairness or safety. Just as with the DSA, though, the inherently subjective nature of misinformation means that some viewpoints will be classified as threats.82

Misinformation is also invoked in multiple other tech debates, especially those regarding children. The leading proposal in this arena is the Kids Online Safety Act (KOSA), which would create a duty of care holding tech companies responsible for stopping harmful content on a wide range of issues, including suicide and self-harm, sexual topics, and bullying. Such content may of course be inappropriate for minors, but the underlying issues are complex and often subjective. Should content that discusses self-harm in order to discourage it be removed? Should content that praises LGBTQ identities be removed for making contested claims about sex and gender? Should content that condemns LGBTQ identities be removed for spreading misinformation that is harmful to children’s mental health? Should content that is alarmist about climate change be removed for feeding anxiety and depression? No matter which side of an issue you are on, KOSA-like bills would empower the government to censor viewpoints on either side as harmful misinformation.

Before we give in to these various panics and grant the government greater control over powerful speech-enhancing technologies, policymakers should remember that their political opponents will be able to twist such policies against them and that research indicates the threat of misinformation is overblown, following the historical pattern of misplaced panics over new technologies.

In place of these various censorious regulatory proposals, policymakers would be better off working to build and demonstrate trust, humility, and literacy. Trust in the government has been at or near record lows for the past decade. Both major political parties believe that government power can be wielded against speech they dislike to protect society from its worst impulses. Yet it is this top-down belief in controlling speech that is itself one of humanity’s worst tendencies.

Policymakers should as a rule pursue policies that trust the American people to use technology and to speak, discuss, criticize, and do what is necessary to develop knowledge through liberal inquiry. Free individuals and civil society, not the government, will decide what is true, false, misleading, or persuasive.

Conclusion

Experts, politicians, aristocrats, kings, religious inquisitors, and the rulers of ancient empires all feared misinformation, especially when technological or political change made it easier for more people to express themselves freely. Those worried about false and harmful information today may honestly believe that, unlike the despots and tyrants of history, they are doing what is best for society by suppressing disfavored expression. But this top-down imposition of “the truth” is neither good for individuals nor effective at generating knowledge and improvement in a society. The current conversation about misinformation follows the same pattern of elite panic that has been seen throughout history, rejecting the liberal system that created the modern world as we know it today in favor of mankind’s age-old desire for authority. Some of this panic is driven by real but misunderstood concerns about misinformation, while some likely has been driven by the desire to advance political or ideological power. Regardless, the answer to misinformation starts by rejecting the urge to give government more power over speech and new speech technologies. Instead, policymakers should pursue policies that further protect speech, thus empowering individuals and civil society in the pursuit of truth, prosperity, and progress.

Citation

Inserra, David. “The Misleading Panic over Misinformation: And Why Government Solutions Won’t Work,” Policy Analysis no. 999, Cato Institute, Washington, DC, June 26, 2025.