
Good News and Bad News about COVID-19 Misinformation

The good news is that people don’t necessarily believe it. The bad news is that they don’t necessarily believe valid information about the pandemic either.

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Recently, a video called “Plandemic” went viral on social media. PolitiFact flagged eight fake or misleading claims it made about COVID-19. YouTube and Facebook removed the video; Twitter issued “unsafe” warnings and blocked relevant hashtags. All of the platforms couched their content moderation decisions in generic “violations of community standards” language and expressed concern that the video could cause “imminent harm,” as Facebook put it.

Such swift, draconian decisions assume not just that the content is ricocheting around the internet—indeed, data on trending and sharing easily corroborate that—but that people remember and believe it. Do they?

To understand both the reach and impact of COVID-19 misinformation, we carried out a pair of surveys examining three prominent categories of fake claims in the media: headlines concerning treatments for the disease, the origins of the virus and the government’s response to it.


In the first study, we asked Americans if they remembered seeing a sample of prominent fake news claims about COVID-19. On average, about a third reported having read the claims. But there are reasons to be skeptical. Just over a quarter also claimed to have read or seen a set of headlines that we invented and that did not appear widely on social media. Subtracting that false-recognition baseline from the reported recall suggests that true recall of misinformation is much lower: less than 10 percent.
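As a rough sketch of that correction, assuming round figures of about 33 percent for “about a third” and 27 percent for “just over a quarter” (the exact survey percentages are not given here):

\[
\text{true recall} \;\approx\; \underbrace{33\%}_{\text{reported recalling real fake headlines}} \;-\; \underbrace{27\%}_{\text{reported recalling invented headlines}} \;\approx\; 6\%,
\]

which falls comfortably below 10 percent.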

This figure is perhaps surprisingly modest given the proliferation of fake news about the coronavirus on social media. However, it is significantly higher than previous estimates of the uptake of political misinformation in the 2016 election.

Given the prevalence of misinformation, we were also curious whether Americans are keen arbiters of truth. On average, between 20 and 25 percent of respondents judged fake claims to be true. Perhaps most dangerously, almost one in five incorrectly believed fake claims about effective treatments for COVID-19.

Our second survey used an experiment to investigate whether corrections to fake news work. As the opening vignette suggests, social media platforms have responded to the proliferation of COVID-19 misinformation by taking aggressive actions to moderate—removing and in some cases correcting—objectionable content.

Evidence suggests that in some contexts corrections can backfire. Fake news labels have been associated with spikes in traffic as people seek glimpses of the taboo media, and they can further entrench popular misperceptions. Facebook has repeatedly tweaked its fake news warning after finding that it can actually drive more traffic to fake news, and more belief in it. The efforts to take down “Plandemic” had this very effect, increasing the video’s cachet and helping it go viral, outcomes counterproductive to the goal of limiting exposure.

We found little evidence that flagging fake headlines as false routinely generates backfire effects. The efficacy of these corrections was highly variable, however. The most effective correction—calling out a headline erroneously claiming that the U.S. has the highest coronavirus death rate in the industrialized world—decreased belief in the claim by more than 15 percent. Even so, two-thirds of respondents continued to believe the erroneous claim.

Just as alarming as the extent and persistence of misinformation was the share of individuals across both surveys who did not believe true content. Respondents were particularly ill-equipped to identify factual information about treatments for the virus as true: almost 60 percent of the public either judged the true information false or said they were not sure.

Believing incorrect information and not believing correct information are both problems for democracy, but for different reasons. Even if only small fractions of the public believe fake news, the consequences can be pernicious. Fringe ideas and voices can gain outsized influence because of how they’re amplified online or in traditional outlets such as television. In the 2016 election, for example, one fake conspiracy theory about Hillary Clinton running a pedophile ring out of a pizza place in Washington, D.C., led to a gunman entering the joint to mete out justice. Fake news does not need to meet some magical majority threshold to have harmful consequences.

Large-scale public failure to accept the accuracy of factual information is perhaps even more problematic. On questions of what treatments work or do not, the stakes are literally life and death. But the consequences of Americans’ inability to discern the true signal from the noise of competing claims in an oversaturated media environment are much broader.

Outside actors such as the Russians have defined success not by changing attitudes or convincing Americans to believe things that are untrue, but by creating a sense of cacophony and cognitive dissonance so complete that Americans are loath to believe anything at all. The virtue of incredulity, at least from this perspective, is that it creates insurmountable barriers to sound governance.

Effective policy responses to the pandemic require at least some degree of buy-in from the public, or an ability to mobilize the public to support government measures. This depends, in large part, on Americans being able to distinguish fact from fiction. On this metric, the public struggles mightily.

This combination of credulity toward falsehood and incredulity toward truth raises an important question: Why aren’t Americans more discerning?

Our evidence points to a public that is too polarized, too ideologically entrenched and too awash in information to believe even true content. Overwhelmingly, we found that the more individuals relied on social media for their news, the less capable they were of sniffing out the difference between real and fake content. The problem may be one of selection effects: perhaps people who turn to social media are more nihilistic to begin with, disinclined to believe in anything, keener to be entertained than to be pulled in the direction of capital-T Truth. But the more likely problem is that social media presents a feed or timeline of content that disorients its consumers and unmoors them from reality.

Thus far, the result of COVID-19 misinformation is not that majorities of Americans naively believe wildly false claims. It is that many do not trust anything.
