Nowadays, a typical news session begins with us logging on to our preferred social media website in the morning to catch up on what we missed while we slept. We quickly scroll past the memes from high-school friends or extended family, rolling our eyes at their attempts to ridicule opposing viewpoints with a few characters’ worth of oversimplified, clickbait logic. After debating whether to deploy the “block content” button, we move on to consuming self-aligned content whose conclusions support our own existing views. We indulge in self-validation for a few more minutes and then log off, baffled as to how anyone could possibly ever think differently than we do.
It’s a script that rose to the national spotlight during the 2016 election and one that appears to have become more and more prevalent as we near the 2018 midterms, especially recently as the nation reels over the Kavanaugh controversy and contentious DACA rulings.
We increasingly rely on social media platforms as channels for interpreting and disseminating political information. Consequently, tech giants such as Facebook and Twitter have come under close scrutiny for the ways they mediate and censor these discussions. In particular, they have been accused of providing a safe haven for "fake news": online content rife with misinformation that can fuel political hyperpolarization.
A troubling recent MIT study revealed that "fake news" diffused significantly farther, faster, deeper and more broadly than the truth, an effect even more pronounced for political news than for reporting on natural disasters, finance or science.
More worrisome, contrary to the perception that the fake news epidemic is the creation of malicious online news-bots, there is evidence that the public actually craves fake news. The same study found that online bots were equally likely to propagate false and truthful information, implying that the epidemic persists because humans, not internet bots, fuel it by favoring misinformation.
Psychologists have speculated that this phenomenon can be explained by humans' inherent need to harmonize new observations with their existing view of the world, a need to avoid what's known as cognitive dissonance. For example, Mark Whitmore, a psychologist specializing in information psychology, explains that "the brain is hardwired to accept, reject, misremember or distort information based on whether it is viewed as accepting of or threatening to existing beliefs."
As a result, instead of gravitating toward well-balanced, rigorously verified news content, we have developed an appetite for self-validating sensationalism, and the social media sector, controlled by a mere handful of for-profit tech firms, is happy to oblige.
The danger of online misinformation is clear. A recent study revealed that one in four Americans visited a fake news website in the weeks preceding the 2016 election, with several observers speculating that such news sources played a substantial role in shaping the election and manipulating voter turnout. In addition, the overabundance of information has led many to question the integrity of some of our country's long-established institutions, such as our intelligence agencies and the media, shaking faith in the very bedrock of our democracy.
So how can we incentivize individuals to seek accurate online content? Leading scholars are actively grappling with this question because there is still much we don't understand about the "fake news" phenomenon, but many have suggested the problem's complexity stems from the strong disincentive individuals face to reject false information.
Processing and internalizing new information requires considerable mental effort, especially when that information conflicts with your existing worldview. You need to assess the information, consider whether it is consistent with your beliefs and, if not, restructure your belief system to accommodate the new observations. It takes vulnerability and a willingness to admit you may be wrong. In short, it takes work.
Alternatively, you could simply dismiss information that isn't consistent with your existing worldview and accept the information that is. With an overwhelming amount of conflicting information available, who's to say what's actually true and what's false? If you can't tell, why not just make life easy and go with what supports your current beliefs?
So what options do we have? Many suggest that addressing the issue by reforming adult behavior is aiming too far from the source. An alternative solution is using early education to help individuals recognize these psychological pitfalls and apply critical thinking to the information they consume. Currently, there is a push in the U.S. to incorporate internet information classes into primary and secondary school curricula. The movement, which has received bipartisan support, aims to make fact-checking second nature at an early age so that individuals will be less vulnerable to agenda-driven information sources throughout their lives. While the data are still streaming in, one recent study of 15- to 27-year-olds revealed that media-literacy training made individuals less likely to believe a demonstrably false claim, even when the claim aligned with their existing political views.
Legislation centered on this idea has already been passed in states such as Washington, Connecticut, Rhode Island and New Mexico, with some now offering media literacy courses as electives at select public schools.
Primary and secondary schools are supposed to supply students with the skills they need to become productive, informed members of our society. As our society evolves, the curriculum we teach our students needs to evolve as well. Schools have partly recognized this by incorporating programming and computer science classes into curricula to match our tech-hungry society, but we need to go further and also teach our youth how to responsibly consume and disseminate online information.
We need to have a lot of difficult conversations in order to resolve the issues we are facing as a society, and the only way these conversations will be productive and enduring is if we can all agree on the facts. Right now, with Americans believing more than 40 percent of the news they see is fake, we aren't quite there yet, but that doesn't mean we can't be. The internet is an amazing tool, but to use it most effectively we have to embrace its benefits while also understanding the ways in which it makes us vulnerable. If students are still learning dated practices such as cursive writing in school, shouldn't they be learning how to navigate and consume the internet responsibly as well?
If you would like to support movements like the ones mentioned here, contact your local school board today or get involved with media literacy organizations such as Media Literacy Now and the Digital Citizenship Institute.