Digital chain letters exist because we tolerate them. We know we won't get rich or avoid bad luck by passing them along, but they serve a purpose for those who participate: they signify group membership. This is particularly true of social media posts that begin with "Most people won't share this" or "Nine out of ten people won't share this." These posts seek to guilt readers into sharing something for awareness, and they work because they provide an easy way of committing to a cause without any true obligation to act. They help us shape the persona we would like to present online. For example, if your friend creates or Likes a post about refugees, you learn something about her. You can participate in this act of authorship by commenting on or Liking the post as well. In the case of chain letters, the effects of this visibility are twofold: the chain letter derives a degree of authenticity from being visibly Liked, and the reader gains authenticity by publicly connecting herself with an identifiable marker of group identity. Fake news has proliferated because it operates in the same way.
Following the 2016 presidential election, a great deal of attention has been directed at "fake news"--i.e., unverified and false claims posted on social media as if they were legitimate, credible pieces of information--which has been charged with influencing voters. Leading up to the election, prominent headlines declared that Pope Francis was backing Donald Trump for president, that George Soros had vowed to fund Black hate groups, and that an F.B.I. agent connected to Hillary Clinton's email leaks had been found dead in an apparent murder-suicide. None of these items was true, but they drew greater social engagement than mainstream news articles about corruption (e.g., "Trump's History of Corruption is Mind-Boggling. So Why Is Clinton Supposedly the Corrupt One?" - Washington Post) and candidate perception (e.g., "Stop Pretending You Don't Know Why People Hate Hillary Clinton" - Huffington Post) from recognized news channels. Much of the blame for the spread of this information has fallen on Facebook and Google, both of which have recently taken steps to weed out fake news in response to loud public demand for better content management: they're looking to block ad revenue for the sites they can identify. But there has also been a call for the public to be more critical: a list of fake news sites has been widely circulated, and there have been countless articles on how users can read and process information online more critically. While it remains to be seen whether real change will follow, it's hard to ignore the ways in which our social patterns have brought us to this point.
The nature of online social media is that it allows us to connect with anyone but then gives us the option of filtering the content we engage with, so that we have a highly curated experience. The Pew Research Center reports that 62% of U.S. adults rely on social media for their news; 44% get news from Facebook, and 64% of social media news consumers stick to a single site. This inclination reflects a major tenet of choice architecture: people will opt for the easiest option when presented with a decision. This is true offline as well as online, and whether we realize it or not, our lives are constructed to support this tendency because the default is the path of least resistance for most people. As a result, we've built the world to reinforce our automatic assumptions about the "right" action. Online social networks have been primed to reflect these assumptions about human behavior: we're not inclined to vet the information our friends show us because we've curated the experience to highlight things that are important to our network. Our default inclination is to trust our network.
In the online realm, design decisions are driven by user experience experts, who have carefully created an experience that encourages trust. On Facebook, the daily activity feed is called the News Feed; this nomenclature adds weight to the content displayed there, as we associate news with importance. The Feed prioritizes information with a high engagement rate within your network, on the assumption that if your friends are talking about it, you want to know too. The Trending News section highlights what's hot right now based on the "news" that people are sharing. And while social media certainly invites the uninterrupted opportunity to share your opinion, members of your social network don't necessarily have to listen. The audience is free to Unfollow a poster whose information they disagree with, or to "mute" the poster and remain friends--a decision often made for the sake of social currency. Users may also comment to make a retaliatory point and then turn off notifications so that they aren't drawn into a back-and-forth debate. On Twitter, the ease of resharing content, the forced brevity of commentary, and the ease of setting up an imposter account can all lead to rapid miscommunication. A simple tap of your thumb lets you reshare or Like information that appeals to you, or you can keep scrolling or unfollow users who share content that doesn't jibe with your perspective. You can also allow people to follow you for the sake of social politics without reciprocating.
Trust is hard to ignore in the microcosm of Facebook groups, which reflect the workings of the social network at large. Neighborhood community groups, for example, can provide a glimpse into the concerns of residents. Based on what residents share, it's easy to see how quickly members of a group come to rely on one another for information. People use these groups to request and provide recommendations, report traffic disturbances and criminal activity, and share information about school events. Through these groups, participants have the power to shape the culture of the neighborhood: recommendations determine which businesses are visited, and information on traffic and crime can influence where residents go and whether they sell their homes. Additionally, members connected to police and fire personnel have a line to information that may not make the mainstream news but can help residents with issues of personal safety. These groups also reflect larger biases, as members work together to preserve the quality of life in their neighborhood by boycotting or protesting certain types of businesses or services, or demanding police action against specific groups or areas that don't fit their ideal.
This plays out similarly within broader social networks. While we may be connected to anyone, we interact only with the people we view as holding high social currency for us. These are the friends and family whose opinions can shape our perceptions. We easily fall into an echo chamber online because there's little incentive to admit information that may discredit or challenge the perspectives we hold. When our social identities--in this case, political affiliation--are challenged, we're more likely to cling to the ideas shared by the people we trust or admire within our networks, whether or not those ideas are true. We're unlikely to challenge them because doing so may mean running counter to the group at large. The easiest option is to go along, or to ignore the information and hope it will go away--again, to maintain solidarity with the group.
Unfortunately, fake news, much like a chain letter, is designed to be shared. It goes right to the heart of what frightens the group. Ignoring this media does not make it go away, because whatever engagement it does get empowers it with the social currency of the group. It gains authenticity, and each resulting share drives it farther from the nuisance of fact-checking. The weight of support behind the post becomes its credibility.
In recent years, following natural disasters, mass shootings, and acts of terrorism, these platforms have become de facto citizen media outlets where users share firsthand accounts of events before the news media has a chance to report. This was readily apparent on Facebook in the days following the election, when firsthand accounts of hate crimes were shared and reshared. We're primed to consume media in specific ways, and fake news has adopted these forms: there are bogus websites that publish "newsy"-type articles, and there are memes, which are clearly not traditional news but propagate false headlines and "facts" (soundbites) by packaging them in an easily digestible, shareable format.
One of the ways choice architects combat default behavior is through organization. The more choices we have, the more likely we are to look for ways to simplify our options; if we're unable to, we may give up or fall back on the default. Finding ways to label or remove unverified information from news feeds is a good start, but maybe it's time to consider how to combat default behavior more strongly and cue users to be more active participants in their social groups.
Have something to say? Comments have been disabled on Anthropology in Practice, but you can always join the community on Facebook.