The full text of this article is available through the PGCC Library for PGCC students.

Overview Report from Opposing Viewpoints in Context

Media experts define fake news as factually false information, delivered in the context of a supposedly true news story and deliberately designed to deceive readers. In the twenty-first century, the term is used primarily to describe internet and social media disinformation campaigns. The distinction between misinformation and disinformation is critical to public understanding of fake news. “Misinformation” typically describes factual falsehoods spread either purposely or accidentally; satire is an example of purposeful misinformation, while unintentional journalistic inaccuracies exemplify accidental misinformation. “Disinformation,” on the other hand, always refers to information specifically designed to mislead or deceive consumers in order to influence their attitudes, beliefs, or behaviors. Thus, fake news is disinformation, not misinformation.

While fake news emerged as a high-profile social and political issue in 2016, the concept itself is not new. Some historians have traced the earliest precedents for the deliberate public circulation of disinformation to the Roman Empire, and some literary scholars have argued that fake news has been a feature of American media since the country’s inception. Twenty-first-century fake news differs from that of the past primarily because advances in technology have made it easier for purveyors of disinformation to create and distribute convincing illusions of truth. Technology can both enhance a false story’s appearance of credibility and facilitate its widespread circulation with unprecedented speed.

Many media and legal experts have raised concerns that US policies, processes, and legal statutes are inadequate and outdated for regulating fake news on the internet. In many cases, organized purveyors of fake news use paid social media–based advertising to spread false stories. Because twentieth-century lawmakers did not anticipate digital and online media, existing regulations have limited applicability. For example, under US law, foreign interests are not permitted to finance campaign ads that endorse or censure particular political candidates. However, as of October 2020, there were no US laws that prohibited foreign actors from using internet-based media to circulate disinformation that indirectly supports one candidate’s platform over another’s.

The Role of Tech Companies

According to experts on contemporary fake news, internet search engines and social media sites have fallen victim to organized groups that deliberately create and spread disinformation online. These groups, informally known as “troll farms,” exist in many places but are known to be particularly active in Russia. For example, a Russian-backed troll farm known as the Internet Research Agency (IRA) was at the center of the notorious disinformation campaign coordinated to interfere in the 2016 US presidential election.

The algorithms that tech companies use can make social media particularly vulnerable to the spread of fake news. Troll farm actors can exploit the algorithms that generate targeted content on platforms like Facebook and Twitter to play on existing political tensions, spreading disinformation designed to elicit strong emotional responses from users across the political spectrum. These organized operations typically focus on specific, divisive, and controversial issues, such as immigration and policing, to inflame disagreement among members of the voting public and further polarize public opinion.
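The dynamic described in the preceding paragraph can be illustrated with a toy model. The short Python sketch below is purely hypothetical: the example posts, the outrage_score field, and the scoring formula are invented for illustration, and real platform ranking systems are proprietary and far more complex. The sketch shows only the underlying logic, namely that a ranker optimizing for predicted engagement tends to push emotionally inflammatory content to the top of a feed.

    # A toy model of engagement-based feed ranking. Everything here is
    # invented for illustration; no real platform works this simply.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_engagement: float  # modeled chance of a click, like, or reply
        outrage_score: float         # 0.0 (neutral) to 1.0 (highly inflammatory)

    def ranking_score(post: Post) -> float:
        # A ranker that optimizes only for engagement: if inflammatory
        # content reliably draws more reactions, it rises to the top.
        return post.predicted_engagement * (1.0 + post.outrage_score)

    feed = [
        Post("Local council approves new park budget", 0.20, 0.0),
        Post("THEY are rigging the election! Share before it gets deleted!", 0.60, 0.9),
        Post("Study finds moderate exercise improves sleep", 0.30, 0.1),
    ]

    # Print the feed in the order an engagement-maximizing ranker would show it.
    for post in sorted(feed, key=ranking_score, reverse=True):
        print(f"{ranking_score(post):.2f}  {post.text}")

Under these invented numbers, the inflammatory post ranks first (1.14) even though its baseline appeal is modest, which mirrors how divisive material can crowd out neutral reporting in an engagement-driven feed.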

Dividing the public into opposing segments serves to make the voting tendencies of each group easier to manipulate and contributes to the creation of online echo chambers. Echo chambers are spaces in which a user does not encounter alternative viewpoints but only hears views similar to their own repeated back to them. Social media’s structure supports echo chambers because users can insulate themselves from differing political opinions by following only individuals, groups, and organizations that reinforce their perspectives. Social media experts note that echo chambers can create the illusion that a given user’s viewpoint is the dominant political opinion, even if the user holds what are, in reality, fringe beliefs.

Many academic studies and government reports have uncovered evidence that this type of organized interference influenced the outcome of the 2016 US presidential election. A 2018 analysis conducted by Ohio State University, for example, suggested that an intensive fake news campaign perpetrated by Russian troll farms likely had a substantial impact on the result. The study examined Democratic defections (voters who had cast ballots for Barack Obama in 2012 but did not vote for Hillary Clinton in 2016) and showed a strong relationship between belief in three prominent fake news stories and defection. Multiple reports published by federal entities, including congressional committees, intelligence agencies, and the Department of Justice, established that Russian military intelligence actively exploited social media to dissuade people from voting for Clinton.

Technology industry executives were called to appear before the US Congress in 2018 to answer questions regarding possible Russian troll farm activity on their platforms and other consumer protection issues. Executives from Twitter and Facebook testified in open sessions, and their largely inconclusive testimony was criticized by some as evasive and by others as exhibiting a lack of awareness of their companies’ roles in the proliferation of disinformation online. Many observers noted that the problem of fake news in social media is likely to persist regardless of any government action. In September 2020 reports in the New York Times indicated that fake news circulating on Facebook and Twitter attempting to influence the 2020 US presidential election could be traced back to the IRA. Intelligence officials have expressed concern that such groups have become more sophisticated since 2016 and that, in the absence of action against Russian interference, other countries like China and Iran have begun carrying out similar campaigns.

A central point of debate in determining social media companies’ roles and responsibilities regarding the dissemination of fake news is whether these sites are publishers or platforms. As publishers, such sites can be held legally responsible for the content their users post, but as platforms, they cannot. Facebook, which has been heavily implicated in allegations of Russian interference, has made contradictory claims about its status. The company has described itself as a “platform” to the public while identifying itself as a “publisher” in court proceedings. Legal experts generally agree that the publisher-versus-platform issue will require a definitive resolution to control the spread of fake news on social media. Some lawmakers have proposed holding tech companies legally liable for false and defamatory information posted on their platforms, while others have suggested that internet service providers should assume legal responsibilities for identifying and halting the spread of false or manipulated data.

Even without a clear solution to the publisher-versus-platform issue, and despite the reluctance of some tech leaders to acknowledge responsibility, most major social media sites have implemented policy changes since 2016 designed to help stem the flow of disinformation. For example, social platforms such as Facebook, Reddit, and Twitter have tightened their community guidelines and begun suspending users found to be participating in organized disinformation campaigns. In October 2020 Facebook identified and removed nearly three hundred accounts implicated in coordinated fake news plots. Twitter made headlines in 2020 for adding disclaimers to tweets containing false or misleading information. In May the platform flagged tweets from President Donald Trump as “potentially misleading” for the first time; the tweets repeated the unfounded claim that encouraging mail-in voting during the pandemic would lead to widespread voter fraud, and the president accused Twitter of election interference for fact-checking them.

In the days and weeks following the November 3, 2020, election, the president and members of his administration and legal team tweeted false information claiming widespread voter fraud and attempting to undermine election results in many states. By November 12 Twitter announced that it had marked about 300,000 tweets as containing disputed or misleading information about the election. Neither federal agencies nor state election officials found any proof of election fraud, and courts throughout the judicial system, including the Supreme Court, dismissed Trump’s legal complaints for lack of evidence.

Weaponizing Information

Fake news has remained a highly politicized issue since the 2016 US presidential election. President Trump has repeatedly dismissed his media critics as purveyors of fake news, effectively turning it into an epithet used to demean media sources with perceived partisan biases. For example, Trump has consistently attacked credible media outlets like the New York Times and CNN for publishing thoroughly sourced and verified reporting while expressing favor for those of questionable journalistic integrity such as the National Enquirer and One America News (OAN). In a 2020 NPR/PBS NewsHour/Marist poll, 51 percent of voters characterized Trump’s actions as encouraging election interference. Of the respondents, 82 percent believed it was likely they would encounter misleading information on social media and 77 percent believed it was likely that foreign countries would be the source of disinformation about candidates in the 2020 election.

Trump’s repurposing of the label “fake news” has led some public officials and media personalities to characterize virtually any criticism of them or their policies as fake news. Some dismiss all journalism with which they disagree as fake news even if it contains no factual errors. Fake news, however, can have real-world repercussions. Multiple domestic terror incidents, including a series of pipe bombs sent to prominent Trump critics in the weeks leading up to the 2018 congressional midterm elections and the antisemitic mass shooting at the Tree of Life synagogue in Pittsburgh, Pennsylvania, have been perpetrated by individuals later found to have espoused extremely partisan views and shared conspiracy theories on social media.

With a presidential election and the outbreak of a global coronavirus pandemic in 2020, fake news continued to have far-reaching implications in politics and public life. Social media has been used to spread misinformation about COVID-19, including statements made on Trump’s official Twitter account regarding treatment options for the virus, its lethal effects, and the president’s own immunity to it. Twitter has not only labeled these tweets as containing false information but has also implemented a function that prevents other users from sharing them. Still, controlling the spread of political, medical, and public health disinformation online remains a central challenge for social media companies.

The changes to everyday life necessitated by the pandemic have contributed to the growth of at least one notable and far-reaching online conspiracy theory: QAnon. At the core of QAnon is a belief that Trump is leading a secret war against an elite class of Satanic child molesters. Accounts that actively spread this theory have been able to take advantage of the fact that more people are socializing, working, and learning predominantly online. By infiltrating private Facebook groups and linking misinformation about COVID-19 to the core QAnon conspiracy, purveyors of the fringe theory were able to move it into the mainstream. In July 2020 Twitter removed over seven thousand accounts related to QAnon in an attempt to mitigate offline harm. In October YouTube implemented warnings on QAnon content and, after less drastic measures had failed, Facebook banned QAnon accounts on all of its platforms.

Policy makers note that journalists, private enterprises, politicians, and members of the public all have important roles to play in the fight against the influence of fake news. Introducing financial disincentives for spreading fake news and improving digital literacy among members of the news-consuming public are both consistently cited as important, actionable steps that could diminish the political impact of disinformation.

As foreign actors have continued to weaponize disinformation and attempt to influence the outcomes of democratic elections in the United States and Europe, both domestic and international organizations have advocated creating an international framework for dealing with fake news. They contend that foreign influence campaigns undermine the integrity of elections and cause voters to lose faith in the democratic process. With fake news implicated in voter suppression tactics, the Committee on House Administration issued a report in October 2019 on amending the Federal Election Campaign Act of 1971 to integrate online communications and advertising into the preexisting electioneering law. Furthermore, Democratic lawmakers reintroduced the Deceptive Practices and Voter Intimidation Prevention Act in the US House of Representatives in June 2019; the bill had first been proposed in 2007 by Senators Barack Obama and Chuck Schumer. The 2019 version of the bill was not expected to become law during the 116th Congress, which ended in January 2021.

“Fake News on Social Media.” Gale Opposing Viewpoints Online Collection, Gale, 2021. Gale In Context: Opposing Viewpoints, link.gale.com/apps/doc/OBOYIA996440220/OVIC?u=pgcc_main&sid=bookmark-OVIC&xid=ea31b199. Accessed 5 Dec. 2021.

COPYRIGHT 2021 Gale, a Cengage Company

 

License


To the extent possible under law, Lisa Dunick has waived all copyright and related or neighboring rights to Readings for Writing, except where otherwise noted.
