November 14, 2016

This Election Facebook Failed Us All

By now the narrative is clear: the outcome of Tuesday’s election came as a surprise to many people in the US and around the world, and to supporters of both political parties.

Whenever a predicted outcome is proven wrong, there is a period of reflection, which often sparks curiosity. Presidential campaigns have used the Internet to support their efforts since 1996, and the influence of the Internet and social networks on election outcomes has grown significantly over the past several election cycles. Facebook COO Sheryl Sandberg estimated that over two million people registered to vote after seeing a reminder on Facebook this year, and Pew estimates that in 2016 a majority of U.S. adults (62%) got news from social media channels.

As a population, we have become educated on the impact social media can have in shaping election outcomes, particularly since the wildly successful use of social media in 2008 by Obama’s campaign. As social networks including Facebook, Instagram and Twitter become increasingly influential in shaping public opinion, these networks have also evolved to take on more control of how information is shared, sorted and prioritized.

The Evolution of Facebook’s Algorithm

To understand how we got here, it’s worth looking back at the history of Facebook’s algorithm. In 2009, Facebook debuted a new default sorting order: where updates and photos had previously appeared in reverse chronological order, posts were now ranked by popularity, quantified by the engagement each post received. Subsequent updates in 2013, 2015 and 2016 further refined which information users would see in their newsfeeds, as Facebook attempted to provide users with more personalized content that matched their interests. This also allowed Facebook to target advertising to specific users based on geographic location, age, hobbies and interests, and the same targeting applies to political beliefs. The Wall Street Journal recently published an analysis demonstrating the difference in the types of news an individual associated with a specific political party would see in their newsfeed.
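
To make the change concrete, here is a minimal sketch, in Python, of the difference between a reverse-chronological feed and a popularity-based one. The post fields, engagement weights and scoring formula are illustrative assumptions, not Facebook’s actual ranking system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since epoch
    likes: int
    comments: int
    shares: int

def chronological_feed(posts: List[Post]) -> List[Post]:
    """Pre-2009 behavior: newest posts first, regardless of popularity."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: List[Post]) -> List[Post]:
    """Post-2009 behavior, heavily simplified: most-engaged posts first.

    The weights below are invented for illustration; the real ranking
    uses many more signals with undisclosed weights.
    """
    def score(p: Post) -> float:
        return 1.0 * p.likes + 2.0 * p.comments + 3.0 * p.shares
    return sorted(posts, key=score, reverse=True)
```

The practical consequence of the second ordering is that a highly engaging post can outrank newer or more accurate ones, which is the behavior at the center of the issues discussed below.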

The issues created or reinforced by Facebook’s algorithm are three-fold:

1) Much of the news shared on Facebook comes from biased or fake news sites. Individuals with no political affiliation whatsoever have rolled out websites and shared biased news simply to turn a profit. For example, BuzzFeed News reported on more than 100 pro-Trump sites being run by teenagers in Macedonia who say they “don’t care about Donald Trump” and are simply responding to straightforward economic incentives. These individuals were publishing sensationalist and often false content that caters to Trump supporters in order to drive web traffic and make their websites more profitable. Many of these Facebook pages have hundreds of thousands of followers, and BuzzFeed News’ research found that “the most successful stories from these sites were nearly all false or misleading.”

2) This creates an echo chamber, a dynamic that applies to many situations both inside and outside of politics. According to a recent NPR article: “algorithms, like the kind used by Facebook, instead often steer us toward articles that reflect our own ideological preferences, and search results usually echo what we already know and like.”

3) It’s unclear how Facebook assigns political affiliation to an individual. Individuals increasingly use Facebook as their only source of news and information because they expect to find a large and diverse variety of research, news and opinion all in one place. Facebook likely takes a large number of variables into account when assembling each user’s newsfeed, such as the affiliations of connections, comments and likes, age range, geography and education, but the ultimate formula is not public. Because the inner workings of the process are not fully understood, undecided voters may take certain actions, like commenting on or sharing a piece of partisan information in an effort to learn more, that could skew the content directed to them going forward (a simplified sketch of how such signals might be combined appears below). The biggest issue is that users have no control over how their newsfeeds evolve in response to the actions they take, and it’s very likely that most users aren’t even aware this is happening.
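
The variables above could be combined in many ways. The sketch below is purely hypothetical: the signal names, weights and example history are invented to show how a handful of actions, such as a single share or comment, might nudge an inferred affiliation score and, in turn, the content a user is shown. It is not a description of Facebook’s actual model.

```python
# Hypothetical illustration only: the signals and weights are invented.
ENGAGEMENT_WEIGHTS = {
    "liked_page": 1.0,
    "commented_on_post": 1.5,
    "shared_post": 2.0,
}

def affiliation_score(actions):
    """Sum signed engagement signals into a single left/right score.

    `actions` is a list of (action_type, leaning) pairs, where leaning
    is -1 for left-leaning content and +1 for right-leaning content.
    """
    return sum(ENGAGEMENT_WEIGHTS.get(action, 0.0) * leaning
               for action, leaning in actions)

# An undecided user who comments on and shares one partisan post "to
# learn more" immediately acquires a nonzero score, which a ranking
# system could then use to prioritize similar content.
history = [("commented_on_post", +1), ("shared_post", +1)]
print(affiliation_score(history))  # 3.5 -> the feed tilts toward that side
```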

The biggest concern is that the majority of Facebook users likely have little to no understanding of Facebook’s algorithm and may not realize that the content they see is being uniquely targeted to them. Most users are also unable to discern the authority of the news they are viewing or whether it may be biased.

Fact vs. Fiction

The amount of fake content circulating on social networks is also a growing concern, particularly coupled with Facebook’s algorithm, which gives more authority to posts that receive more engagement. Two such examples from this election are a fake Donald Trump quote attributed to People magazine, which initially carried a watermark for The Other 98%, a popular left-leaning Facebook page, according to AOL.com, and a fake Hillary Clinton quote that originally ran on TheRightists.com, a site that describes itself as an independent news platform allowing people and independent journalists to bring the news directly to readers, according to Snopes.com.

Websites like www.snopes.com run fact checks to help users understand which information is accurate, and a quick search shows that neither candidate ever said either of these things.

Facebook has created a “report fake news” button meant to let users help Facebook screen and remove false news and information from the site. The problem is that many users don’t fact-check information before they share it, and the current algorithm can allow false information to go viral in a very short period; significant and irreversible damage can be done in a matter of minutes. Because the algorithm rewards popularity and engagement, corrections to that false news and information are much less likely to make it into the newsfeeds of users who have already shared it.

What is the Solution?

Facebook and other social networks are incredibly powerful tools for disseminating information and generating awareness, but social media users have no obligation to verify or vet the information they share, or to educate themselves on how to distinguish fact from fiction. One of the fundamental principles of social media is that users can share their views on anything, and it is often treated as a protected space for sharing opinions.

The question then lies in the responsibility of social networks to act as a responsible third party in deciding what content they promote and why. Before algorithms came into play this would have been a harder point to make, but as soon as social networks began organizing, sorting and prioritizing information for users, they also assumed some responsibility for the virality of content and how information is shared.

Google’s algorithms incorporate many variables, but the credibility of news sources plays a significant role in rankings. Moving forward, Facebook may consider ranking news sources for credibility and indicating that rank on posts, or prioritizing credible news sources over less credible ones. Better measures must also be put in place to remove false information, and a more effective system should be implemented to let users know when they have shared something factually untrue.
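
One way to express that proposal is to fold a per-source credibility weight into an engagement-based ranking score. The sketch below is again illustrative: the credibility values, source names and multiplicative formula are assumptions, not a description of any platform’s actual system.

```python
# Illustrative only: credibility values and the formula are assumed.
SOURCE_CREDIBILITY = {
    "established-newspaper.example": 1.0,
    "unverified-blog.example": 0.3,
    "known-fabricator.example": 0.05,
}

def ranked_score(engagement: float, source: str) -> float:
    """Down-weight engagement coming from low-credibility sources."""
    credibility = SOURCE_CREDIBILITY.get(source, 0.5)  # unknown sources
    return engagement * credibility

# A viral post from a fabricated-news site ends up scoring below a
# moderately shared post from a credible outlet.
print(ranked_score(10_000, "known-fabricator.example"))      # 500.0
print(ranked_score(1_000, "established-newspaper.example"))  # 1000.0
```

Under a scheme like this, virality alone would no longer be enough to dominate a newsfeed; the credibility of the source would matter too.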

As social networks take on more power in shaping our collective thinking, they must also take on more responsibility for how information is presented, what information is presented, and whether that information is even accurate.

By: Stephanie Dressler, Senior Vice President at Dukas Linden Public Relations