Over the past few weeks, Mark Zuckerberg has faced a wave of accusations of complacency over Facebook's sluggish efforts to curb the fake news stories that circulate on the social media platform, particularly those that spread during the presidential election. Some have even cited fake Facebook news stories as having influenced the election's outcome. Whether or not that is verifiably true, there is something to be said about the circulation of false information on Facebook.
Just after the election, Zuckerberg called the claim that Facebook news skewed the election "a pretty crazy idea." Facebook has long insisted that it is a technology company, not a publisher. But while Facebook may have started out as a technology company, it has certainly evolved into much more.
Facebook has become a content platform. It is the largest social network worldwide, with nearly two billion users who collectively share more than four billion pieces of content daily. What's more, Facebook not only provides a platform for content, it also curates that content through algorithms that decide what gets promoted in a user's News Feed.
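To make that curation concrete, here is a minimal, purely hypothetical sketch of an engagement-weighted ranker. Facebook's actual News Feed algorithm is proprietary and vastly more complex; every field name and weight below is invented for illustration. The point the sketch captures is that a ranker of this kind scores engagement, not accuracy.

```python
# Toy illustration (NOT Facebook's actual algorithm) of engagement-weighted
# feed ranking. All field names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int
    hours_old: float

def score(post: Post) -> float:
    """Hypothetical ranking: weighted engagement signals, decayed by age."""
    engagement = post.likes + 2.0 * post.shares + 1.5 * post.comments
    return engagement / (1.0 + post.hours_old)

feed = [
    Post("hoax-site", likes=900, shares=400, comments=250, hours_old=3.0),
    Post("major-newspaper", likes=1200, shares=150, comments=80, hours_old=12.0),
]

# The highest-scoring posts rise to the top of the feed, regardless of
# whether the underlying content is true.
for post in sorted(feed, key=score, reverse=True):
    print(f"{post.author}: {score(post):.1f}")
```

Under these invented weights, the heavily shared hoax post outranks the newspaper story, which is exactly the dynamic critics worry about.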
Facebook has also become a news source. A recent Pew Research Center study found that 62 percent of U.S. adults get news from social media (with Facebook being the top social media news source), and 18 percent do so often.
Whatever Zuckerberg and Facebook's other founding executives originally intended the site to be, that has changed. Its nearly two billion users now dictate the function of the platform by how they use it every day.
All this considered, the question arises: given the near-monopoly Facebook holds globally, the vehicle it gives to content, and the large audiences who turn to the site daily for much of their news, should Facebook be responsible for curbing the misinformation and fake news shared through its site?
The circulation of fake news and misinformation increased during the presidential election, with articles claiming that Pope Francis had endorsed Trump and that a federal agent who had been investigating Hillary Clinton was found dead. These articles, and others like them, were shared countless times across the site by users who believed the information to be valid. BuzzFeed News reports that "20 of the top-performing false election stories from hoax sites and hyperpartisan blogs generated 8,711,000 shares, reactions, and comments on Facebook," while during the same period, "20 of the best-performing election stories from 19 major news websites generated a total of 7,367,000 shares, reactions, and comments on Facebook." By this measure, the fake news outperformed the real news.
Earlier this week, Zuckerberg released another statement declaring that he takes misinformation very seriously and that Facebook has been working on the problem for a long time, calling it complex both technically and philosophically. And indeed it is a complex problem to tackle. Facebook must be careful not to discourage the sharing of opinions or mistakenly restrict accurate content. "We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties," he says.
While some of these false stories are created with the agenda of skewing the truth and misleading the reader, not all are. One source of fake news is organizations that fabricate stories purely to make money off clickbait headlines. To combat those financial incentives, Zuckerberg said that Facebook will be "disrupting the economics" of fake news through its ads policies, and the company has officially banned fake news websites from its ad network. This is certainly a good first step, though something deeper remains unacknowledged.
In his initial remark that Facebook influencing the election was "a pretty crazy idea," Zuckerberg explained his reasoning: "voters make decisions based on their lived experiences."
However, what is being overlooked is that, for its users, interactions that take place on Facebook are lived experiences. When you post content on social media and your friends like, share, and discuss that content, that is a real-life interaction, one the user experiences mentally and emotionally as fully as a face-to-face exchange. Those interactions have at least the potential to influence a person's thoughts and feelings on a subject.
This is all part of a much larger conversation, and much more could be said about the many aspects this subject raises. But one thing seems clear: tech is no longer just tech. It no longer operates solely within the confines of its own compartmentalized, screen-sized realm. As tech is increasingly woven into our social lives and relied upon for a number of necessary daily tasks, it will have to answer to much larger social, philosophical, and ethical questions.
Should people be turning to Facebook for their news in the first place, rather than getting it directly from actual news sources? I can't answer that. But people are getting their news from Facebook, and that is something Facebook, along with other social media sites and tech companies, is going to have to answer for in the not-so-distant future.