Facebook Built Its Vision of Democracy on Bad Data


Mark Zuckerberg took to Facebook Wednesday to once more defend himself and his platform. Responding to a cavalierly tweeted charge of anti-Trump bias from the President of the United States, Zuckerberg again repeated his claim that Facebook was “a platform for all ideas,” and that, contrary to unfolding public opinion, his company did much more to further democracy than to stifle it. For evidence, Zuckerberg—as is his wont—turned to the data. “More people had a voice in this election than ever before,” he wrote. “There were billions of interactions discussing the issues that may never have happened offline.” He also pointed to the number of candidates that used Facebook to communicate, and the amount of money they spent publishing political advertising on his platform.

Zuckerberg has made this kind of quantitative argument before. In his first letter to investors in 2012, he wrote that “people sharing more … leads to a better understanding of the lives and perspectives of others” and “helps people get exposed to a greater number of diverse perspectives.”

These arguments rest on a simple equation: The amount of information that a population shares is directly proportional to the quality of its democracy. And, as a corollary: the more viewpoints that get exposed, the greater the collective empathy and understanding.

That math has worked out well for Facebook for most of its history, as it convinced its users to share more information in the name of community and openness. It found its ultimate expression in the Arab Spring, when protestors around the Middle East connected over Facebook to have conversations they couldn’t in public. In retaliation, some of those threatened governments shut down the internet, only proving the point: good guys spread information, and bad guys try to stop it.

But as Facebook has grown, that math no longer applies. Today, Facebook users perform two very different functions: they are both sources and recipients of information. Zuckerberg’s formulation, that more information is always empowering, may be true when I’m sharing information—I certainly benefit from my ability to say whatever I want and transmit that information to anyone in the world. But it’s not necessarily the case when it comes to receiving information. Future Shock author Alvin Toffler saw the problem back in 1970, when he coined the term “information overload.” “Just as the body cracks under the strain of environmental overstimulation,” he wrote, “the ‘mind’ and its decision processes behave erratically when overloaded.” Fellow futurist Ben Bagdikian expressed similar concerns, writing that “the disparity between the capacity of machines and the capacity of the human nervous system” results in “individual and social consequences that are already causing us problems, and will cause even more in the future.”

Zuckerberg’s corollary, that exposure to more viewpoints makes you more informed, doesn’t fare any better. By that logic, CNN’s shoutfest-panels, in which a half-dozen consultants yell at one another, should be the most illuminating show on television. (It isn’t.) Or take Facebook executive Andrew Bosworth’s argument that the News Feed provides a more balanced picture of the world than The New York Times because it exposes its audience to more opinions.

We are certainly hearing more from one another than ever before. Ideas that were once dismissed as fringe, from white supremacy to socialism, are getting expressed and openly shared. By Zuckerberg’s math, that should be producing a more cohesive society and a better-functioning democracy. But that isn’t happening, because of what Zuckerberg’s equation leaves out.

Zuckerberg seems to think that Facebook users, armed with more information than ever, are perfectly capable of drawing their own conclusions. After the election, he dismissed claims that Facebook-borne fake news had swung the vote to Trump as condescending: “Voters make decisions based on their lived experience,” he said. Twitter made a similar argument in June, when its vice president of public policy, government and philanthropy wrote that its users wouldn’t be swayed by fake news on its platform because “journalists, experts, and engaged citizens tweet side-by-side correcting and challenging public discourse in seconds.” Trusting users to discern meaning from a barrage of tweets is the informational equivalent of the mythical homo economicus, the perfectly rational consumer who always acts in his own self-interest. It’s also a familiar argument for anyone who has railed against the power that our self-designated cultural gatekeepers exercised to…
