
A Gumshoe Reporter's Revealing Behind-the-Scenes Look at Facebook

Paul M. Barrett
Thursday, January 25, 2024, 10:11 AM

A review of “Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets” (Doubleday, 2023)

The Facebook logo on a keyboard. (Pixabay, https://pixabay.com/service/license/)


In the wake of the Jan. 6, 2021, riot at the Capitol, executives at Facebook sought to demonstrate that the social media platform bore no responsibility for the violent, conspiratorial rhetoric that culminated in the insurrection. Mark Zuckerberg, founder and chief executive of the company, testified before Congress in March 2021 about the effectiveness of Facebook’s filtering technology. “More than 95 percent of the hate speech that we take down is done by an AI [artificial intelligence] and not by a person,” he told lawmakers. “And I think it’s 98 or 99 percent of the terrorist content that we take down is identified by an AI and not a person.”

There were two problems with Zuckerberg’s assertion. First, it didn’t reveal anything about the percentage of hate speech or terrorist content that Facebook actually removes. It indicated only the degree to which the company claimed that it relies on AI, as opposed to human intervention, to remove certain harmful content.

Second, it appears that Zuckerberg’s claim was highly misleading. In October 2021, a wave of leaks by former Facebook employee Frances Haugen revealed, among many other things, that fully 95 percent of the hate speech posted on Facebook stays on Facebook. In one internal document, the company’s own analysts stated: “We estimate that we may action as little as 3-5% of hate and ~0.6% of V&I [violence and incitement] on Facebook despite being the best in the world at it.”

Now, Jeff Horwitz, the Wall Street Journal reporter who shepherded most of Haugen’s revelations into print, has published an authoritative book putting the leaks into a broader context. “Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets” hands up a withering indictment of one of the most influential companies of the digital era and raises the question of whether the social media industry as a whole does more harm than good.

One through line in Horwitz’s reporting is that, for all of Zuckerberg’s talk about his mission being to “connect every person in the world,” most key decisions at the company now known as Meta are actually driven by its core business imperative of selling advertising. Companies buying ads want evidence that a growing number of consumers are paying attention to their pitches. To please advertisers, Facebook and its sister platform Instagram, along with Google’s YouTube, X (formerly known as Twitter), and TikTok, which is controlled by the Chinese company ByteDance, have designed algorithms that determine whether content gets recommended and possibly “goes viral” or gets pushed to the bottom of users’ feeds, where few will ever see it. The algorithms vary in their details, but all of them heavily prioritize “engagement,” a metric that captures whether users discernibly react to content by sharing, liking, or commenting on it.

Engagement has a dark side, however. As Horwitz explains, experience and empirical research reveal that content that provokes anger, fear, and division is most likely to drive high engagement numbers and “go viral.” As a result, platform algorithms tend to promote sensational, conspiracy-minded content, as well as misinformation. There are steps that Meta and other companies employing the advertising-engagement business model can, and occasionally do, take to curb the harmful effects of the algorithms they have created. But in most cases, Horwitz writes, these initiatives are instituted for short periods and then are rolled back at the insistence of top management so that engagement, user growth, and revenue flows aren’t compromised for long. 

“Broken Code” brought to mind a conversation I once had with Roger McNamee, an early investor in the initial, tamer version of Facebook, which started in 2004. Over time, McNamee became a prominent critic of the company (and, not incidentally, the author of his own book, “Zucked: Waking Up to the Facebook Catastrophe,” published in 2019). As it has evolved, according to McNamee, Meta has become like an arson-prone fire department. It continually kindles harmful content fires and then, when these blazes draw negative attention, it rushes to put them out with an elaborate content moderation system and emergency tamping-down procedures, like temporarily restricting re-shares or labeling falsehoods that might endanger lives. In “Broken Code,” Horwitz notes that Facebook executives themselves refer to the flare-ups that go public as “PR fires” and to the short-lived emergency responses as “break the glass” measures, as in breaking the glass to pull a fire alarm.

Shooting the Messengers

Another theme that unifies “Broken Code” is the division within Meta between top management and rank-and-file employees, including content moderators, researchers with social science doctorates, and myriad others with the word “integrity” in their titles. Over the years, the rank and file have continually pushed for reforms that would mitigate the harmful side effects of the company’s activities, only to have occupants of the C-suites shoot them down. Horwitz’s strength is the specificity of his reporting, often buttressed by the internal documents Haugen made public. Consider:

  • In 2017, Facebook AI and data science experts collaborated on an algorithm fix that would dampen the distribution of content disproportionately popular with “hyperactive users”—people who post hundreds of times a day. “Because hyperactive users tended to be more partisan and more inclined to share misinformation, hate speech, and click bait, the intervention produced integrity gains almost across the board,” Horwitz writes. But Joel Kaplan, the company’s chief Washington lobbyist and a former senior aide to Republican President George W. Bush, objected that the technique, known as “Sparing Sharing,” would affect conservative publishers and activists disproportionately. That’s because conservative hyperactive users are far more prevalent on Facebook than their liberal counterparts, Horwitz writes, and have demonstrated more of a tendency to share misinformation. Zuckerberg, according to the author, listened for 10 minutes to a late-2017 pitch for Sparing Sharing. “Do it, but cut the weighting by 80 percent,” the CEO ordered. “And don’t bring me something like this again.” Reduced to one-fifth of its proposed impact, Horwitz writes, “the arrival of Sparing Sharing wasn’t going to meaningfully change the platform.” This constituted a choice by Zuckerberg, not an inevitable aspect of faceless technology.

  • In 2018, the CEO publicly announced another adjustment, called “Meaningful Social Interactions.” He declared in a Facebook post that the change would shift the platform’s focus from “helping you find relevant content to helping you have more meaningful social interactions.” What that vague promise meant in practice, Horwitz recounts, was that Facebook “wanted to elicit specific user behaviors” that would turbocharge engagement. Under the new system, “a re-share was worth up to 30 times as much as a like. Comments were worth 15 times as much, and emoji responses worth five.” (A rough sketch of how that weighting plays out follows this list.) Facebook integrity staffers knew immediately that prioritizing re-shares, comments, and emojis would favor angry, factually challenged content. And sure enough, subsequent in-house research cited by Horwitz on the effects of Meaningful Social Interactions found that it favored material that triggered “negative user sentiment.” Other internal Facebook research, according to Horwitz, found that political parties in Poland and Spain noticed the change and explicitly “shifted the proportion of their posts from 50/50 positive/negative to 80 percent negative and 20 percent positive.”

  • In 2020, company researchers warned Nick Clegg, now president of global affairs and Zuckerberg’s top lieutenant, that he should not argue that higher rates of polarization among the elderly—the demographic that generally uses social media the least—constituted evidence that Facebook does not contribute to political divisiveness. “Internal research points to an opposite conclusion,” the Facebook employees wrote. Horwitz elaborates that “Facebook, it turned out, fed false information to senior citizens at such a massive rate that they consumed far more of it despite spending less time on the platform.” Clegg was unmoved. In the wake of the Jan. 6 riot, the former British politician published an essay on Medium that cited the internally debunked claim as disproving that “we have simply been manipulated by machines all along.” 
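To make the Meaningful Social Interactions weighting concrete, here is a minimal sketch of how an engagement score built on those reported multipliers could rank two posts. It is purely illustrative: the 30x/15x/5x weights relative to a single like come from Horwitz’s account, but the data structure, function names, and example counts are hypothetical and are not Meta’s actual ranking code.

```python
# Illustrative only: weights reflect Horwitz's reporting that a re-share counted
# for up to 30 times a like, a comment 15 times, and an emoji reaction 5 times.
# The post fields, function, and example counts below are hypothetical.

SIGNAL_WEIGHTS = {
    "likes": 1,
    "emoji_reactions": 5,
    "comments": 15,
    "reshares": 30,
}

def engagement_score(post: dict) -> int:
    """Sum a post's engagement signals, weighted per SIGNAL_WEIGHTS."""
    return sum(post.get(signal, 0) * weight
               for signal, weight in SIGNAL_WEIGHTS.items())

# A calm post that mostly draws likes vs. an inflammatory post that draws
# comments and re-shares (hypothetical counts).
calm_post = {"likes": 1000, "emoji_reactions": 50, "comments": 20, "reshares": 10}
angry_post = {"likes": 200, "emoji_reactions": 150, "comments": 300, "reshares": 120}

print("calm:", engagement_score(calm_post))    # 1850
print("angry:", engagement_score(angry_post))  # 9050
```

Despite drawing a fifth as many likes, the inflammatory post scores roughly five times higher under this kind of formula, which is the dynamic the integrity staffers flagged when they warned that the change would favor angry, factually challenged content.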

Horwitz diligently notes that Meta spokespeople contest his characterizations. The company told him, for example, that the appropriate takeaway from Clegg’s Medium piece on polarization was that “research on the topic is mixed.” But that is not a good-faith reading of the social science literature (which colleagues and I have written about in detail).

To understand Clegg’s verbal sleight of hand, one must give his convoluted prose an exceedingly close reading. In the Medium piece, he wrote: “What evidence there is simply does not support the idea that social media, or the filter bubbles it supposedly creates, are the unambiguous driver of polarization that many assert.” But serious analysts of social media do not assert that Facebook and the other platforms are the sole or even primary “driver of polarization.” Instead, serious observers argue that social media companies have played an exacerbating role. They pour fuel on the fire—a blaze that began burning decades before social media was invented and that over the years also has been fed by right-wing talk radio, Fox News, websites like Gateway Pundit, podcasts like the Daily Wire, Donald Trump and his minions, and, yes, to a lesser degree by left-wing firebrands like the Grayzone website and Cenk Uygur’s Young Turks channel on YouTube. Clegg, in other words, knocks down a straw man. He exemplifies a disingenuousness on the part of Meta’s leadership that Horwitz illustrates repeatedly throughout his book.

“Broken Code” is not perfect. Horwitz covers a lot of ground but does not get to everything that’s wrong with Meta’s business model. For instance, he does not examine the weaknesses in content moderation stemming from the company’s decision years ago to outsource the vast majority of its content-review workforce to outside vendors in lower-wage regions, a practice common throughout the industry. Meta does not hire, train, or supervise most of the 15,000 people charged with the onerous but vital task of determining whether content featuring hate, gore, or lies should remain on its platforms or be down-ranked or removed. By treating human content moderation at arm’s length and paring labor costs to a minimum, social media companies avert their gaze from the difficulty of this essential function and inevitably undercut its effectiveness.

But this quibble should not deter anyone concerned about the pernicious side effects of social media from reading “Broken Code.” It represents the best kind of traditional gumshoe journalism and demonstrates the relevance of rigorous factual reporting in a digital era dominated by hot takes and vituperation.


Paul Barrett is the deputy director and senior research scholar of the Center for Business and Human Rights at New York University’s Stern School of Business and an adjunct professor at the NYU School of Law. He formerly worked for more than 30 years for the Wall Street Journal and Bloomberg Businessweek and is the author of four nonfiction books, including the New York Times bestseller “Glock: The Rise of America’s Gun.”
