Documents indicate Facebook scramble as Capitol attacked

Former Facebook data scientist Frances Haugen speaks during a hearing of the Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security, on Capitol Hill in Washington in this Tuesday, Oct. 5, 2021 file photo. (Matt McClain/The Washington Post via AP)

As supporters of Trump stormed the U.S. Capitol on Jan. 6, 2021, battling police and forcing lawmakers into hiding, an insurrection of a different kind was taking place inside the world's largest social media company.

Thousands of miles away, in California, Facebook engineers were racing to tweak internal controls to slow the spread of misinformation and inflammatory content.

Emergency actions -- some of which were rolled back after the 2020 election -- included banning Trump, freezing comments in groups with records for hate speech, filtering out the "Stop the Steal" rallying cry and empowering content moderators to act more assertively by labeling the U.S. a "Temporary High Risk Location" for political violence.

But other measures, such as preventing groups from changing their names to terms such as Stop the Steal, were not fully implemented because of last-minute technology glitches, according to a company spreadsheet.

At the same time, frustration inside Facebook rose over what some saw as the company's halting and inconsistent response to rising extremism in the U.S.

"Haven't we had enough time to figure out how to manage discourse without enabling violence?" one employee wrote on an internal message board at the height of the Jan. 6 turmoil. "We've been fueling this fire for a long time, and we shouldn't be surprised it's now out of control."

It's a question that still hangs over the company today, as Congress and regulators investigate Facebook's part in the Jan. 6 riots.

Facebook has publicly blamed the proliferation of election falsehoods on former President Donald Trump and other social platforms.

In mid-January, Sheryl Sandberg, Facebook's chief operating officer, said the Jan. 6 riot was "largely organized on platforms that don't have our abilities to stop hate."

Mark Zuckerberg, Facebook's CEO, told lawmakers in March that the company "did our part to secure the integrity of our election."

But newly obtained company documents show the degree to which Facebook knew of extremist movements and groups on its site that were trying to polarize American voters before the election.

The documents also give new details on how aware company researchers were after the election of the flow of misinformation that posited that votes had been manipulated against Trump.

Sixteen months before last November's presidential election, a researcher at Facebook described an alarming development. She was getting content about the conspiracy theory QAnon within a week of opening an experimental account, she wrote in an internal report.

On Nov. 5, two days after the election, another Facebook employee posted a message alerting colleagues that comments with "combustible election misinformation" were visible below many posts.

Four days after that, a company data scientist wrote in a note to his co-workers that 10% of all U.S. views of political material -- a high figure -- were of posts that alleged that the vote was fraudulent.

In each case, Facebook's employees sounded an alarm about misinformation and inflammatory content on the platform and urged action -- but the company failed or struggled to address the issues.

The internal dispatches were among a set of Facebook documents obtained by The New York Times that give new insight into what happened inside the social media network before and after the November election, when the company was caught flat-footed as users weaponized its platform to spread lies about the vote.

What the documents do not offer is a complete picture of decision-making inside Facebook. Some internal studies suggested that the company struggled to exert control over the scale of its network and how quickly information spread, while other reports hinted that Facebook was concerned about losing engagement or damaging its reputation.

Yet, what was unmistakable was that Facebook's own employees believed the social network could have done more, according to the documents.

"Enforcement was piecemeal," read one internal review in March of Facebook's response to Stop the Steal groups, which contended that the election was rigged against Trump. The report's authors said they hoped the post-mortem could be a guide for how Facebook could "do this better next time."

Many of the dozens of Facebook documents reviewed by the Times have not been previously reported. Some of the internal reports were initially obtained by Frances Haugen, a former Facebook product manager turned whistleblower.

Andy Stone, a Facebook spokesperson, said the company was "proud" of the work it did to protect the 2020 election. He said Facebook worked with law enforcement, rolled out safety measures and closely monitored what was on its platform.

"The measures we did need remained in place well into February, and some, like not recommending new, civic or political groups remain in place to this day," he said. "The responsibility for the violence that occurred on Jan. 6 lies with those who attacked our Capitol and those who encouraged them."

A QANON JOURNEY

For years, Facebook employees warned of the social network's potential to radicalize users, according to the documents.

In July 2019, a company researcher studying polarization made a startling discovery: A test account that she had made for a "conservative mom" in North Carolina received conspiracy theory content recommendations within a week of joining the social network.

The internal research, titled "Carol's Journey to QAnon," detailed how the Facebook account for an imaginary woman named Carol Smith had followed pages for Fox News and Sinclair Broadcasting. Within days, Facebook had recommended pages and groups related to QAnon, the conspiracy theory that falsely claimed that Trump was facing down a shadowy cabal of Democratic pedophiles.

By the end of three weeks, Carol Smith's Facebook account feed had devolved further. It "became a constant flow of misleading, polarizing and low-quality content," the researcher wrote.

"We've known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups," the researcher wrote. "In the meantime, the fringe group/set of beliefs has grown to national prominence with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream."

INTO ELECTION DAY

Facebook tried to leave little to chance with the 2020 election.

For months, the company refined emergency measures known as "break glass" plans -- such as slowing down the formation of new Facebook groups -- in case of a contested result. Facebook also hired tens of thousands of employees to secure the site for the election, consulted with legal and policy experts, and expanded partnerships with fact-checking organizations.

In a September 2020 public post, Zuckerberg wrote that his company had "a responsibility to protect our democracy." He highlighted a voter registration campaign that Facebook had funded and laid out steps the company had taken -- such as removing voter misinformation and blocking political ads -- to "reduce the chances of violence and unrest."

"As soon as the election was over, they turned them back off or they changed the settings back to what they were before, to prioritize growth over safety," Haugen said in an interview with "60 Minutes."

Many measures appeared to help. Election Day came and went without major hitches at Facebook.

But after the vote counts showed a tight race between Trump and Joe Biden, then the Democratic presidential candidate, Trump posted in the early hours of Nov. 4 on Facebook and Twitter: "They are trying to STEAL the Election."

The internal documents show that users had found ways on Facebook to undermine confidence in the vote.

On Nov. 5, one Facebook employee posted a message to an internal online group called "News Feed Feedback." In his note, he told colleagues that voting misinformation was conspicuous in the comments section of posts. Even worse, the employee said, comments with the most incendiary election misinformation were being amplified to appear at the top of comment threads, spreading inaccurate information.

Even so, Facebook began relaxing its emergency steps in November, three former employees said. The critical postelection period appeared to have passed, and the company was concerned that some preelection measures, such as reducing the reach of fringe right-wing pages, would lead to user complaints, they said.

JAN. 6

On the morning of Jan. 6, with protesters gathered near the U.S. Capitol building in Washington, some Facebook employees turned to a spreadsheet. There, they began cataloging the measures that the company was taking against election misinformation and inflammatory content on its platform.

User complaints about posts that incited violence had soared that morning, according to data in the spreadsheet.

Over the course of that day, as a mob stormed the Capitol, the employees updated the spreadsheet with actions that were being taken, one worker involved in the effort said. Of the dozens of steps that Facebook employees recommended, some -- such as allowing company engineers to mass-delete posts that were being reported for pushing violence -- were implemented.

Zuckerberg and Mike Schroepfer, Facebook's chief technology officer, posted notes internally about their sadness over the Capitol riot. But some Facebook employees responded angrily, according to message threads viewed by the Times.

"I wish I felt otherwise, but it's simply not enough to say that we're adapting, because we should have adapted already long ago," one employee wrote. "There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions."

Another wrote: "I've always felt that on the balance my work has been meaningful and helpful to the world at large. But, honestly, this is a really dark day for me here."

In a Jan. 7 report, the scope of what had occurred on Facebook became clear. User reports of content that potentially violated the company's policies were seven times the volume of previous weeks, the report said. Several of the most reported posts, researchers found, "suggested the overthrow of the government" or "voiced support for the violence."

POST-MORTEMS

In March, Facebook researchers published two internal reports assessing the company's role in social movements that pushed the election fraud lies.

In one, a group of employees said Facebook had exhibited "the pattern." That involved the company initially taking "limited or no action" against QAnon and election delegitimization movements, then acting to remove that content only after the movements had already gained traction. The document was earlier reported by The Wall Street Journal.

Part of the problem, the employees wrote, was that Facebook's election misinformation rules left too many gray areas. As a result, posts that "could be construed as reasonable doubts about election processes" were not removed because they did not violate the letter of those rules.

Those posts then created an environment that contributed to social instability, the report said.

Another report, titled "Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement," laid out how people had exploited Facebook's groups feature to rapidly form election delegitimization communities on the site before Jan. 6.

"Hindsight being 20/20 makes it all the more important to look back, to learn what we can about the growth of the election delegitimizing movements that grew, spread conspiracy, and helped incite the Capitol insurrection," the report stated.

Another study turned over to congressional investigators, titled "Understanding the Dangers of Harmful Topic Communities," discussed how like-minded individuals embracing a borderline topic or identity can form "echo chambers" for misinformation that normalizes harmful attitudes, spurs radicalization and can even provide a justification for violence.

Examples of such harmful communities include QAnon and hate groups promoting theories of a race war.

"The risk of offline violence or harm becomes more likely when like-minded individuals come together and support one another to act," the study concludes.

Information for this article was contributed by Ryan Mac and Sheera Frenkel of The New York Times; and by Alan Suderman and Joshua Goodman of The Associated Press.