Are Facebook’s efforts to reduce misinformation after the Capitol attack too little, too late?
Facebook has temporarily banned ads for gun accessories and tactical gear through at least January 22, two days after President-elect Joe Biden’s inauguration.
The decision comes after BuzzFeed News reported that, despite concerns from employees, the tech company had advertised body armor, gun holsters, and other military equipment alongside content about election misinformation and the Capitol riot.
It’s just one of several actions Facebook has been pressured to take after the January 6 insurrection at the Capitol building that left five people dead. Several senators called on the company to halt its military gear ads following the insurrection, but a number of lawmakers went further, arguing that it wasn’t just the advertisements that ought to draw concern but the platform itself. Rep. Alexandria Ocasio-Cortez (D-NY) said during a virtual town hall on Friday that “Mark Zuckerberg and Facebook bear partial responsibility for Wednesday’s events.”
Though the attack seemed to catch Capitol Police by surprise — the force was severely underprepared to stop the mob of Trump supporters, QAnon believers, neo-Nazis, and Proud Boys from storming the Capitol — security experts had warned of the potential severity of the protest. After all, the plans had been hashed out in plain sight for weeks on social media. Platforms like Facebook and Twitter are now being forced to reckon with how they allowed extremist rhetoric and the organization of a violent protest to thrive and spread online.
“Everyone who was a law enforcement officer or a reporter knew exactly what these hate groups were planning,” Washington, DC, attorney general Karl Racine told MSNBC. “They were planning to descend on Washington DC, ground zero was the Capitol, and they were planning to charge and, as Rudy Giuliani indicated, to do combat justice at the Capitol.”
Facebook has attempted to minimize its responsibility for what happened, instead blaming niche social networks like Parler, where far-right content goes unchecked. On January 11, Facebook COO Sheryl Sandberg said in an interview with Reuters, “We again took down QAnon, Proud Boys, Stop the Steal, anything that was talking about possible violence last week,” adding, “Our enforcement is never perfect, so I’m sure there were still things on Facebook. I think these events were largely organized on platforms that don’t have our abilities to stop hate, don’t have our standards and don’t have our transparency.”
Yet evidence suggests Facebook was crucial to organizers’ efforts to spread misinformation and build awareness of the protest. Eric Feinberg, vice president of the Coalition for a Safer Web, told the Washington Post that 128,000 people were using the hashtag #StopTheSteal on the site in the days leading up to the attack. Media Matters also reported that two dozen Republican Party officials and organizations used Facebook to coordinate bus trips to Washington, DC, for the rally that preceded the insurrection. The platform was at least critical enough to organizing the event that one senator asked Facebook to preserve records of all of the related content as potential evidence in legal action against the rioters. And it wasn’t until days after the attack that Facebook said it would remove #StopTheSteal-related content.
“If you took Parler out of the equation, you would still almost certainly have what happened at the Capitol,” Media Matters president Angelo Carusone told Salon. “If you took Facebook out of the equation before that, you would not.” Parler has also faced consequences for its ultra-radical approach to free speech. Amazon Web Services, which previously hosted the app, took it offline, and it has yet to find a new service provider. Google and Apple also removed it from their app stores.
Facebook and other social platforms face increasing pressure for more oversight
Perhaps the most effective method social networks have used to combat misinformation and hyperpartisan content is also the simplest: just blocking Trump. After Twitter permanently banned Trump on January 8, online misinformation about election fraud dropped 73 percent the following week. Now that nearly every major social network has followed suit — even Salesforce, which hosted the Trump campaign’s email listserv, has cut ties — it’s likely the trend will continue internet-wide.
To some, these actions may come as too little, too late. Facebook has been used as an organizing tool for the 2017 Charlottesville white supremacist rally, the nationwide anti-mask protests in 2020, and the QAnon conspiracy theory. When asked why it hasn’t done more to stop the spread of extremist beliefs and groups, the company has typically deferred to the idea that it is protecting its users’ free speech.
But Dipayan Ghosh, a former advisor to Facebook and the Obama White House and the leader of the Harvard Kennedy School’s Digital Platforms and Democracy Project, argued in the Washington Post that Facebook can no longer be trusted to moderate its own content without outside regulation. “Ultimately, we must have better protections in place, protections that counteract the opacity of social media algorithms with radical transparency and the uninhibited collection and use of personal data with consumer privacy rights. Meanwhile, we must rethink the legal mechanisms — namely, Section 230 of the Communications Decency Act — the industry has employed to shield itself from the content moderation debate.”
Though the Trump presidency has been marked by its anti-regulatory approach, under Biden, the FCC could theoretically pursue a sweeping regulatory agenda that includes bringing back net neutrality, expanding internet access, and retooling Section 230, although the latter may not be a major priority for his administration. Biden’s inauguration certainly won’t fix the massive polarization of social media, but it’s possible that the internet could be a more stable place after he takes office.
Author: Rebecca Jennings