In a new letter to Facebook CEO Mark Zuckerberg, US Sen. Michael Bennet is accusing the company of not doing enough to stop the spread of political misinformation. | Photo by Alex Edelman/Getty Images

US Sen. Michael Bennet sent a letter to Mark Zuckerberg asking what the company is doing to keep its platforms from undermining democratic elections

Facebook says it’s expanding its efforts to fight misinformation on its platforms ahead of the 2020 US presidential elections. But many people, including top-ranking US politicians, aren’t convinced.

On Monday, US Sen. Michael Bennet (D-CO) sent a letter to Facebook CEO Mark Zuckerberg calling out the company for its “inadequate” efforts so far to stop manipulated media campaigns that have wreaked havoc on democratic processes around the world. In the letter, Bennet, who recently ended his 2020 bid for the White House, cites examples from countries such as the Philippines, where President Rodrigo Duterte’s campaign circulated viral disinformation, including a false endorsement from the pope, to help him win the 2016 election. He also points to Brazil, where 87 percent of Facebook’s WhatsApp users in the country reported seeing fake news on the platform during the 2018 presidential elections.

“In dozens of countries, Facebook has unparalleled power to shape democratic norms and debate, and as a result, elections,” states the letter, which was shared with Recode. “I am concerned that Facebook, as an American company, has not taken sufficient steps to prevent its platforms from undermining fundamental democratic values around the world.”

Sen. Bennet’s letter asks for more specifics about how exactly Facebook will stop the reach of disinformation and hate speech — such as how many content reviewers it has hired for different languages, whether Facebook has “country-specific information” about the average amount of time that content violating its community standards remains on the platform before it is removed, and what steps Facebook plans to take to protect “vulnerable populations,” such as journalists or ethnic, racial, and religious minorities, from threats or harassment. Bennet gave Zuckerberg until April 1 to respond to the questions.

Bennet is not the only one who has called on Facebook to do more on these issues. In December, the Democratic National Committee sent a letter to Facebook COO Sheryl Sandberg expressing concern that the company was not devoting enough resources to detecting manipulative media campaigns on its platform ahead of the elections. In July, one of the company’s own fact-checking partners criticized the company for not being transparent enough about the impact of its work to reduce false information on the platform. And in October, 250 of Facebook’s employees signed an internal letter asking the company to reverse its policy of allowing political advertisements containing lies to run on the platform, such as a Trump ad that makes false claims about former Vice President Joe Biden.

Bennet’s letter is a reminder that social media companies still have a political misinformation problem. Since 2016, foreign actors, lobbyists, and even political campaigns have developed new and creative ways to skirt Facebook’s anti-misinformation policies. And as the 2020 elections get closer, politicians have used tactics like fake ads, paid social media armies, and manipulated videos to drum up support and drown out critics. Just last week, Facebook came under fire for allowing Democratic presidential candidate Mike Bloomberg to pay users to post content that blurs the line between an advertisement and a regular post.

That’s not to say Facebook hasn’t been doing anything differently since 2016, when Russian trolls spread disinformation on the platform to stoke US political divides, and the Trump campaign hired outside consultants such as Cambridge Analytica, which controversially exploited Facebook users’ private data in order to influence their vote. Facebook now uses third-party fact-checkers (although some argue not nearly enough for its more than 2 billion users) to review some viral political posts; it labels pages and ads from media outlets it considers to be state-controlled; and it spends money to fund protection for political campaigns from cyberattacks, among other measures. Earlier this month, the company announced that it took down about a dozen accounts linked to Iran and 80 linked to Russia that attempted to manipulate users with misinformation.

But politicians like Bennet are questioning whether all of that’s enough — and are demanding more information.

Read the letter in full below:

Dear Mr. Zuckerberg:

Recently, you wrote in the Financial Times that “Facebook is not waiting for regulation” and is “continuing to make progress” on issues ranging from disinformation in elections to harmful content on your platforms.[i] Despite the new policies and investments you describe, I am deeply concerned that Facebook’s actions to date fall far short of what its unprecedented global influence requires. Today, Facebook has 2.9 billion users across its platforms, including Messenger, WhatsApp, and Instagram.[ii] In dozens of countries, Facebook has unparalleled power to shape democratic norms and debate, and as a result, elections. I am concerned that Facebook, as an American company, has not taken sufficient steps to prevent its platforms from undermining fundamental democratic values around the world.

Globally, misuse of Facebook platforms appears to be growing worse. Last year, the Oxford Internet Institute reported that governments or political parties orchestrated “social media manipulation” campaigns in 70 countries in 2019 (up from 28 in 2017 and 48 in 2018). Oxford found that at least 26 authoritarian regimes used social media “as a tool of information control… [and] to suppress fundamental human rights, discredit political opponents, and drown out dissenting opinions.” It reported that Facebook was authoritarians’ “platform of choice.”[iii]

Case after case suggests that Facebook’s efforts to address these issues are insufficient. Ahead of both the Brazilian presidential election in 2018 and the European Union elections in 2019, Facebook reportedly took steps to limit misinformation on its platforms.[iv] Nevertheless, 87 percent of WhatsApp users in Brazil reported seeing fake news on the platform.[v] Facebook’s own analysis of the Brazilian election found that it was unable to prevent large-scale misinformation, according to media reports.[vi] In a survey of eight European countries ahead of the E.U. elections, the nonprofit group Avaaz found that three-fourths of respondents had seen misinformation on the platform.[vii] The European Commission also criticized Facebook’s lack of transparency about the effectiveness of steps taken to curb disinformation ahead of the election.[viii]

In the Philippines, Facebook staff trained Rodrigo Duterte’s campaign, which then used the platform to circulate disinformation, including a fake endorsement from the pope and a fake sex tape of a political opponent. Since winning, Duterte has paid armies of online trolls to harass, dox, and spread disinformation about journalists and political opponents on Facebook.[ix] Although Facebook has since organized safety and digital literacy workshops while hiring more Tagalog speakers, journalists still contend that Facebook hasn’t “done anything to deal with the fundamental problem, which is they’re allowing lies to be treated the same way as truth and spreading it…Either they’re negligent or they’re complicit in state-sponsored hate.”[x]

In Myanmar, military leaders have used Facebook since 2012 to inflame tensions between the country’s Buddhist majority and Muslim Rohingya minority.[xi] The United Nations said Facebook played a “determining role” in setting the stage for a military assault in 2016 that displaced at least 700,000 people.[xii] Facebook was reportedly warned of these dangers as early as 2013, but over two years later, it had hired just four Burmese speakers to review content in a country with 7.3 million active users at the time.[xiii] Over this period, a Facebook official also acknowledged that its systems struggled to interpret Burmese scripts, making it harder to identify hate speech.[xiv]

Even this partial record raises concerns. The Myanmar and the Philippines cases highlight the dangers of introducing and expanding platforms without first establishing the local infrastructure to mitigate the effects of hate speech and other dangerous incitement.[xv] In Brazil and Europe, even when Facebook made concerted efforts to mitigate the spread and impact of disinformation in elections, its measures were inadequate.[xvi]

As we approach critical elections in 2020, not only in the United States, but also in countries such as Egypt, Georgia, Iraq, and Sri Lanka, Facebook must swiftly adopt stronger policies to limit abuses of its platforms and to absorb lessons learned from the cases cited above.[xvii] I ask that you provide responses to the following questions by no later than April 1, 2020:

● What steps is Facebook taking to limit the virality of disinformation and hate speech?

● What has Facebook learned from its efforts to limit coordinated inauthentic behavior in the Brazilian and European Union elections? What new investments, policies, and other measures will Facebook adopt based on these cases?

● How does Facebook address disinformation spread by government officials or state-sponsored accounts, and does it adjust recommendation algorithms in these cases?

● How many content reviewers have you hired for different languages spoken by users?

● What steps has Facebook taken to improve its capacity to interpret non-English scripts to ensure its automated systems can detect content in violation of its community standards?

● Does Facebook have country-specific information about the average time content in violation of its community standards remained on the platform before its removal?

● Does Facebook conduct in-depth assessments, such as human rights audits, for the markets in which it operates? If so, how often does Facebook update these assessments?

● Beyond enforcing Facebook’s community standards, what steps does Facebook plan to take to protect vulnerable populations, such as journalists or ethnic, racial, and religious minorities, from threats or harassment on its platforms?

Thank you for your attention to these issues.


Michael F. Bennet
Author: Shirin Ghaffary
