Trump wants social media to detect mass shooters before they commit crimes

Trump wants social media companies to work “in partnership” with law enforcement to find mass shooters ahead of time. That’s easier said than done.

What’s more likely is that all sorts of speech — and people — would get swept up in the technology dragnets Trump seems to be proposing.

After mass shootings in both Dayton, Ohio, and El Paso, Texas, this weekend, President Donald Trump called on government organizations as well as social media companies to “develop tools that can detect mass shooters before they strike.”

Social media platforms like Facebook, YouTube, and Twitter already detect and delete terrorist content. What’s new is that Trump’s statement specifically called for them to work “in partnership” with the Department of Justice and law enforcement agencies. The president’s comments have prompted questions about how this partnership would work, whether it would be effective, and what impact it could have on Americans’ civil liberties.

It’s also not clear whether social media companies will start trying to identify the warning signs of a potential mass shooter, before any direct threat is made, in order to alert authorities.

Recode contacted the White House to clarify whether Trump’s statement means he’s asking social media companies to proactively report potential domestic terrorists but did not receive a response.

Facebook pointed us to its Community Standards Enforcement Report, which says the company already notifies law enforcement in cases of a “specific, imminent and credible threat to human life.” YouTube and Twitter also pointed us to their community guidelines but didn’t comment on whether they would proactively work with law enforcement to alert them to potential mass shooters.

“I think trying to use automated tools to predict [mass shootings] is a bad idea and wouldn’t work nearly as well as people who think tech is magic would like you to think it would,” Electronic Frontier Foundation Technology Projects Director Jeremy Gillula told Recode. “Tech is not a magic solution to society’s problems. You have to fix society at large.”

The FBI nevertheless appears to be working on exactly that kind of tool. Earlier this month, the agency put out a request for proposals for a “social media early alerting tool in order to mitigate multifaceted threats.” The proposal explains that the tool would “proactively identify and reactively monitor” social media to “enable the Bureau to detect, disrupt, and investigate an ever growing diverse range of threats to U.S. National interests.”

If such a tool is developed and used as Trump has suggested — to try to predict mass shooters before they act — it’s unlikely that it would work.

As Vox’s Brian Resnick and Javier Zarracina demonstrated, however commonplace mass shootings in the US may seem, mass shooters are still exceedingly rare, and even a hypothetical model with 99 percent accuracy would not be enough to effectively pinpoint mass shooters in a population of 320 million people.

As Ben Wizner, a lawyer for the ACLU, put it, “The problem with that is we don’t yet have the tech to determine pre-crime, Minority Report notwithstanding. We need to understand that even if all mass shooters have said X, the vast majority of people who have said X don’t become mass shooters.”
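To make that base-rate arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 99 percent figure echoes Vox’s made-up model; the assumption of roughly 50 actual mass shooters in a given year is our own, purely for illustration:

```python
# Back-of-the-envelope illustration of the base-rate problem described above.
# All inputs are assumptions: a hypothetical detector that is 99% accurate in
# both directions, and roughly 50 actual mass shooters in a population of
# 320 million.

population = 320_000_000
actual_shooters = 50          # assumed for illustration; mass shooters are extremely rare
sensitivity = 0.99            # P(flagged | shooter)
specificity = 0.99            # P(not flagged | not a shooter)

true_positives = sensitivity * actual_shooters
false_positives = (1 - specificity) * (population - actual_shooters)

# Of everyone the tool flags, what fraction are actually shooters?
precision = true_positives / (true_positives + false_positives)

print(f"People wrongly flagged: {false_positives:,.0f}")          # ~3.2 million
print(f"Shooters correctly flagged: {true_positives:.0f}")        # ~50
print(f"Chance a flagged person is a shooter: {precision:.4%}")   # ~0.0015%
```

Under those assumptions, a “99 percent accurate” tool would flag about 3.2 million innocent people to catch roughly 50 shooters: tens of thousands of false alarms for every real threat.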

What’s more likely to happen is that all sorts of speech — and people — would get swept up in the technology dragnets Trump seems to be proposing.

“It is possible that there are certain signals that let you know if an attack is happening,” Heidi Beirich, director of the Intelligence Project at the Southern Poverty Law Center, told Recode. “The question is, Can you look for those things and still guarantee civil and constitutional rights?”

She added, “Obviously the FBI does need to do some sort of scouring of extremist sites, but this is going to have to be very carefully conducted if we want to protect civil rights and civil liberties. When I think of Trump, I don’t think of him as the kind of person who’s going to do that.”

While the First Amendment bars the government from restricting speech unless it directly incites violence, tech companies are under no such obligation. Social media platforms employ a mix of artificial intelligence and human moderators to identify and remove terrorist content, with mixed results.

Facebook uses artificial intelligence, machine learning, and computer vision to find and delete terrorist content before it’s reported by users — though after it’s been posted to Facebook. The company also uses technology to identify potentially suicidal people based on what they post on Facebook and how their family and friends react to those posts. Human moderators then review posts flagged by this method and decide whether they should send the poster support options. In serious cases, Facebook says it contacts authorities to do wellness checks.

Similarly, YouTube relies on a combination of reported and automated mechanisms to flag terrorist content, which is then reviewed and deleted by humans. The company also takes steps to make sure such content doesn’t spread.

Twitter used to rely on user reports to take down extremist content, but is increasingly using software to surface it. In its latest reporting period, Twitter suspended 166,513 unique accounts for violations related to the promotion of terrorism, more than 90 percent of which were surfaced using “internal, proprietary tools.”

Facebook, YouTube, and Twitter, among other tech companies, founded the Global Internet Forum to Counter Terrorism in 2017, with the mission to “substantially disrupt terrorists’ ability to promote terrorism, disseminate violent extremist propaganda, and exploit or glorify real-world acts of violence using our platforms.”

The path from stopping terrorists’ posts to stopping their actions, however, is far from clear.

Author: Rani Molla
