

The report follows increasing calls by WIRED and others for Facebook to provide greater insight into its moderation efforts. “I wish it had happened a little earlier,” says Michael Kenney, a counterterrorism expert at the University of Pittsburgh. “People in the counter-terrorism community have been talking about this for many years now.”

Pieces in Place

The report aside, Clarke says the clearest indication that Facebook wants to get this right came last year when it hired Fishman to lead its counterterrorism efforts. Fishman has a deep understanding of the online strategies deployed by Al-Qaeda and ISIS, and has used that expertise to help governments and nonprofits combat extremism. His background in academia will help Facebook apply policies and technologies backed by strong research.

Artificial intelligence is one of those technologies. Although Facebook already deploys such tools against copyright infringement and child pornography, the company has until now kept mum on how it might use AI to fight extremism.

In their post, Fishman and Bickert say Facebook's AI team trains its algorithms to identify extremist images and language, automatically delete new accounts created by banned users, and identify terrorist clusters.
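Facebook has not published the internals of any of these systems. As a rough illustration of the second task, catching banned users who return, here is a minimal sketch in Python; the signal names (device_hash, signup_ip, display_name) and the matching rules are hypothetical stand-ins, not Facebook's actual features:

    # Hypothetical recidivism check: compare a new signup's signals
    # against fingerprints retained from previously banned accounts.
    from difflib import SequenceMatcher

    BANNED_FINGERPRINTS = [
        {"device_hash": "a3f9c21b", "signup_ip": "203.0.113.7",
         "display_name": "Example Name"},
    ]

    def looks_like_banned_user(signup, name_threshold=0.8):
        """Return True if a signup's signals overlap a banned fingerprint."""
        for fp in BANNED_FINGERPRINTS:
            exact = sum(signup.get(k) == fp[k]
                        for k in ("device_hash", "signup_ip"))
            name_sim = SequenceMatcher(None, signup.get("display_name", ""),
                                       fp["display_name"]).ratio()
            # Two exact matches, or one plus a near-identical name, trips it.
            if exact == 2 or (exact == 1 and name_sim >= name_threshold):
                return True
        return False

The intuition is that a banned user who returns tends to reuse devices, network locations, or naming patterns, so overlap across several weak signals adds up to a strong hint.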

“We know from studies of terrorists that they tend to radicalize and operate in clusters,” the authors write. “This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to 'fan out' to try to identify related material that may also support terrorism.”
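The post doesn't describe the mechanics of that fan-out, but one plausible reading is a bounded traversal of the graph connecting pages, groups, posts, and profiles. In this minimal sketch, the relationship graph and the two-hop cap are invented for illustration:

    from collections import deque

    def fan_out(graph, flagged, max_hops=2):
        """Breadth-first 'fan out' from flagged nodes, collecting
        related pages/groups/profiles within max_hops for review."""
        seen = set(flagged)
        queue = deque((node, 0) for node in flagged)
        candidates = []
        while queue:
            node, hops = queue.popleft()
            if hops == max_hops:
                continue
            for neighbor in graph.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    candidates.append((neighbor, hops + 1))
                    queue.append((neighbor, hops + 1))
        return candidates  # (node, distance from a flagged seed)

    # Toy example: a flagged page fans out to its group and members.
    graph = {
        "page_a": ["profile_1", "group_x"],
        "group_x": ["profile_2", "profile_3"],
    }
    print(fan_out(graph, ["page_a"]))

Capping the traversal at a couple of hops keeps the candidate list manageable; presumably a list like this feeds a review pipeline rather than triggering automatic removals.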

Clarke and Kenney applaud the effort. “The emphasis on clusters shows they’ve clearly studied the background, and these are people who know the empirical evidence on terrorism,” says Clarke. “The amount of time it would take for a human to sift through all this stuff is not feasible.”

Humans do sift through a lot of that stuff, though. Facebook’s global moderation team reviews flagged materials and blocks accounts when necessary. (The company plans to add 3,000 people to the team in coming months.) The grueling job pays little, and comes with psychological and physical risks. The Guardian reported that for one month last year, an error in Facebook’s code revealed the identities of moderators who had banned jihadists from the site. “That is a huge mistake,” Clarke says.

Some of the moderators fled their homes, fearful of retribution. Facebook said it would consider having moderators use administrative accounts rather than their personal profiles.

No Easy Fix

All of which underscores the many ways counterterrorism efforts can go awry. Facebook is hardly alone here—it works with other social media platforms and companies to tackle the problem, using shared technology that fingerprints extremist images and videos. But as these platforms crack down, they risk simply pushing the problem somewhere else.
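That shared technology amounts to a database of fingerprints: each known extremist image or video is reduced to a compact hash, so a platform can match re-uploads without ever redistributing the media itself. The companies haven't disclosed which hashing scheme they use; a simple difference hash built on the Pillow library is a stand-in for illustration:

    from PIL import Image  # pip install Pillow

    def dhash(path, hash_size=8):
        """Difference hash: a 64-bit perceptual fingerprint of an image."""
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits

    def matches_blocklist(path, blocklist, max_distance=5):
        """Flag an upload whose hash is within a few bits of a known one."""
        h = dhash(path)
        return any(bin(h ^ known).count("1") <= max_distance
                   for known in blocklist)

Unlike a cryptographic hash, a perceptual hash changes only a few bits when an image is resized or re-encoded, which is why the match tolerates a small Hamming distance.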

“There is a potential cost: as more and more people with bad intentions get pushed to the dark web, does that lower our capacity to follow and monitor and potentially disrupt their activities?” Kenney says.

Facebook understands the risk because it owns WhatsApp, an encrypted chat application terrorists can use to communicate securely. “Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages — but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies,” Fishman and Bickert write.
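The constraint they describe is inherent to the design: with end-to-end encryption, the keys live only on the two devices, so the service in the middle relays ciphertext and sees routing metadata but never plaintext. A minimal sketch using the PyNaCl library (WhatsApp actually runs the more elaborate Signal protocol):

    # pip install pynacl -- why the relay can't read the message.
    from nacl.public import PrivateKey, Box

    alice_key, bob_key = PrivateKey.generate(), PrivateKey.generate()

    # Alice encrypts with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # The server relays the ciphertext, seeing only metadata (sender,
    # recipient, timestamp). Decryption needs Bob's private key,
    # which never leaves his device.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"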

And in the grand scheme of things, online extremism is only one part of the broader story of terrorism, which spreads mostly through face-to-face interactions.

All of which is another way of saying: It's complicated. And every complication reveals a new complication. Will these efforts be enough to stop terrorism from spreading online? No. Will there be mistakes? Absolutely. But Facebook is addressing the problem. And it's finally explaining how.
