The companies are to create a shared database of unique digital fingerprints – known as “hashes” – for images and videos that promote terrorism. This could include terrorist recruitment videos or violent terrorist imagery or memes. When one company identifies and removes such a piece of content, the others will be able to use the hash to identify and remove the same piece of content from their own networks.
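The mechanism can be illustrated with a toy sketch. The companies did not publish their fingerprinting scheme or API, so everything below is an assumption for illustration: a cryptographic hash (SHA-256) stands in for the real fingerprint, and the `SharedHashDatabase` class and its method names are invented. Production systems use perceptual hashes (such as Microsoft's PhotoDNA) that match content even after resizing or re-encoding, which a cryptographic hash cannot do.

```python
import hashlib


class SharedHashDatabase:
    """Toy model of a shared hash database (illustrative only;
    the real system's hashing scheme was not made public)."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # SHA-256 stands in for the unspecified fingerprint function.
        # Real deployments use perceptual hashing, which tolerates
        # re-encoding; an exact hash like this does not.
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes) -> str:
        """One company flags content; its hash enters the shared pool."""
        h = self.fingerprint(content)
        self._hashes.add(h)
        return h

    def is_flagged(self, content: bytes) -> bool:
        """Any participating company can check new uploads
        against the shared pool without seeing the original file."""
        return self.fingerprint(content) in self._hashes


# One company removes a piece of content and shares its hash...
db = SharedHashDatabase()
db.flag(b"example-flagged-video-bytes")

# ...and another company can now match the same content on upload.
print(db.is_flagged(b"example-flagged-video-bytes"))  # True
print(db.is_flagged(b"unrelated-content"))            # False
```

Note that only hashes are exchanged, not the media itself, which is part of the appeal: companies can cooperate on detection without redistributing the offending material.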
“We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online,” said the companies in a shared statement.
Because the companies have different policies on what constitutes terrorist content, they will start by sharing hashes of “the most extreme and egregious terrorist images and videos”, as these are most likely to violate “all of our respective companies’ content policies”, they said.
The precise technical details remain to be established, said Facebook, although the approach echoes that adopted to tackle child sexual abuse imagery. The same companies use the National Center for Missing and Exploited Children’s PhotoDNA technology, developed by Microsoft, to identify images of child sexual abuse. However, with PhotoDNA the images are categorized centrally by law enforcement and the technology companies are legally obliged to remove the content.
Earlier this year Hany Farid, the computer scientist who helped develop PhotoDNA, proposed a sister program for extremist content. He teamed up with the Counter Extremism Project to develop a system that could proactively flag extremist photos, videos and audio clips as they are posted online.
“We are happy to see this development. It’s long overdue,” he told the Guardian, explaining that he has been in conversations with Facebook and Microsoft since January.
Despite welcoming the announcement he remained cautious, particularly because of the lack of an impartial body to monitor the database: “There needs to be complete transparency over how material makes it into this hashing database and you want people who have expertise in extremist content making sure it’s up to date. Otherwise you are relying solely on the individual technology companies to do that.”
The strength of PhotoDNA comes from the single central database, he said. “If it’s removed from one site, it’s removed everywhere. That’s incredibly powerful. It’s less powerful if it gets removed from Facebook and not from Twitter and YouTube.
“What we want is to eliminate this global megaphone that social media gives to groups like Isis. This doesn’t get done by writing a press release.”
Technology companies have been under pressure from governments around the world over the spread of extremist propaganda online from terror networks such as Isis.
In January top White House officials met with representatives from Apple, Facebook, Twitter and Microsoft to explore ways to tackle terrorism.
“We are interested in exploring all options with you for how to deal with the growing threat of terrorists and other malicious actors using technology, including encrypted technology,” said a briefing document released before the secretive summit.
“Are there technologies that could make it harder for terrorists to use the internet to mobilize, facilitate, and operationalize?”
Facebook said the latest initiative was not the direct result of the January meeting. But it said all the companies agreed there was no place for content that promotes or supports terrorism on their networks.
This article was written by Olivia Solon in San Francisco, for theguardian.com on Tuesday 6 December 2016 01.47 Europe/London. © Guardian News and Media Limited 2016