Google and Facebook deploying copyright scanning technology to stamp out 'extremist' views
But they don't want to talk about it…
Facebook, Google and other social media companies are deploying the same scanning systems that they use (or are supposed to use) to identify copyrighted material against content deemed "extremist", in a bid to block it.
That is the claim of newswire Reuters, citing "two people familiar with the process". It describes the move as a "major step forward" for social media companies, which are coming under ever-greater pressure to stop their websites from being used to plug extremist propaganda. However, there are fears that the same technology could also be used to curb more mainstream views.
The technology was originally developed to automatically identify copyright-protected content on video sites so that it can be removed. It works by computing a hash, a unique digital fingerprint, for each video, allowing matches against known material to be identified and taken down quickly.
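As a rough illustration of that matching step, and not the companies' actual systems (which reportedly rely on fingerprints robust to re-encoding rather than plain cryptographic hashes), a minimal Python sketch might look like the following. The file name and the hash database here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints for previously flagged videos.
# Real systems use perceptual hashes that survive re-encoding and cropping;
# a SHA-256 digest is used here only to illustrate the lookup step.
BANNED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_banned(path: str) -> bool:
    """Return True if the upload's fingerprint matches a known banned hash."""
    return fingerprint(path) in BANNED_HASHES

if __name__ == "__main__":
    # "upload.mp4" is a placeholder; a match would be queued for human review.
    print(is_banned("upload.mp4"))
```

The point of the approach is that only previously identified material can be caught this way; new content still has to be flagged and added to the database by some other process.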
"The companies would not confirm that they are using the method or talk about how it might be employed, but numerous people familiar with the technology said that posted videos could be checked against a database of banned content to identify new postings of, say, a beheading or a lecture inciting violence," claimed Reuters.
It continued: "The two sources would not discuss how much human work goes into reviewing videos identified as matches or near-matches by the technology. They also would not say how videos in the databases were initially identified as extremist."
However, Seamus Hughes, deputy director of George Washington University's Program on Extremism, suggested that the issues involved are different from those around copyright infringement or videos depicting child sex abuse, which are "very clearly illegal". Extremist content, in contrast, exists on a spectrum, and different companies might want to draw the line in different places. The practice could also draw accusations of censorship.
The claims follow an initiative launched in April by the Counter Extremism Project, an "international policy organization formed to combat the growing threat from extremist ideology", which has offices in New York, Brussels and London.
They come after Dartmouth College Computer Science Professor and Counter Extremism Project Senior Advisor Dr Hany Farid went public in the US media about the development of technology that was, he said, intended to "remove the 'worst of the worst' extremist images, video, and audio from the Internet quickly and accurately".
Farid is also the creator of PhotoDNA, a Microsoft-backed system introduced in 2008 that detects child sex abuse images and videos as they are posted online.