UK unveils new technology to fight extremist content online
The British government is unveiling new technology designed to remove extremist material from social media, amid mounting pressure on companies like Facebook and Twitter to do more to remove such content from their platforms.
The software, developed by ASI Data Science with funding from the government, was announced Tuesday by Home Secretary Amber Rudd ahead of meetings with technology executives and U.S. Secretary of Homeland Security Kirstjen Nielsen this week in Silicon Valley. The program will be shared with smaller companies that don't have the resources to develop such technology, the Home Office said.
"I hope this new technology the Home Office has helped develop can support others to go further and faster," Rudd said before the meetings. "The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society."
Governments and law enforcement agencies have been pressing social media companies to do more to prevent extremists from using their sites to promote violence and hatred. British Prime Minister Theresa May has called on internet companies to remove extremist propaganda from their sites in less than two hours.
But extremist content is only one type of objectionable content on the internet, with governments struggling to stem the flow of everything from child pornography to so-called fake news. The importance of the battle was underscored during the 2016 U.S. presidential election, when Russian entities sought to influence the outcome by placing thousands of ads on social media that reached some 10 million people on Facebook alone.
Social media companies have struggled to respond. Because the companies see themselves not as publishers but as platforms for other people to share information, they have traditionally been cautious about taking down material.
Amid growing pressure, Facebook, Twitter, Google and its unit YouTube last year created the Global Internet Forum to Combat Terrorism, which says it is committed to developing new content-detection technology, helping smaller companies combat extremism and promoting "counter-speech," content meant to blunt the impact of extremist material.
Unilever, a global consumer products company and one of the world's largest advertisers, on Monday demanded results, saying it wouldn't advertise on platforms that do not make a positive contribution to society. Its chief marketing officer, Keith Weed, said he's told Facebook, Google, Twitter, Snap, and Amazon that Unilever wants to change the conversation.
"Consumers ... care about fraudulent practice, fake news, and Russians influencing the U.S. election," he said at a digital advertising conference, according to excerpts of a speech provided by Unilever. "They don't care about good value for advertisers. But they do care when they see their brands being placed next to ads funding terror, or exploiting children."
So far, though, the technology needed to detect and remove dangerous posts hasn't kept up with the threat, experts say. Removing such material still requires human judgment, and artificial intelligence has not proved capable of distinguishing, for example, between a news article about the so-called Islamic State group and propaganda posted by the group itself.
The software being unveiled Tuesday is aimed at stopping the vast bulk of such material before it goes online.
Marc Warner, CEO of ASI Data Science, which helped develop the technology, said the social media giants can't solve this problem alone.
"The way to fight that is to cut the propaganda off at the source," he said. "We need to prevent all of these horrible videos ever getting to the sort of people that can be influenced by them."
Tests of the program show it can identify 94 percent of IS propaganda videos, according to the Home Office, which provided some 600,000 pounds ($833,000) to fund the software's development.
But experts on extremist material say that even if the software works perfectly, it will not come close to removing all Islamic State material online.
Charlie Winter, a senior research fellow at the International Center for the Study of Radicalization at King's College London, said the program focuses only on video, and video is only a small portion of "the Islamic State corpus."
"I think it's a positive step but it shouldn't be considered a solution the problem," he said. "There's so much more that needs to be done."