Facebook turns to AI to fight terrorism online

Facebook has been quietly using an arsenal of artificially intelligent tools to help identify and remove extremist content before it can be seen by the larger community, according to the company.

But any goodwill earned by that announcement seems to have lasted less than a day, as a report revealed on Friday that a “bug” affecting more than 1,000 Facebook content moderators had inadvertently exposed some of their identities to suspected terrorists. The company maintained that it takes the issue seriously and that terrorists should not have a voice on social media.

Facebook has hired more than 150 counter-terrorism specialists in its fight against terrorists, in addition to developing and deploying smart technology. “The machine learning algorithms work on a feedback loop and get better over time”, the company said.

“Just as terrorist propaganda has changed over the years, so have our enforcement efforts”, it added.
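Facebook does not say how that feedback loop is built, but the general mechanic, a classifier that is refit each time a human reviewer confirms or rejects one of its flags, can be sketched in a few lines of Python. Everything below, the data, the helper and the model choice, is hypothetical and for illustration only, not a description of Facebook’s systems:

```python
# A minimal, hypothetical sketch of a moderation feedback loop:
# each human reviewer decision is appended to the training data
# and the classifier is refit, so it "gets better over time".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy seed corpus already labelled by moderators (1 = violating).
texts = ["join our fight, brothers", "holiday photos from Rome"]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def feed_back(post: str, moderator_label: int) -> None:
    """Fold a reviewer's verdict back into the model."""
    texts.append(post)
    labels.append(moderator_label)
    model.fit(texts, labels)  # retrain on the enlarged corpus

# A flagged post is reviewed; the verdict updates the model.
feed_back("new recruitment message, join the fight", 1)
print(model.predict(["another recruitment message"]))
```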

The company also uses algorithms to try to identify “clusters” of terrorists, by finding accounts that appear to be associated with or similar to disabled terrorist accounts.
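The post gives no technical detail on that clustering, but one plausible reading, comparing feature vectors of active accounts against those of already-disabled ones and queueing close matches for review, might look like this sketch. The features, accounts and threshold are invented for illustration:

```python
# Illustrative sketch only: flagging accounts that look similar to
# already-disabled ones. Features and thresholds are hypothetical
# (e.g. shared friends, pages liked, signup metadata).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Feature vectors of accounts disabled for terrorism.
disabled = np.array([[1.0, 0.9, 0.0], [0.8, 1.0, 0.1]])
# Feature vectors of currently active accounts.
active = {"acct_a": [0.9, 0.95, 0.05], "acct_b": [0.0, 0.1, 1.0]}

THRESHOLD = 0.9  # hypothetical similarity cutoff for human review

for name, vec in active.items():
    score = cosine_similarity([vec], disabled).max()
    if score > THRESHOLD:
        print(f"{name}: similarity {score:.2f} -> queue for review")
```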

The company is also experimenting with AI to understand text that appears to support terrorist organisations, with the aim of developing text-based signals that may indicate terrorism-related content.
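Facebook has not described this work either, but “text-based signals” in this setting often means the words and phrases a supervised model learns to weight toward the violating class. A toy sketch under that assumption, with invented training text and labels:

```python
# Illustrative sketch only: deriving "text-based signals" as the
# n-grams a linear model weights most heavily toward the violating
# class. All training data here is hypothetical toy text.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "praise the fighters, join the cause",
    "breaking news report on the attack",
    "join the fighters abroad",
    "cooking tips for a weeknight dinner",
]
labels = [1, 0, 1, 0]  # 1 = supportive of a terrorist group (toy labels)

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# The highest-weighted n-grams act as candidate text signals.
terms = np.array(vec.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-5:]
print("candidate signals:", terms[top])
```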

Furthermore, the company reiterated its continued work with governments, global bodies and other companies in the sector to take down extremist content and trace terrorists online. “We are now really focused on using technology to find this content so that we can remove it before people are seeing it”, Monika Bickert, a former federal prosecutor who helps lead Facebook’s efforts, told USA Today.

The company published a detailed news post this week that provides the most complete explanation yet of the approach it takes when dealing with terrorism. After the terrorist attack in London this month, British Prime Minister Theresa May attacked Web companies for providing a “safe space” for people with violent ideologies.

“We don’t want terrorists to have a place anywhere in the family of Facebook apps”, the company wrote.

When newly uploaded images match known extremist material, it generally means either that Facebook had previously removed that material or that it is in a database of such images that the company shares with YouTube, Twitter and Microsoft.
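The article does not describe the matching mechanism, but that shared database is widely understood to hold digital fingerprints (hashes) of known extremist images. Here is a deliberately simplified sketch using exact SHA-256 digests; real systems use perceptual hashes that survive re-encoding and cropping:

```python
# Simplified, hypothetical sketch of checking an upload against a
# shared database of fingerprints of previously removed images.
# SHA-256 only matches byte-identical files and is used here purely
# for illustration; production systems use perceptual hashing.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical shared database, seeded with one removed image.
known_bad = b"bytes of a previously removed propaganda image"
shared_hash_db = {fingerprint(known_bad)}

def matches_known_material(image_bytes: bytes) -> bool:
    """True if the upload matches previously removed content."""
    return fingerprint(image_bytes) in shared_hash_db

print(matches_known_material(known_bad))        # True: block or review
print(matches_known_material(b"a fresh photo"))  # False: allow
```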

“This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place”, a ministry spokesman said on Thursday.

But the company concedes that, for now, AI cannot catch everything, and its algorithms are not yet as good as people at understanding terrorist-related context.

“A photo of an armed man waving an IS flag might be propaganda or recruiting material, but could be an image in a news story”, the company explained.

Facebook’s community operations team reviews content reported by users to ensure it does not violate the company’s policies.

Pressure had been mounting on Facebook and other internet giants, which stand accused of doing too little, too late to eliminate hate speech and jihadist recruiters from their platforms.