Facebook Inc (NASDAQ:FB) Using Artificial Intelligence to Tackle ‘Terrorist Content’

The company said it uses algorithms to identify related material that may also support terrorism.

Amid growing pressure from governments, Facebook says it has stepped up its efforts to address the spread of “terrorist propaganda” on its service by using artificial intelligence (AI).

The world’s largest social media network, with 1.9 billion users, has not always been so open about its operations, and its statement was met with skepticism from critics who say US technology companies have been slow to act.

Image matching: Using this technique, the AI identifies images or videos similar to those banned in the past and flags them before they reach an audience on Facebook. “We’re also learning over time, and sometimes we get it wrong”, Elliot Schrage, Facebook’s VP for public policy and communications, wrote in a blog post.
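Facebook has not published its matching algorithm, but the general idea can be illustrated with a perceptual “average hash”: a compact fingerprint that changes little when an image is resized or recompressed, so near-duplicates of previously removed images can be caught. The sketch below is a minimal illustration in Python; the 8×8 hash size, the threshold, and the banned_hashes set are assumptions, not Facebook’s implementation.

```python
# Minimal perceptual-hash sketch (assumed approach, not Facebook's code).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, grayscale, then set one bit per pixel
    brighter than the mean: a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_banned(path: str, banned_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is close to any known-banned one.
    banned_hashes is a hypothetical set built from removed content."""
    h = average_hash(path)
    return any(hamming(h, b) <= threshold for b in banned_hashes)
```

A flagged upload would then be held or routed for review before it is shown to other users.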

In a blog post, Facebook’s director of global policy management, Monika Bickert, and counterterrorism policy manager Brian Fishman gave a rare behind-the-scenes look at how the social media giant searches for and removes terrorist content.

How Should Platforms Keep Terrorists From Spreading Propaganda?

Finding terrorist clusters: The AI is designed to look for pages, posts, groups, and personal accounts that support terrorist content, on the premise that such material tends to cluster together, so an account connected to many flagged pages is itself a candidate for review.
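Facebook has not described which signals it uses, but the clustering idea can be sketched as graph expansion: start from accounts or pages already found to violate policy and walk their connections outward to surface related candidates for review. Everything below, including the graph layout, the function name, and the two-hop limit, is a hypothetical illustration.

```python
# Hypothetical cluster expansion over an account/page interaction graph.
from collections import deque

def expand_cluster(graph: dict[str, set[str]], seeds: set[str],
                   max_hops: int = 2) -> set[str]:
    """Breadth-first walk from known violating nodes; anything within
    max_hops of a seed becomes a candidate for human review."""
    frontier = deque((s, 0) for s in seeds)
    seen = set(seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return seen - seeds  # candidates only; removal still needs review

# Example: accounts tied to one flagged page are surfaced together.
graph = {"flagged_page": {"acct_a", "acct_b"}, "acct_a": {"acct_c"}}
print(expand_cluster(graph, {"flagged_page"}))  # {'acct_a', 'acct_b', 'acct_c'}
```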

In a separate security lapse, Facebook reportedly exposed the identities of some content moderators; 40 of the exposed moderators worked in the company’s counter-terrorism unit, and the company concluded that six of those workers’ profiles had likely been seen by potential terrorists.

Bickert said the company is taking strong new measures to sniff out fake accounts created by recidivist offenders. Facebook has also worked to keep terrorist activity off its other apps, including WhatsApp and Instagram.

Language understanding: Facebook is experimenting with analyzing text that has been flagged for praising terrorist organizations such as ISIS and al-Qaeda, in the hope of learning to automatically detect such content in the future. The company said it would grow its community operations team, which already numbers 4,500 people, by another 3,000 reviewers over the next year to review flagged content, and that it has more than 150 counterterrorism experts who collectively speak almost 30 languages.
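Facebook has not disclosed its text models, but learning from flagged text is conventionally framed as supervised classification. The sketch below assumes a hypothetical labeled corpus of already-reviewed posts and uses an off-the-shelf TF-IDF plus logistic-regression pipeline as a stand-in; high-scoring posts would be routed to human reviewers rather than removed automatically.

```python
# Supervised text-classification sketch (assumed framing; Facebook's
# actual models and training data are not public).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: posts moderators already reviewed, labeled
# 1 if they praised a banned organization, else 0.
train_texts = [
    "post praising a banned group and its leaders",
    "photos from our family vacation last week",
]
train_labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# New posts get a score; only high scores go to human review.
score = model.predict_proba(["example post text"])[0, 1]
print(f"probability of violating content: {score:.2f}")
```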

How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Bickert and Fishman wrote that they want Facebook to be a hostile place for terrorists, and that they believe technology, paired with human review, can make it one. A match generally means either that Facebook had previously removed that material, or that it had ended up in a database of such images that Facebook shares with Microsoft, Twitter, and YouTube.
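The format of that shared database has not been made public, but the lookup step can be sketched as a membership test over opaque content fingerprints, so that a file any member company already removed is caught on upload. The SHA-256 choice and the names below are illustrative assumptions.

```python
# Hypothetical shared-hash lookup (the consortium's real scheme is not public).
import hashlib

def fingerprint(data: bytes) -> str:
    """Opaque digest of a file's exact bytes."""
    return hashlib.sha256(data).hexdigest()

# Entries contributed by member companies after removing content.
shared_db = {fingerprint(b"bytes of a previously removed image")}

def previously_flagged(data: bytes) -> bool:
    """True if this exact file already appears in the shared database."""
    return fingerprint(data) in shared_db

print(previously_flagged(b"bytes of a previously removed image"))  # True
```

An exact digest only catches byte-identical files; in practice a perceptual hash like the earlier sketch would be needed to catch re-encoded or slightly edited copies.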

Recent terrorist attacks, many of them in Europe, have prompted governments, news anchors, and ordinary citizens to question technology’s role in helping terrorists spread their message and recruit new members.


Facebook also now converts deceased users’ accounts into memorial pages that can be managed by a loved one.
