After a string of terrorist attacks in Western countries over the past couple of years, major technology companies have faced criticism from European government officials, advertiser boycotts, and lawsuits from family members of victims killed in terrorist attacks. In response, these companies are stepping up their counterterrorism efforts.
On Thursday, Facebook announced it will start using artificial intelligence to identify and remove posts that support extremism and terrorism from its website. The social network's AI will automatically detect extremist propaganda, language, and clusters of terrorist accounts.
And while Facebook hopes its AI will learn to identify and shut down extremist images and phrases, the company still plans to rely on humans to fight terror, pointing to its in-house counterterrorism specialists and growing content review teams as evidence.
Monika Bickert, a former federal prosecutor who is now the Director of Global Policy Management at Facebook, said, “Just as terrorist propaganda has changed over the years, so have our enforcement efforts. We are now really focused on using technology to find this content so that we can remove it before people are seeing it. We want Facebook to be a very hostile environment for terrorists and we are doing everything we can to keep terror propaganda off Facebook.”
Facebook isn’t the only technology company using its influence to crack down on terrorism.
In December 2016, Microsoft, Twitter, and YouTube teamed up with Facebook to create a shared database of terrorist propaganda and recruitment material. The database allows all four companies to quickly identify and take down extremist content. And from the end of 2015 through 2016, Twitter suspended over 600,000 accounts for promoting terrorism.
Despite these efforts, however, terrorist groups are rapidly finding other ways to recruit and communicate.
One such way is through Telegram, an instant messaging app that has amassed over 100 million active users in two and a half years. According to experts, jihadi terrorists are using Telegram to communicate because the app not only allows encrypted messaging but also includes secret chat rooms where extremist groups can spread their messages. Soon after this was revealed, Telegram created an “ISIS Watch” channel where users can report ISIS communications. In a post on Twitter, Telegram wrote that it blocks “over 60 ISIS-related channels before they get any traction, more than 2,000 channels each month.”
While Facebook’s efforts should be applauded, not everyone is confident the new policies will do enough to actually fight terrorist ideology.
Others question the limits of Facebook’s focus on extremism and terrorism: if the company wants to be “a very hostile environment for terrorists,” shouldn’t it also be a hostile place for white nationalists and neo-Nazis?
Read Facebook’s post on how they counter terrorism here.
Danita White for TechFunnel.com. Grant Suneson of Newsy contributed to this report.