SAN FRANCISCO, June 17, (Agencies): Facebook said it is ramping up the use of artificial intelligence in a push to make the social network “a hostile place” for extremists to spread messages of hate. Pressure has been building on Facebook, along with other internet giants, which stand accused of doing too little, too late to eradicate hate speech and jihadist recruiters from their platforms. In a joint blog post, the social network’s global policy management director Monika Bickert and counterterrorism policy manager Brian Fishman said Facebook was committed to tackling the issue “head-on.”
“In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online,” Bickert and Fishman said in the post. “We want Facebook to be a hostile place for terrorists,” they said, adding: “We believe technology, and Facebook, can be part of the solution.” They described how the network is automating the process of identifying and removing jihadist content linked to the Islamic State group, al-Qaeda and their affiliates, and intends to add other extremist organizations over time.
Artificial intelligence is being used, for instance, to recognize when a freshly posted image or video matches one known to have been previously removed from the social network — which counts nearly two billion users and involves more than 80 languages. Facebook is also experimenting with machine smarts to understand language well enough to identify words or phrases praising or supporting terrorism, according to the post. And the social network is using software to try to identify terrorism-focused “clusters” of posts, pages, or profiles. Facebook said it has also gotten better at detecting fake accounts created by “repeat offenders” previously booted from the social network for extremist content.
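The re-upload matching described above can be pictured as keeping fingerprints of removed media and checking new uploads against them. The minimal sketch below is purely illustrative — production systems of this kind rely on perceptual hashing that survives re-encoding and cropping, whereas this toy version uses an exact cryptographic digest, and all names here (`record_removal`, `is_known_removed`) are hypothetical, not Facebook's actual API.

```python
import hashlib

# Illustrative assumption: a set of digests for media already removed.
# A real system would use perceptual hashes, not exact byte digests.
removed_hashes = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

def record_removal(data: bytes) -> None:
    """Remember a removed image/video so identical re-uploads can be flagged."""
    removed_hashes.add(fingerprint(data))

def is_known_removed(data: bytes) -> bool:
    """True if a byte-identical file was removed before."""
    return fingerprint(data) in removed_hashes
```

An exact digest only catches byte-identical re-uploads; tolerating edits is what makes the real problem hard and is why perceptual hashing is used in practice.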
The effort extends to other Facebook applications, including WhatsApp and Instagram, according to Bickert and Fishman. Meanwhile, because AI can’t catch everything and sometimes makes mistakes, Facebook is also beefing up its manpower: it previously announced it would hire an extra 3,000 staff to track and remove violent video content. “We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly,” Bickert and Fishman said.
Facebook, Twitter, Microsoft and Google-owned YouTube announced a drive last December to stop the proliferation of jihadist videos and messages showing beheadings, executions and other gruesome content. But they remain under intense scrutiny, and G7 leaders last month issued a joint call for internet providers and social media firms to step up the fight against extremist content online. Facebook, meanwhile, is getting ready to explain itself.
The social media juggernaut on Thursday kick-started an effort to more openly debate questions of free speech and censorship, false and misleading news, and the impact social media has on democracy, announcing a series of posts that aims to explain the thinking and internal debates behind some of the company’s policies. “As more and more of our lives extend online, and digital technologies transform how we live, we all face challenging new questions — everything from how best to safeguard personal privacy online to the meaning of free expression to the future of journalism worldwide,” wrote Facebook VP of Public Policy and Communications Elliot Schrage in a blog post. “We debate these questions fiercely and freely inside Facebook every day — and with experts from around the world whom we consult for guidance,” he wrote. “We take seriously our responsibility — and accountability — for our impact and influence.” To start a public conversation around these subjects, and explain Facebook’s stance, the company will try to answer what Schrage called “hard questions.” A first post, also published Thursday, explored how social networks should fight the spread of terrorist propaganda online.