Facebook said Thursday that its artificial intelligence tools and systems now detect and remove 95% of hate speech content before users can report it.
Facebook, which has faced repeated criticism from civil rights groups and other human rights organizations about hate speech on its platform, said its AI systems can review posts in various contexts and multiple languages.
The company has spent the last few years developing and using the AI technology.
“When we first began reporting our metrics for hate speech, in [fourth quarter] of 2017, our proactive detection rate was 23.6%,” Arcadiy Kantor, Facebook product manager for integrity, said in a statement Friday. “Today we proactively detect about 95% of hate speech content we remove.
“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritize the more nuanced cases, where context needs to be considered, for our reviewers.”
Facebook said its efforts have greatly reduced hate speech on the platform, which it measures by reviewing representative samples of content views.
“Because hate speech depends on language and cultural context, we send these representative samples to reviewers across different languages and regions,” Kantor added.
“Based on this methodology, we estimated the prevalence of hate speech from July 2020 to September 2020 was 0.10% to 0.11%. In other words, out of every 10,000 views of content on Facebook, 10 to 11 of them included hate speech.”
Facebook has recently taken more aggressive steps to eliminate hate speech, including changing policies to ban posts about things like Holocaust conspiracy theories and fringe groups like QAnon.
Copyright 2020 United Press International, Inc. (UPI). Any reproduction, republication, redistribution and/or modification of any UPI content is expressly prohibited without UPI’s prior written consent.