NRL News

Facebook announces progress in developing AI to silence ‘hate speech’

May 19, 2020

The social media platform says it now detects and removes 88.8 percent of what it considers ‘hate speech.’

By Calvin Freiburger

Facebook recently announced a new set of advances in the development of artificial intelligence systems to police “hateful” conduct on its platform, despite the social media giant’s checkered history of algorithm-based content moderation flagging mainstream speech.

Facebook claims in a May 12 blog post that its current AI “proactively detects 88.8 percent of the hate speech content we remove,” and that it “took action on 9.6 million pieces of content for violating our hate speech policies” in just the first three months of 2020. This was made possible, the company says, by giving its technology a “deeper semantic understanding of language, so our systems detect more subtle and complex meanings,” and by “broadening how our tools understand content, so that our systems look at the image, text, comments, and other elements holistically.”

The post goes on to detail the challenges of programming AI to capture the various nuances of human communication, and the technical work involved in bringing Facebook’s systems closer to doing so without actual humans reading the content in question.

“Facebook has established clear rules on what constitutes hate speech, but it is challenging to detect hate speech in all its forms; across hundreds of languages, regions, and countries; and in cases where people are deliberately trying to avoid being caught,” the company says. “As we improve our systems to address these challenges, it’s crucial to get it right. Mistakenly classifying content as hate speech can mean preventing people from expressing themselves and engaging with others. Counterspeech — a response to hate speech that may include the same offensive terms — is particularly challenging to classify correctly because it can look so similar to the hate speech itself.”

“These challenges are far from solved, and our systems will never be perfect,” Facebook concludes. “But by breaking new ground in research, we hope to make further progress in using AI to detect hate speech, remove it quickly, and keep people safe on our platforms.”

However, the premises that social media users need to be “kept safe” from offensive speech, or that efforts to do so are a net positive, remain hotly disputed.

For years, Facebook has been criticized for suppressing or otherwise discriminating against many right-of-center pages and posts, while multiple analyses have found that Facebook’s algorithm changes instituted at the beginning of 2018 disproportionately impacted conservative politicians and websites. Last year, an insider revealed that Facebook “deboosts” traffic to several mainstream conservative sites.

Facebook often reverses such censorship actions after sufficient media coverage pressures it to do so, insisting they were isolated errors rather than part of a willful pattern. Still, such incidents continue to happen, and conservatives say they do not trust Facebook’s Oversight Board, which is largely staffed by left-wing figures, to protect their freedom of speech.

Most recently, Facebook has made itself an arbiter of “misinformation” and “harmful” speech related to the COVID-19 outbreak. Critics object that Facebook has crossed the line from merely quashing objectively false claims to censoring legitimate protest organizing and factual information that conflicts with the interests of the World Health Organization, which has been accused of covering for the Chinese government’s complicity in letting COVID-19 spread across the world.

Editor’s note. This appeared at LifeSiteNews and is reposted with permission.

Categories: Media Bias