Facebook ditches racially neutral approach to policing hate speech

Posts containing comments about 'whites' and 'men' will be marked as 'low sensitivity'

Facebook is changing its hate speech algorithms to prioritize the removal of posts targeting minority groups.

The initiative, dubbed the WoW Project, aims to improve Facebook's automated systems that find and immediately delete hate speech and racial slurs, which are prohibited on the website, The Washington Post reported, citing internal documents.


Hate speech and slurs directed toward Black, Muslim, biracial, Jewish and LGBTQ users will be considered "the worst of the worst" as part of the new project and algorithmic changes, according to the outlet. Facebook confirmed the plans to Fox News but said it will continue to review reported posts and take down all hate speech that violates its policies.

"We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious," a Facebook spokesperson said.


The spokesperson continued: "Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial. Thanks to significant investments in our technology we proactively detect 95% of the content we remove and we continue to improve how we enforce our rules as hate speech evolves over time."

Facebook said the project has been in the works since 2019 and that it relied on advice from outside experts and external research to build it, naming Susan Benesch's Dangerous Speech Project as an example.


The tech giant will use a new scoring system to rate offensive posts, and some may get more attention than they previously would have, according to the Post. The outlet cited the example of a statement such as "Gay people are disgusting," which would score higher than a statement such as "Men are pigs."

Additionally, algorithmic changes will give less priority to problematic posts and comments about "whites," "men" and "Americans." Users can still report harmful speech containing these words, but the company's automated systems will mark such posts as "low sensitivity" and may be slower to find and delete them.
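Neither Facebook nor the Post has published the underlying logic, but the reported behavior, in which severity tiers determine how quickly automated enforcement acts on a flagged post, can be illustrated with a short, purely hypothetical sketch. The tier names, scores and FlaggedPost structure below are illustrative assumptions, not Facebook's actual categories or code.

```python
# Hypothetical sketch of a severity-tiered enforcement queue.
# Tier names, scores and example posts are assumptions for illustration only;
# they do not reflect Facebook's actual categories, weights or implementation.

from dataclasses import dataclass

# Assumed severity tiers: higher scores are acted on first.
SEVERITY = {
    "worst_of_the_worst": 3,   # slurs targeting historically marginalized groups
    "standard": 2,             # other policy-violating hate speech
    "low_sensitivity": 1,      # deprioritized for automation, still user-reportable
}

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    tier: str  # one of the SEVERITY keys

    @property
    def priority(self) -> int:
        return SEVERITY[self.tier]

def enforcement_order(posts: list[FlaggedPost]) -> list[FlaggedPost]:
    """Sort flagged posts so higher-severity tiers are reviewed and removed first."""
    return sorted(posts, key=lambda p: p.priority, reverse=True)

if __name__ == "__main__":
    queue = [
        FlaggedPost("1", "example deprioritized comment", "low_sensitivity"),
        FlaggedPost("2", "example worst-of-the-worst slur", "worst_of_the_worst"),
        FlaggedPost("3", "example standard violation", "standard"),
    ]
    for post in enforcement_order(queue):
        print(post.post_id, post.tier)
```

In this toy model, a "low sensitivity" label only pushes a post down the queue rather than exempting it from review, which mirrors the reported behavior that such posts can still be reported and removed, just less quickly.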


As a result, the company will delete approximately 10,000 fewer posts per day, according to internal documents reviewed by The Washington Post.

"To me, this is confirmation of what we’ve been demanding for years, an enforcement regime that takes power and historical dynamics into account," Arisha Hatch, vice president of civil rights organization Color of Change, told the Post after reviewing the documents showing plans for the "WoW Project."

Previously, there was no such scorecard, and mild hate speech directed toward white people was addressed in the same way as anti-Semitic and racist hate speech and slurs.

Civil rights groups including Color of Change, the NAACP, the Anti-Defamation League and Sleeping Giants have been pushing Facebook to make significant changes to the way it addresses hate speech and hate groups.

Over the summer, the coalition organized an advertiser boycott of Facebook, the "Stop Hate for Profit" campaign, which led dozens of major brands to temporarily pull their ads from the platform.


The campaign's actions against Facebook came amid civil unrest across the U.S. following several police killings of Black men and the death of George Floyd in police custody in May.

Facebook has made changes to its political advertising and post-flagging policies in recent months. The company published the results of an independently conducted civil rights audit on July 8, which found it wasn't doing enough to eliminate misinformation.
