For every 10,000 content views on Facebook in the third quarter, about 10 included hate speech, the social media company said, disclosing the prevalence of hate speech on its platform for the first time in its history.
The world's largest social media company released the estimate in its quarterly content moderation report, even as it faces scrutiny over its handling of abuse and hate speech on the platform, particularly around November's US presidential election.
In the third quarter, Facebook took action against 22.1 million pieces of hate speech content, about 95 per cent of which it identified proactively, compared with 22.5 million in the previous quarter.
Taking action, for Facebook, means removing the content, masking it with a warning, disabling it, or escalating it to external agencies, the company said.
Civil rights groups organised a widespread advertising boycott against Facebook this summer in an attempt to pressure the company to act against hate speech.
Since then, Facebook has agreed not only to disclose the hate speech metric but also to submit its enforcement record to an independent audit. The metric is calculated by examining a representative sample of content seen on Facebook.
The audit would be completed “over the course of 2021”, said Guy Rosen, Facebook’s head of safety and integrity, on a call with reporters.
The Anti-Defamation League, one of the groups behind the ad boycott, said Facebook’s new metric still lacked sufficient context for a full assessment of its performance.
“We still don’t know from this report exactly how many pieces of content users are flagging to Facebook — whether or not action was taken,” said ADL spokesman Todd Gutnick. Such data is important because “there are many forms of hate speech that are not being removed, even after they’re flagged”, Gutnick said.
Facebook’s rivals Twitter and YouTube, which is owned by Alphabet’s Google, do not disclose comparable prevalence metrics.
Facebook also removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies between March 1 and the November 3 election, Rosen said.
Facebook announced in October that it was updating its hate speech policy to ban content that denies or distorts the Holocaust, in contrast to what Chief Executive Mark Zuckerberg had previously said publicly about what should be allowed on the platform.
In the third quarter, Facebook said it took action on 19.2 million pieces of violent and graphic content, up from 15 million in the second quarter.
On Instagram, it took action against 4.1 million pieces of violent and graphic content.
The US Congress grilled Zuckerberg and Twitter CEO Jack Dorsey earlier this week on the companies’ content moderation practices.
Media reports published last week claimed that Zuckerberg told an all-staff meeting that a post by former Trump White House adviser Steve Bannon, in which Bannon urged the beheading of two US officials, had not violated enough of the company’s policies to warrant a suspension.
The social media platform has also been criticised in recent months for allowing large Facebook groups that shared false election claims and violent rhetoric to gain traction.
(Adapted from NDTV.com)