YouTube Relying More On Tech For Vetting Content With Moderators Remaining At Home

There has been a sharp rise in videos removed from the video-sharing platform YouTube, including some that had not violated its policies, because of the firm's increasing reliance on technology to moderate content, YouTube said on Tuesday.

More than 11.4 million videos were removed from the platform for violating its policies between April and June, the Google-owned company said. That number is three times greater than what it had removed in the previous three months.

Spam, nudity and child safety were the reasons for the removal of most of the videos, YouTube said. The company defines child safety violations as behavior that is harmful or exploitative to children, such as videos depicting abuse, or dares and challenges that could endanger minors.

The spike in removed videos coincided with YouTube's decision to make greater use of technology to vet and take down harmful content, given that most of the human reviewers the company employs had to be sent home during the three-month period because of the novel coronavirus pandemic.

This resulted in the company over-enforcing its removal policies, it said. Appeals for reinstatement were filed for about 325,000 of the removed videos, and almost half of those were later reinstated after the company ascertained that they had not violated any of its policies and rules.

“When reckoning with greatly reduced human review capacity due to Covid-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement,” YouTube said in a blog post.

Another reason for the increase in removals was more flagging of controversial videos by users, more of whom are now at home because of the pandemic and are both uploading and flagging videos, said a YouTube spokesperson. However, the company did not provide data on the growth in uploaded content during the quarter, nor has it ever provided statistics showing what percentage of total videos it takes down. That makes it difficult to ascertain the actual scale of the company's enforcement efforts.

Facebook, the world's largest social media platform, also reported earlier this month that its capacity to moderate some content, such as that related to suicide and self-injury, had become limited after the company sent its human moderators home. "With fewer content reviewers, who are essential in our continued efforts to increase enforcement in such sensitive areas, the amount of content we took action on decreased in Q2 from pre-Covid-19 levels," Facebook said.

Since updating its hate speech policy in June 2019, YouTube has removed tens of thousands of QAnon-related videos and hundreds of channels, the spokesperson also said.

Facebook has recently taken similar action against the group.

Originating about three years ago, QAnon is a group that claims, among other unsupported conspiracy theories, that many politicians and top-rated celebrities work with governments around the world in child abuse. As part of its effort, Facebook has removed hundreds of QAnon groups, pages and advertisements.

(Adapted from
