Facebook pledged more transparency on Tuesday as it announced that in the first three months of 2018 it deleted over half a billion fake accounts and millions of pieces of violent or obscene content.
The rare glimpse the company provided into its internal moderation figures revealed the herculean task Facebook faces as the largest social media network in the world. The work is carried out by thousands of human moderators and complex artificial-intelligence systems.
“My top priorities this year are keeping people safe and developing new ways for our community to participate in governance and holding us accountable,” Facebook chief executive Mark Zuckerberg wrote in a post, adding: “We have a lot more work to do.”
In the first three months of this year, in addition to the fake accounts, the social network deleted about 21 million posts related to sex or nudity, 2.5 million posts and comments containing hate speech, and nearly 2 million pieces of content identified as terrorism-related material from al-Qaeda and the Islamic State, the company said in its transparency report.
Despite the huge volume of removals, the average Facebook user would probably not notice the change. Roughly 8 out of every 10,000 views on the platform involved content removed for the reasons mentioned above, up from 7 in 10,000 at the end of last year.
About two months ago, the company published the internal rules it follows when reviewing content and deciding on removals, and it now plans to update this new report twice a year. Facebook currently employs about 10,000 human moderators to remove objectionable content and plans to double that number by the end of the year.
The investments the company made earlier in identifying and removing objectionable content are producing good results. For content featuring sex, nudity, or terrorism-related material, monitoring software was the first to raise a red flag 96 percent of the time, with humans reporting such cases only after the software had already done so, Facebook said.
On the other hand, the number of fake accounts Facebook deleted decreased. In the first quarter of the year, the social network deleted 583 million fake accounts, about 100 million fewer than it removed in the last quarter of 2017. The company attributed this decrease to the “variability of our detection technology’s ability to find and flag” fakes.
(Adapted from WashingtonPost.com)