Facebook To Launch A.I. Tool For Detecting Revenge Porn Before It Is Reported

Facebook, which in recent years has been embroiled in a number of controversies over data breaches, the security of users' private information and the use of its platform to spread fake news, has announced that it is launching a new artificial intelligence tool that will enable the world's largest social media platform to identify and detect revenge porn even before such material is reported.

Facebook said this would spare victims of such posts the trauma they would otherwise suffer during the time needed to review and take down posts containing intimate pictures shared without their consent.

This is among the latest measures Facebook has taken to keep abusive content off its platform. The company has recently faced severe criticism over the allegedly poor working conditions of the contracted content reviewers who moderate posts published on the site. In that light, many analysts see this latest measure as a step in the right direction, as it could relieve some of the pressure created by those allegations.

“Finding these images goes beyond detecting nudity on our platforms. By using machine learning and artificial intelligence, we can now proactively detect near nude images or videos that are shared without permission on Facebook and Instagram,” Facebook’s Global Head of Safety Antigone Davis said in a blog post.

The new AI-powered tool works by recognizing a "nearly nude" photo, such as a lingerie shot, paired with derogatory or shaming text accompanying the picture. The tool treats this combination as an attempt by someone to upload the photo to embarrass or take revenge on another person. After this initial identification, the flagged post is then sent to a human reviewer for confirmation and subsequent removal.

Facebook said that in most cases where this process finds that a flagged post violates the company's community standards, the account associated with the post will be disabled.

The social media company also said it is expanding a previously announced pilot program that allows Facebook users to preemptively upload and flag images they fear could be posted as revenge porn. While noting that feedback on the program has been positive, Davis said the company still regards it as an "emergency option" only.

The company is also launching a support hub for victims of revenge porn, called "Not Without My Consent," developed with experts and victims' organizations.

(Adapted from CNBC.com)
