Facebook Is Using A.I. to Help Prevent Suicides

Facebook is attempting to detect suicidal tendencies, thoughts and feelings with the help of artificial intelligence (AI).

The move is part of the social media giant’s ongoing effort to “help build a safe community on and off Facebook.”

Posts and live videos on Facebook in which users express suicidal thoughts will be detected with the help of pattern recognition, part of the company’s “proactive detection” efforts to aid people who are expressing thoughts of suicide on the platform, the U.S.-based company said in an official blog post on Monday.

Facebook said that pattern recognition is also being used to help respond to reports faster, to improve how the platform identifies appropriate first responders, and to dedicate more reviewers from its Community Operations team to reviewing reports of suicide and self-harm. These reviewers include a team with specific mental health training.

Facebook’s move evidently follows several recent incidents in which Facebook Live was used to broadcast high-profile suicides and suicide attempts. Suicidal thoughts have also been expressed, and suicides livestreamed, on several other social media platforms in the recent past.

While Facebook had earlier appealed to users to report any suicidal content they encountered, the use of AI to detect suicidal tendencies takes that effort to a new level.

Commenting on the work, Guy Rosen, Facebook’s vice president of product management, said that “when someone is expressing thoughts of suicide, it’s important to get them help as quickly as possible.

“Facebook is a place where friends and family are already connected and we are able to help connect a person in distress with people who can support them. It’s part of our ongoing effort to help build a safe community on and off Facebook.”

The company was “starting to roll out artificial intelligence outside the U.S. to help identify when someone might be expressing thoughts of suicide, including on Facebook Live”, Rosen said.

“This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide. We continue to work on this technology to increase accuracy and avoid false positives before our team reviews.”

He said that, barring the EU, this would eventually be available worldwide.

Rosen said that the company uses signals such as the text of posts and the comments beneath them, and that phrases in comments such as “Are you OK?” and “Can I help?” can be strong indicators. In some cases, Facebook has found, the technology has identified videos that might otherwise have been overlooked.
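
To make the idea of these signals concrete, here is a minimal, purely illustrative sketch of how post text and concerned comments could be combined into a rough risk score. Facebook’s actual system is not public; the phrase lists, weights and function names below are assumptions for illustration only.

    # Illustrative sketch only: Facebook's real system is not public, and this
    # simple keyword-based scoring is an assumption, not the company's method.

    # Hypothetical comment phrases that, per the article, can be strong indicators.
    CONCERN_PHRASES = ["are you ok", "can i help"]

    # Hypothetical terms for the post text itself (assumed list).
    POST_TERMS = ["suicide", "end it all", "kill myself"]

    def score_post(post_text: str, comments: list[str]) -> float:
        """Return a rough 0-1 score from a post's text and its comments."""
        score = 0.0
        # Signal 1: the post itself contains concerning language.
        if any(term in post_text.lower() for term in POST_TERMS):
            score += 0.5
        # Signal 2: concerned replies from friends, as described by Rosen.
        concern_hits = sum(
            any(p in c.lower() for p in CONCERN_PHRASES) for c in comments
        )
        score += min(0.5, 0.1 * concern_hits)
        return score

    if __name__ == "__main__":
        post = "I can't do this anymore, I just want to end it all."
        comments = ["Are you OK?", "Can I help?", "Thinking of you."]
        print(f"score: {score_post(post, comments):.2f}")

In practice, a system like the one described would rely on trained classifiers and human review rather than fixed keyword lists; the sketch only shows how post content and comment signals might be combined.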

Rosen said that Facebook already employs dedicated teams working 24/7 around the globe “who review reports (of concerns of suicide and self-harm) that come in and prioritize the most serious reports”, and added that the latest developments are part of Facebook’s “ongoing commitment to suicide prevention.”

(Adapted from CNBC)
