Photo: Evelyn Hockstein/for The Washington Post via Getty Images
In light of National Suicide Prevention Week, which began on Sunday, September 8th, social media giant Facebook has been actively promoting its mission to act as a force for good on the issues of suicide and self-harm. The company announced yesterday that it will no longer allow graphic cutting images and other forms of self-harm content on its platform, and that it will also crack down on the uploading and sharing of this kind of material on Instagram.
The company wants to avoid unintentionally advertising this kind of content or triggering struggling individuals with it, and believes the change is a key step towards a brighter future online. Since 2006, Facebook has introduced multiple policies and content restrictions in an effort to address suicide in society, as many have claimed social media acts as an “echo chamber” for negativity.
Going further with content restriction, Facebook and Instagram have been using an AI-based set of suicide prevention tools introduced in 2017. These tools include “sensitivity screens,” which blur out images such as posts showing healed self-harm cuts, to avoid encouraging this kind of behavior in other users.
“We’ve also taken steps to address the complex issue of eating-disorder content on our apps by tightening our policy to prohibit additional content that may promote eating disorders,” said Antigone Davis. Davis recently revealed in an interview that the company is also in the process of hiring a health and well-being expert for its safety-policy team to focus on supporting the online community.
Although this is an example of a large social media company driving positive change in the online community, I feel it is important to remember why some of this negative content is being posted and viewed in the first place. It’s almost as if Facebook is trying to repair an issue it essentially created. Suicide and self-harm are growing problems in our nation, and social media has played a large role in them. While it is important and respectable that these companies are putting effort into creating awareness and change online, to what extent should we blame Facebook for these consequences in the first place? Does the blame fall completely upon the users who generate and share self-harm content that may trigger others, or upon the platform itself? And is Facebook’s push to put an end to this negative content enough?