
In the weeks since whistleblower Frances Haugen testified before Congress about how Facebook's algorithms feed harmful content and misinformation to the masses, the question of how social media perpetuates eating disorder-related content has reached the mainstream. Internal documents revealed how Instagram's algorithm promoted content associated with the most toxic corners of body, weight, and health-related material, exposing users to "proana" (short for pro-anorexia) and other disordered eating content. This has been especially harmful to younger users, whose sense of self and self-esteem are still so vulnerable.
For many people reading this, this is old news. Content that glamorizes eating disorders has been prevalent on social media since long before Instagram existed; Myspace and Tumblr were especially notorious hotbeds for all things "thinspiration" through the late 2000s and early 2010s. Over the years, tech companies have become more proactive about taking down profiles and posts containing keywords associated with eating disorders, while ensuring that anyone who searched those terms was directed to helplines and psychiatric support. Over the past month, Facebook has been met with outrage from a public demanding to know why its algorithms would continue to promote content so dangerous to young people. Is it a shameless cash-grab tied to the weight loss industry? A flaw in the code? How could they let this continue to happen? While those questions are valid to ask, it's important to note that identifying harmful content is not as simple as it may seem. In a New York Times article, authors Kate Conger, Kellen Browning, and Erin Woo referenced an important quote on this topic:
“Social media in general does not cause an eating disorder. However, it can contribute to an eating disorder,” said Chelsea Kronengold, a spokeswoman for the National Eating Disorders Association. “There are certain posts and certain content that may trigger one person and not another person. From the social media platform’s perspective, how do you moderate that gray area content?”
Source: https://www.nytimes.com/2021/10/22/technology/social-media-eating-disorders.html
From an outsider's perspective, it may be easy to look at one profile and categorize it as "harmful" while viewing another as "health-related." That judgment, however, can differ drastically from one individual to the next. There is a wealth of content that was never intended to be viewed as "proana" but is, unfortunately, consumed that way; think models, influencers, or fitness gurus. How is an algorithm supposed to predict what reaction a given user will have? It becomes even harder when we consider how many people use social media to tell the story of their eating disorder recovery. One of the most beautiful aspects of the modern age is that we can use these platforms to connect with others who are struggling and offer them support. Unfortunately, like influencer content, accounts meant to promote recovery can also be viewed through a toxic lens that further fuels disordered thinking. Is Instagram supposed to shut down these survivors' accounts as well? Meanwhile, the accounts that genuinely push these toxic ideologies are often hard for platforms to identify: their hashtags are typically one letter off from the keywords that would get them taken down, and the wording of their posts is carefully crafted to evade detection as well.
Instagram and Facebook have made real progress in taking these accounts down compared to years past, but these new reports have exposed the flaws in their systems. They are not blameless, yet it is important to remember how difficult policing this kind of evasion is at the scale of over a billion users. No one (not just girls) should be exposed to accounts that promote eating disorders, but those determined to seek such content out can always find it hiding in the shadows. It will be interesting to see how Facebook addresses this situation, and whether it will change its technology and AI to more accurately identify the nature of these accounts. The company may also want to consider that the best course of action, if vulnerable people are to truly avoid these triggers to their mental health, is to denounce its own platform: to stop assuming that everything can be fixed from within and, for once, simply acknowledge that its platform is not suitable for everyone to use.