
Instagram bans graphic self-harm images

The father of Molly Russell, a teenager who took her own life after being exposed to self-harm images shared on Instagram, has argued that the popular social media platform 'helped kill' his daughter. Instagram has since taken steps to reduce graphic content on its platform.

By Areti Aikaterini Stefani, Julia Galera and Alexandra Tanase

The tragic impact of social media

Molly Russell took her own life in 2017, and news of her death subsequently spread around the world. She had allegedly been influenced by self-harm images she viewed on Instagram, and her family found that she had also posted material about depression and suicide on the platform. Molly's father is convinced that Instagram contributed to her decision to take her own life.

Who’s responsible for the posted content?

Users of the popular social media network may be exposed to accounts showing graphic images relating to self-harm. Some of these posts include drastic content or descriptions in which users promise to escalate their self-harm depending on how their followers react. Such posts can encourage vulnerable young people to inflict more serious harm on themselves.

Ged Flynn, Chief Executive of Papyrus, a suicide-prevention charity for children, explains that “the law around suicide is very clear – aiding and abetting and encouraging someone to end their life by suicide is illegal. Anybody who does so, on or offline, through imagery or words, verbal or written, is at least potentially complicit.”

Some argue that cases involving self-harm material should be approached in the same way as those concerning ‘revenge pornography’ and the posting of other illegal content. This would mean that the users who share graphic material, rather than the social media platforms, would be prosecuted.

Such a policy, however, leaves social media companies unaccountable for the damage their platforms can cause. Instead, according to the Culture Minister, Margot James, there are plans for legislation requiring social media companies to delete all illegal material posted on their platforms.

The Children’s Commissioner for England, Anne Longfield, has also urged the Government to put on a statutory footing the principle that social media platforms owe a duty of care to their youngest users. This would oblige them to protect children from the harmful influence of their websites.

Social media platforms’ response

Recently, Instagram has significantly modified its approach to managing self-harm material. Shortly after a meeting with the Health Secretary, Matt Hancock, who had earlier warned that social media platforms could be banned if they failed to remove harmful content, Instagram's head, Adam Mosseri, admitted in an official blog post that ‘we are not where we need to be on self-harm and suicide, and that we need to do more to keep the most vulnerable people who use Instagram safe’.

The platform and Facebook (Instagram’s parent company) have already taken some steps to prevent suicide and self-harm. In 2016, the two platforms rolled out a reporting tool that lets users anonymously flag posts suggesting that friends are threatening self-harm or suicide. Flagging a post triggers a message from Instagram to the user in question offering support, including access to a helpline and suggestions such as calling a friend. In 2017, the company added technology that automatically flags posts that may contain expressions of suicidal thoughts for human review.

It has been argued that these policies were insufficient to safeguard children, and Instagram has since gone further. In a statement explaining the change, Mosseri drew a distinction between graphic images of self-harm and non-graphic images, such as photos of healed scars. While the former will be ‘vanished from the face of Earth’, the latter will still be allowed, to avoid stigmatising mental health issues. Instagram will, however, make non-graphic images harder to find by excluding them from search results, hashtags and recommended content.

The platform is also developing technology to blur remaining self-harm content and place it behind a privacy screen, so that people do not come across it accidentally. The company also plans to increase the support it offers to users who self-harm and use Instagram to share their experiences.

What’s next?

Although Instagram is working on further improvements, attention has also turned to other popular social media platforms, such as Facebook and Snapchat. It has been argued that they too should be obliged to take similar action in order to effectively eliminate harmful and illegal material.

They are likely to accept this responsibility, not least because they want to prevent the British Government from stepping in and imposing bans. However, this should be seen not only as a way of avoiding legal consequences, but also as an opportunity to develop more sophisticated algorithms capable of accurately identifying potentially harmful content.

If you have been affected by any of the issues discussed in this blog post and are in the UK, Samaritans can be contacted by calling 116 123 or by email at jo@samaritans.org. Other helplines can be found at www.befrienders.org.

