Procedures for moderating self-harm and suicide content online are inadequate, study finds

  • 19th February 2025

Online platforms need to improve how they moderate self-harm and suicide content in order to keep moderators and users safe, according to a study led by researchers from the NIHR Bristol Biomedical Research Centre (BRC) and the University of Bristol. The study is published in Digital Society.

Online platforms and communities have a duty to protect children and adults from harmful content under the Online Safety Act 2023. Moderators are responsible for deciding which content to remove or flag, though platforms also rely on users to report inappropriate content. This job is particularly challenging when it involves self-harm and suicide content, which can affect the mental health of both users and moderators.

This study aimed to evaluate users' experiences of moderation. The findings have been used to produce guidance for online industry leaders and policymakers on moderating online self-harm and suicide content.

Researchers interviewed 14 people aged 16 years and over, representing diverse ethnic backgrounds and age groups, who had engaged with this type of content online. Ten were female and four were male. Eight were interviewed again after three months, and seven again at the end of the six-month study.

Participants reported that:

  • It was difficult to report inappropriate content, particularly when their mental health was poor
  • Inconsistent moderation and unclear communication left them feeling confused and frustrated when their own content was moderated
  • They felt it was important to have moderators with personal experience of self-harm or attempted suicide, but that performing this role put these individuals at risk

Based on their findings, the researchers have produced a set of recommendations for online industry leaders to inform their moderation practices.

Online communities and platforms should:

  • Not rely solely on users to report harmful content
  • Provide moderators with training and supervision from mental health professionals
  • Ensure moderators have adequate time and mechanisms to address their mental and physical health needs
  • Have open dialogues with users about their moderation practices

They should also adopt a balanced approach to moderation by:

  • Prioritising users’ safety while also considering the wellbeing of individuals posting content
  • Clearly explaining decisions about moderation to users
  • Providing support to users who have had their content removed or their account banned

Dr Zoë Haime, Senior Research Associate at the Bristol BRC, said:

“Policymakers should recognise the challenges online platforms and communities face in moderating mental health spaces, and advocate for strategies that prioritise user safety.

“They should support this study’s recommendations for industry leaders, promoting a transparent and sensitive approach to moderating self-harm and suicide content online.”

This study was supported by the NIHR Bristol BRC. It was based on information shared by participants in interviews conducted for the DELVE study, funded by Samaritans, UK.