
Published 02 Dec, 2024 03:23pm

Instagram failing to control self-harm content targeting teenagers

A new study by Danish researchers at Digitalt Ansvar (Digital Accountability) reveals that Instagram, owned by Meta, is failing to adequately moderate self-harm content, potentially contributing to its spread among teenagers.

The researchers created a fake self-harm network on the platform, posting 85 increasingly graphic images and messages over a month.

Despite Meta’s claim that it removes 99% of harmful content using AI, not a single image was taken down during the experiment.

Digitalt Ansvar’s own AI tool, however, identified a significant portion of the harmful content, suggesting Instagram possesses the technology to address the problem but isn’t using it effectively.

The study further found that Instagram’s algorithm actively promoted the network’s growth, connecting fake profiles of 13-year-olds with every member of the group. This suggests the platform’s recommendation algorithms are inadvertently helping self-harm communities spread.

Experts expressed alarm at the findings, among them a psychologist who previously resigned from Meta’s suicide prevention group. They warned of the life-threatening consequences of Instagram’s inaction and said the platform prioritizes engagement over user safety.

The study raises serious concerns about Instagram’s compliance with EU law, specifically the Digital Services Act, which requires large platforms to identify and mitigate systemic risks to users’ well-being.
