Meta apologises for graphic content surge on Instagram Reels, fixes error
Meta has issued an apology after users reported seeing a surge of violent and graphic content in their Instagram Reels feeds.
The company confirmed on Thursday that it had corrected an “error” responsible for the “inappropriate content recommendations,” which surfaced despite users having sensitive content controls set to their highest level.
A Meta spokesperson expressed regret for the incident, stating, “We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologise for the mistake.”
The issue sparked concern among Instagram users, who took to social media to share their frustration over disturbing content, including graphic violence, appearing in their feeds. Some users claimed they encountered such posts even with Instagram’s “Sensitive Content Control” set to the most stringent moderation settings.
Meta’s policies aim to protect users from harmful imagery, removing content that includes graphic violence, such as dismemberment or disturbing depictions of suffering.
While the company generally bans such content, it does allow some graphic material to raise awareness about critical issues like human rights abuses or terrorism, provided it is flagged with warning labels.
Meta uses a combination of artificial intelligence, machine learning, and over 15,000 human reviewers to detect and remove disturbing imagery before it reaches users.
The company also aims to prevent the recommendation of inappropriate content, particularly content that may be unsuitable for younger audiences.
The incident comes on the heels of Meta’s recent shift in its content moderation approach. The company announced in January that it would adjust its automated systems to focus on high-priority violations like terrorism and child exploitation while relying more on user reports for less severe violations.
Meta also revealed plans to reduce the unnecessary demotion of content based on predicted violations and to allow more political content, a shift that has raised questions about the company’s ties to political figures, including President Donald Trump.
The policy changes followed significant layoffs in 2022 and 2023, which affected Meta’s civic integrity and trust and safety teams.