We open Instagram for entertainment, but it recently gave us something darker.

Fez – Late Wednesday night, Instagram feeds took a disturbing turn. Out of nowhere, users scrolling through Reels were hit with a wave of horrifying content: graphic violence, explicit imagery, and deeply unsettling videos no one asked for.

Blood, brutality, and bodies filled feeds that were supposed to be harmless escapes. The chaos wasn’t just a glitch. It was a failure, one Meta now admits was its own mistake.

Meta, Instagram’s parent company, publicly apologized on Thursday after the platform’s recommendation system started pushing violent and graphic content onto users’ Reels pages. 

In a statement shared with CNBC, a Meta spokesperson called it an “error” and promised that the problem had been fixed. But for many users, the damage was already done.

All over social media, people described the shock of opening Instagram expecting fashion, memes, and cooking hacks, only to see videos of dismembered bodies, people bleeding out, and animals suffering in ways no one should witness. 

Even users who had Instagram’s Sensitive Content Control set to its strictest level reported seeing the flood of horrifying videos.

It wasn’t just one or two posts slipping through the cracks. It was a full-blown breakdown of Instagram’s content filters, the system that’s supposed to shield users from exactly this type of trauma.

Meta’s official content policy bans extreme violence and gore, including footage of dismemberment, burned bodies, and human suffering. But there’s always been a gray zone: some graphic content is allowed if it’s meant to raise awareness of serious issues, like war crimes or human rights abuses. 

These posts usually come with warning labels, giving viewers a choice before they see anything disturbing.

But on Wednesday night, there were no warnings, just a flood of horror, front and center on everyone’s Reels.

Meta claims it uses AI tools and a team of 15,000 content moderators to catch and remove this type of content before it spreads. Clearly, that system failed. And it failed at a time when Meta is actively weakening its content moderation rules.

In January, Meta announced it would pull back on proactive censorship. Instead of blocking all potential violations, Meta’s AI would focus only on the most severe cases, like terrorism, child exploitation, and drug trafficking. For lower-level issues, the company would wait for users to report content themselves before taking action.

That shift, paired with mass layoffs that gutted Meta’s trust and safety teams, left Instagram’s safeguards hanging by a thread.

Now that thread has snapped, and users are left holding the weight of what Meta let through. A fix may be in place, but for the people who saw those images, it came too late.
