Meta Fixes Glitch That Bombarded Instagram Users with Violent Videos

Key Takeaways
- On Tuesday, a number of users reported that they were constantly being shown violent reels on Instagram.
- Meta has apologized for the incident and fixed the error within two days of the first report.
- However, the exact cause of the glitch and the number of users affected remain unknown.

Imagine you wake up and instinctively start scrolling Instagram and the first reel you see is of a man getting shot in the head. Not a good start to the day, is it?
This is exactly what happened on Tuesday, when Instagram feeds were suddenly filled with gruesome and gory reels. Meta has now fixed the technical glitch that was flooding users' feeds with violent and not-safe-for-work reels. The company has also apologized for the inconvenience.
A Little Background

"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake." - Meta spokesperson
The problem arose on Tuesday, when a number of users complained that they were constantly being recommended videos featuring violent content, such as someone being beaten or killed. Even those who had the sensitive content filter turned on were recommended such videos.
The only silver lining is that Meta was quick to act - within two days, the glitch was completely fixed. But we are yet to find out what caused it or exactly how many people were affected.
Usually, Meta automatically removes such violent content from its platform. However, the company has a history of accidentally promoting harmful posts such as misinformation during the pandemic, violent clips from the Myanmar genocide, eating disorder content to teens, and so on.
Meta's Changing Content Policy

Meta raised a lot of eyebrows in January when it changed its policies, deciding to shift away from third-party fact-checkers and rely on community-driven monitoring instead (just like Elon Musk's X).
Basically, it's now the responsibility of the users to report content that goes against the platform's guidelines. On top of that, Meta is heavily dependent on automated tools for the processing of such reports.
Previously, these tools scanned all posts for violations of the platform's content policy. Now, in the name of giving users more freedom of expression, Meta has instructed the tools to look only for posts that promote illegal activities such as terrorism and human trafficking. Because of this approach, a lot of other offensive posts are making it to the main feed.
Apparently, these changes were made to align with the views of the new President, who had previously criticized Meta for its stringent moderation policies. However, as the Tuesday incident shows, this approach is clearly not working.
This is a huge concern because Meta is now trying to expand its short-video user base amid TikTok's troubles with US authorities. TikTok has less than a month remaining to find a non-US buyer, or it will be banned in the country.
If the ban does go through, Meta will be the biggest beneficiary, as it is the next best option. However, with such a great opportunity comes a great responsibility - to create a safe space for all users, especially underage ones, which Meta hasn't been doing well so far.