20/07/2025

NEW YORK, July 20: In a sweeping effort to bolster platform integrity, Meta has confirmed the deletion of approximately 10 million Facebook accounts in the first half of 2025, targeting impersonators and spammy profiles in what some are calling the company’s largest account purge to date.
The move comes amid a broader trend of major tech platforms tightening account management, with Google and Samsung previously issuing deletion warnings for inactive users. However, Facebook’s actions differ significantly — targeting active accounts linked to impersonation, fake engagement, and spam.
A July 14 announcement on the Facebook Creators blog detailed the rationale behind the mass removals, stating:
“We believe that creators should be celebrated for their unique voices and perspectives, not drowned out by copycats and impersonators.”
The announcement referenced ongoing threats across digital platforms, including impersonation scams, which have affected users across services like Amazon Prime. Meta said its crackdown is part of a commitment to ensure original content is visible and rewarded.
Since January, Meta said it has removed approximately 10 million fake accounts impersonating popular content creators. An additional 500,000 accounts engaged in spam or fake engagement saw their reach reduced, comments demoted, and monetization suspended.
“Facebook aims to be a place where original content thrives,” the company emphasized. “We will continue taking action to protect creators and the broader community.”
Following Meta’s blog post, social media lit up with complaints from users alleging that their accounts were wrongly deleted.
“I strongly believe this purge, while framed as a safety measure, is sweeping up innocent people and branding them as criminals without recourse or transparency,” one user wrote.
Another advised searching terms like ‘Meta ban wave’ on Reddit, TikTok, or Twitter, pointing to hundreds of reports of legitimate Facebook and Instagram accounts being disabled without warning.
While Meta has acknowledged a “technical error” affecting Facebook Groups, it maintains there is no evidence of widespread erroneous enforcement across its platforms. A Meta spokesperson stated:
“We take action on accounts that violate our policies, and people can appeal if they think we’ve made a mistake.”
The company confirmed that AI is used in the content moderation process, but rejected claims that artificial intelligence was wrongly flagging large volumes of user accounts.
Many creators and public figures vulnerable to impersonation see the purge as a positive step toward a safer, more trustworthy online environment. However, the growing number of complaints from ordinary users caught in the crossfire is raising questions about transparency and the limits of automated enforcement.
As social media platforms evolve to combat malicious behavior, the challenge remains: how to protect legitimate users without alienating them in the process.