Regulation

Meta Sues Scam Advertisers Using Celeb Deepfakes on Instagram

Meta filed multiple lawsuits against scam advertisers using celebrity deepfakes and cloaking techniques across Facebook and Instagram. The enforcement covers operations in Brazil, China, and Vietnam.

March 3, 2026
13 min read

On February 26, 2026, Meta announced that it had filed multiple lawsuits against deceptive advertisers who used celebrity deepfakes, celeb-bait tactics, and cloaking techniques to defraud users on Facebook and Instagram. The full announcement: Meta Takes Legal Action Against Scam Advertisers.

The lawsuits target operations in Brazil, China, and Vietnam. In Brazil, advertisers used altered images and AI-generated voices of celebrities to promote fraudulent healthcare products. In China, a tech company ran celeb-bait ads as part of a larger investment fraud scheme. In Vietnam, an advertiser used cloaking — hiding the true content of landing pages from Meta's ad review systems — to run subscription fraud.

Meta also issued cease-and-desist letters to eight former Meta Business Partners who offered "un-ban" services and rented access to trusted accounts. For creators whose images are frequently misused in scam ads, Meta's celeb-bait protection program now covers over 500,000 public figures worldwide.

πŸ’‘ Did You Know?

  • Meta's celeb-bait protection program now covers over 500,000 celebrities and public figures globally.
  • Meta's AI tools can now detect cloaking — where scam ads show one version to reviewers and another to real users.
  • Cloaking-based subscription fraud typically charges users recurring credit card fees for products they never receive.
  • Meta worked with Longchamp and UK/Nigerian law enforcement to disrupt scam operations, resulting in seven arrests.

The Lawsuits Meta Filed on February 26

Meta's legal action targets four distinct scam operations across three countries. The details come directly from the official announcement: Meta Takes Legal Action Against Scam Advertisers.

Brazil — Healthcare fraud with celeb deepfakes. Two separate Brazilian operations used altered images and AI-generated voices of celebrities to promote fraudulent healthcare products. One group, led by Vitor Lourenço de Souza and Milena Luciani Sanchez, created convincing deepfake endorsements. A second operation, run through a company called Brites Corp, used deepfakes of a prominent physician to advertise healthcare products without regulatory approval — and then sold courses teaching others how to replicate the scheme.

China — Investment fraud via celeb-bait. Shenzhen Yunzheng Technology Co. used celebrity images in ads targeting users in the US and Japan, among other countries. The ads lured victims into joining fake investment groups — a classic social engineering fraud that exploits the trust associated with well-known public figures.

Vietnam — Subscription fraud with cloaking. An advertiser named Lý Văn LÒm used cloaking to hide the true nature of landing pages from Meta's ad review system. The visible ads offered deeply discounted items from brands like Longchamp. Users who engaged were asked for credit card information, received nothing, and were charged unauthorized recurring fees.

Meta stated that it has taken technical enforcement actions including suspending payment methods, disabling accounts, blocking domain names, and sharing intelligence with industry partners.

How Meta's Celeb-Bait Protection Works

Celeb-bait is the practice of using a public figure's image — sometimes with AI-generated voice or likeness — to create ads that appear to be legitimate endorsements. The ads direct users to scam websites that collect personal information or payments for products that never ship.

Meta's defense operates on multiple layers:

Image fingerprinting. Meta developed protections in October 2024 that monitor the images of public figures most frequently targeted by scam advertisers. This program now covers over 500,000 celebrities and public figures worldwide. When a flagged image appears in a new ad, it triggers additional review.
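
Meta has not published how its fingerprinting works, but the general idea can be illustrated with a perceptual "average hash": shrink an image to a tiny grayscale grid, set one bit per pixel depending on whether it is brighter than the mean, and compare hashes by Hamming distance. A minimal sketch, assuming an already-downscaled 8x8 grid stands in for the image (`average_hash` and `hamming` are illustrative names, not Meta's API):

```python
def average_hash(pixels):
    """Perceptual "average hash": one bit per pixel, set when the pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance suggests a near-duplicate image."""
    return sum(a != b for a, b in zip(h1, h2))

# Stand-in for a downscaled 8x8 grayscale headshot of a public figure.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]

# A lightly edited copy (a scammer's re-crop or filter) flips only a few bits,
# so its hash stays close to the original's and can trigger review.
tampered = [row[:] for row in original]
tampered[0][0] = 255

distance = hamming(average_hash(original), average_hash(tampered))
```

Real systems use far more robust embeddings, but the matching principle is the same: edits that preserve the visual identity of a face leave the fingerprint mostly intact.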

AI-powered ad screening. Meta's review systems use machine learning to identify patterns common to celeb-bait ads: mismatched visual quality, unnatural text overlays, and landing pages that don't match the ad's implied brand.
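
One of those signals, a landing page whose domain has nothing to do with the advertised brand, can be approximated with a toy heuristic. This is a sketch of the idea only, not Meta's actual screening logic; the function name and the substring rule are assumptions for illustration:

```python
from urllib.parse import urlparse

def brand_mismatch(ad_brand: str, landing_url: str) -> bool:
    """Toy signal: flag ads whose landing-page domain never mentions
    the brand the creative claims to represent."""
    host = urlparse(landing_url).hostname or ""
    return ad_brand.lower().replace(" ", "") not in host.lower()
```

On its own this rule would mis-flag legitimate resellers; a production screener combines many weak signals like this one before escalating an ad for review.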

Cross-platform intelligence sharing. When Meta identifies and blocks scam domains or payment methods, it shares this information with industry partners so they can take action on their platforms as well.

For creators whose images are used in scam ads, the most important step is reporting. Every report improves the training data that powers Meta's detection systems. If you are a creator with a verified account or a public figure, check whether your profile is included in Meta's proactive monitoring.

What Is Cloaking and Why It Threatens Creator Trust

Cloaking is a technique where a webpage displays different content depending on who is viewing it. Scam advertisers use cloaking to show clean, policy-compliant content to Meta's automated ad review systems while showing fraudulent content to real users.

This is especially dangerous because it undermines the entire ad review process. A cloaked ad might pass review as a legitimate product promotion, then redirect real users to a phishing page, a fake checkout, or a subscription trap.

Meta's response has been to deploy AI tools that analyze cloaking behavior — essentially teaching its systems to detect when a URL serves different content to different visitors. The February 26 announcement specifically highlights these new AI capabilities as a key enforcement upgrade.
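
Meta has not detailed these tools, but the core check can be sketched: fetch the same ad URL as a reviewer and as an ordinary user, then measure how much the two responses diverge. In this minimal sketch the hard-coded HTML stands in for two such fetches, and `cloaking_score`, `looks_cloaked`, and the 0.2 threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher

def cloaking_score(reviewer_html: str, user_html: str) -> float:
    """Dissimilarity in [0, 1]: 0.0 means identical pages, 1.0 means nothing shared."""
    return 1.0 - SequenceMatcher(None, reviewer_html, user_html).ratio()

def looks_cloaked(reviewer_html: str, user_html: str, threshold: float = 0.2) -> bool:
    """Flag when the page served to reviewers diverges sharply from the real one."""
    return cloaking_score(reviewer_html, user_html) > threshold

# Stand-ins for two fetches of the same ad URL with different client fingerprints.
reviewer_page = "<html><body><h1>Designer Bags</h1><p>Browse our leather catalog.</p></body></html>"
user_page = "<html><body><form action='/charge'><input name='card'><button>Claim 90% off</button></form></body></html>"
```

Real cloakers also key on IP ranges, geography, and request timing, so production detectors vary those dimensions too rather than comparing a single pair of fetches.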

For creators and brands, cloaking creates a trust problem. If users encounter scam ads that mimic real brands or creators, they become less likely to click on legitimate sponsored content. Every scam ad degrades the value of authentic influencer marketing.

This is why Meta's legal action matters beyond the specific defendants. By publicly suing cloaking operators and publicizing the cases, Meta is attempting to deter future scammers and reassure advertisers that the platform is actively defending ad integrity.

To understand the real value of your legitimate sponsorships, use the Sponsored Post ROI Calculator.

Cease-and-Desist Letters to Meta Business Partners

In addition to the lawsuits, Meta issued cease-and-desist letters to eight former Meta Business Partners who were offering abusive services. These services included:

  • Phony "un-ban" services β€” claiming to restore suspended advertiser accounts for a fee
  • Account rental β€” providing access to trusted, established accounts so that scammers could bypass Meta's enforcement systems

These practices are particularly harmful because they exploit the trust-based infrastructure that Meta uses to distinguish legitimate advertisers from bad actors. When a "trusted" account runs scam ads, Meta's systems are slower to catch the abuse because the account has a clean history.

Meta stated it is reviewing its entire Business Partner ecosystem and working on enhanced vetting methods for approving business partnerships in the future.

For creators who work with agencies or media buyers, this is a practical warning: vet your partners. If an agency offers to "fix" a suspended account or provides shortcuts around platform policies, that agency may be operating in exactly the space Meta is now prosecuting. Legitimate partners use legitimate tools.

Check our How to Negotiate Brand Deals guide for frameworks on evaluating potential agency and brand partnerships.

Cross-Border Enforcement and What It Means for the Industry

The geographic scope of Meta's lawsuits — Brazil, China, and Vietnam — reflects the global nature of online advertising fraud. Scam operations are typically run from jurisdictions where enforcement is difficult, targeting users in high-value markets like the US, Japan, and Europe.

Meta also referenced a recent collaboration with law enforcement in the UK and Nigeria that led to seven arrests related to a scam center. This suggests Meta is investing in law enforcement partnerships as a complement to platform-level enforcement.

For the influencer marketing industry, cross-border legal action sets an important precedent. It signals that platforms are willing to pursue scammers through legal systems, not just block them on the platform. The deterrence value of lawsuits is significantly higher than technical blocks alone, because lawsuits create financial consequences and public accountability.

Brands running international influencer campaigns should factor ad fraud risk into their platform selection. Platforms that invest in cross-border enforcement — and publicize the results — offer a more trustworthy advertising environment. Meta's February 26 announcement positions it as a leader in this area.

What Creators Should Do to Protect Themselves

Scam ads that misuse creator images are not just a platform problem — they are a personal brand problem. When a user sees a fake ad featuring your face promoting a scam product, they may associate you with the fraud, even though you had no involvement.

Here is a concrete protection framework:

1. Claim your intellectual property. Register your name and brand with Meta's Brand Rights Protection program. This gives you tools to report ads that misuse your name, logo, or likeness. See Meta's brand rights protection updates for details.

2. Audit your ad ecosystem. If you run ads yourself, ensure your ad accounts are secure with two-factor authentication and limited admin access. Compromised ad accounts are a common vector for scam operations.

3. Educate your audience. Periodically remind your followers which brands you actually partner with and where to find your real product recommendations. This transparency builds trust and makes it harder for scammers to exploit your credibility.

4. Report aggressively. When you find scam ads using your likeness, report them through Meta's in-app tools and through the Scam Prevention Hub. Each report contributes to better detection.

For a broader look at protecting and growing your creator brand, check our How to Become an Influencer guide.

What this article covers

The specific scam operations Meta is suing and how they operated.

How Meta's AI-powered cloaking detection works and why it matters for ad integrity.

What creators and influencers should do to protect their image from celeb-bait fraud.

Creator protection plan

1. Check if you are covered by celeb-bait protection

Meta's program protects over 500,000 public figures. If you are a verified creator or public figure, your images may already be monitored. Report any misuse through Instagram's IP reporting tools.

2. Monitor for unauthorized use of your likeness

Set up Google Alerts for your name combined with terms like 'ad,' 'product,' and 'endorsement.' Use reverse image search to check if your photos appear in unauthorized campaigns.

3. Report scam ads immediately

Use Meta's in-app reporting and the Scam Prevention Hub to flag any ads misusing your content. Timely reports improve Meta's detection training data.

4. Document brand partnerships clearly

Maintain a public record of your actual brand deals so your audience and Meta's systems can distinguish legitimate partnerships from fraudulent use of your image.
