Meta Sues Creator of Explicit Deepfake App for Violating AI “Nudity” Ad Policies


Meta Platforms, the parent company of Facebook and Instagram, has filed a lawsuit against the developer of the explicit deepfake app CrushAI for violating its advertising policies and misusing artificial intelligence (AI) technology. The suit was filed in a Hong Kong court against Joy Timeline HK Ltd., which is accused of illegally running tens of thousands of sexually suggestive ads on Meta’s platforms.

AI-Based Nudity Ads Flood Facebook

Since February 2024, the developer of CrushAI has allegedly run more than 87,000 sexually explicit ads on Facebook and Instagram. The app relies on “nudification” technology: AI that alters a photo of a person to make them appear naked.

These ads were distributed through more than 170 business accounts and 135 Facebook pages, targeting users in the United States, Canada, Germany, Australia, and the UK. Some carried disturbing calls to action such as “upload anyone’s photo to see them naked instantly.”

Violating Meta’s Policies and Threatening Privacy

Meta stated that the app blatantly violates its Terms of Service, which strictly prohibit the distribution of non-consensual sexual content and the use of AI for pornographic purposes. According to court documents, these ads not only breach ethical standards but also endanger individual privacy and safety, particularly for women and minors.

Deepfake technology like that used by CrushAI has become increasingly accessible. Apps of this kind let anyone edit someone’s face or body into sexualized images and spread them, without consent and without limits.

Meta Builds AI Detection for “Nudity” Ads

In response, Meta said it has removed thousands of related ads, shut down the offending business pages, and blocked more than 3,800 URLs linked to nudification apps. The company is also developing a new machine-learning detection system to spot harmful ads, including those that don’t show explicit nudity.

This new tech can detect keywords, phrases, and emojis commonly used by explicit app advertisers. Meta is also working with other tech companies through the Tech Coalition’s Lantern Program, sharing information to help dismantle distribution networks of harmful apps.
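Meta has not published details of this classifier, so the sketch below is only a rough illustration of the signal-matching idea it describes: a plain Python rule matcher over hypothetical phrase and emoji lists. None of the terms, names, or functions here come from Meta, whose actual system is a trained model rather than a static lookup.

```python
import unicodedata

# Hypothetical term lists for illustration only; Meta's real system is a
# proprietary machine-learning classifier, not a static lookup like this.
FLAGGED_PHRASES = ["undress", "see them naked", "remove clothes"]
FLAGGED_EMOJI = {"\U0001F351", "\U0001F4A6"}  # example emojis sometimes used as innuendo

def normalize(text: str) -> str:
    """Lowercase and strip accent marks so simple spelling evasions
    (e.g., 'ùndress') still match the flagged phrase list."""
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.lower()

def flag_ad_text(text: str) -> list[str]:
    """Return the reasons an ad's copy would be routed to human review."""
    reasons = []
    clean = normalize(text)
    for phrase in FLAGGED_PHRASES:
        if phrase in clean:
            reasons.append(f"phrase: {phrase!r}")
    for ch in text:
        if ch in FLAGGED_EMOJI:
            reasons.append(f"emoji: {ch!r}")
    return reasons

if __name__ == "__main__":
    sample = "Ùndress any photo instantly \U0001F4A6"
    print(flag_ad_text(sample))  # ["phrase: 'undress'", "emoji: '💦'"]
```

The Unicode normalization step reflects a tactic reportedly used by these advertisers: swapping in accented or look-alike characters to slip past simple text filters.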

A Global Concern Beyond Just Ads

The rise of sexually exploitative deepfake apps is more than an ad problem—it’s a global social issue. Studies show that over 95% of deepfake content online is non-consensual pornography, with women being the primary targets. Similar cases have reached schools, where students have created “nude” AI versions of classmates.

Amid growing misuse of AI for harmful purposes, the U.S. government has passed the Take It Down Act, which requires digital platforms to remove non-consensual intimate imagery within 48 hours of a report. Meta supports the law and cites it as part of the legal backdrop for its lawsuit against CrushAI.

Meta Takes a Firm Stand

In its official statement, Meta emphasized that it will not hesitate to take legal action against anyone who violates its ad policies, especially those who use AI to promote harmful content. After spending more than USD 289,000 on investigations, Meta says the lawsuit is part of its broader commitment to protecting users from digital exploitation.

Conclusion: AI Must Not Become a Tool for Digital Abuse

This case is a strong warning that AI must not become a weapon of digital abuse. Without strict regulation and strong enforcement, apps like CrushAI can spread easily and destroy the reputations and lives of victims.

Meta’s lawsuit against the creator of the explicit deepfake app sends a clear message: major tech companies are beginning to take greater responsibility for controlling AI’s harmful impacts, especially where people’s bodies, privacy, and dignity are at stake.