
Meta’s "Nudify" Deepfake Ads: A Closer Look at the Controversy

Meta Platforms, the parent company of Facebook and Instagram, has recently faced scrutiny for allowing hundreds of "nudify" deepfake ads on its platforms. These advertisements promote AI tools that create sexually explicit deepfakes using images of real individuals—a troubling trend that raises crucial questions about consent and online safety.

The Discovery: A CBS News Investigation

A CBS News investigation revealed that Meta’s platforms have become a breeding ground for these controversial ads. The investigation uncovered an alarming number of promotions on Instagram, particularly within the "Stories" feature, advertising the ability to "upload a photo" and "see anyone naked." Some ads even mockingly asked how such content could be allowed on social media.

Examples of Disturbing Ads

One ad stood out for featuring hyper-sexualized deepfake images of the celebrities Scarlett Johansson and Anne Hathaway, used as bait to get users to download the associated applications. These apps promised to animate real people’s images in degrading and explicit ways, often charging between $20 and $80 for access to their features. Some of the ads linked directly to the Apple App Store, showing that these tools are reaching users through mainstream distribution channels as well.

Caption: Ads on Meta’s platforms have marketed AI tools that let users create sexually explicit images of real people.

Meta’s Response: Action Taken Against Exploitative Content

In response to the CBS News findings, a Meta spokesperson stated, "We have strict rules against non-consensual intimate imagery." Following the investigation, Meta removed the offending ads, deleted the associated Pages, and permanently blocked the URLs linked to these apps. Even so, the sheer volume of "nudify" ads remains a concern, as some reappeared even after the initial takedowns.

The Ongoing Challenge

A Meta spokesperson acknowledged that the spread of AI-generated content represents an ongoing problem that requires constant vigilance. "The people behind these exploitative apps constantly evolve their tactics to evade detection," they stated, stressing Meta’s commitment to improving enforcement.

Understanding Deepfakes: A Growing Concern

Deepfakes are AI-manipulated images, audio, or videos that misrepresent real individuals, often with severe personal and societal consequences. Recent advances in AI have made the tools to create them far more accessible, heightening concerns about safety and consent, particularly where minors are involved.

Legislation in the Works

In a step toward combating this issue, President Trump recently signed the bipartisan "Take It Down Act," which requires websites and social media platforms to remove nonconsensual intimate imagery, including deepfakes, within 48 hours of notification. While the law targets the publication of intimate images without consent, it does not specifically address the tools used to create such content.

Platforms Taking a Stand: Meta and Apple’s Policies

Meta’s advertising standards specify that "ads must not contain adult nudity and sexual activity," a rule these promotions plainly violate. Similarly, Apple’s App Store guidelines reject content deemed offensive or "creepy." Enforcement challenges persist, however, as experts like Alexios Mantzarlis argue that enforcement at Meta and across the industry remains inadequate.

A Call for Action: Cross-Industry Collaboration

Mantzarlis argues that cross-industry cooperation is needed to tackle the deepfake problem. "If the app or website markets itself as a tool for nudification, then everyone else in the tech ecosystem must take action," he stated.

User Consent and Safety: A Larger Conversation

The promotion of "nudify" apps raises serious concerns about consent and the dangers posed to minors. A CBS News analysis found no age verification on websites associated with these ads, leaving sexually explicit content easily accessible to underage users.

The Alarming Statistics

A study conducted by the nonprofit Thorn found that 41% of teenagers had heard of "deepfake nudes," and that 10% personally knew someone victimized by such imagery. These figures underscore the urgent need to strengthen online safety measures.

Conclusion: Navigating a Complex Digital Landscape

As Meta and other tech giants grapple with the challenges posed by deepfake technologies, the intersection of AI, user safety, and consent remains a crucial concern. The ongoing dialogue around regulation, platform responsibility, and user education is vital in fostering a safer online environment for all.


For more insight into the implications of AI technologies for online safety, see reporting and research from CBS News and Thorn.
