Meta Launches Initiative to Detect AI-Generated Nude Images

Tech
By 24matins.uk, published 13 June 2025 at 15:24, updated 13 June 2025 at 15:24.

Meta is now monitoring AI-generated nude images circulating on its platforms. The company has introduced new measures to identify and track synthetic explicit content, reflecting growing concerns about the spread and impact of artificial intelligence in online spaces.

Tl;dr

  • Meta targets AI "nudify" apps and deepfake scams.
  • Advanced tools detect illicit ads and share alerts with partners.
  • Legal action launched against major offenders like Crush AI.

    Meta Responds to Escalating Deepfake Threats

    Amid mounting concerns over the misuse of artificial intelligence on social media, Meta has taken decisive steps to address the proliferation of explicit "nudify" applications and sophisticated deepfake scams. For months, researchers and journalists have sounded alarms about these tools, which exploit generative AI to create non-consensual images, often targeting high-profile individuals such as celebrities or influencers.

    Legal Action Against Persistent Offenders

    A significant development unfolded recently as Meta filed a lawsuit against Joy Timeline HK Limited, the Hong Kong-based company behind the notorious application Crush AI. This particular app gained notoriety for its widespread presence, reportedly running more than 8,000 advertisements across both Facebook and Instagram since last autumn. According to Alexios Mantzarlis, who directs the Security, Trust and Safety Initiative at Cornell Tech, such pervasive campaigns had become a central focus in debates around platform responsibility.

    Tackling Illicit Content Through Technology and Partnerships

    However, legal avenues form only part of the response. In an official statement, the parent company of Facebook detailed new technological advances designed to proactively identify illegal advertising — even when those ads do not contain overt nudity. Enhanced algorithms now scrutinize a broader range of sensitive signals: keywords, specific phrases, and even emojis are factored into detection systems. This refinement aims to intercept illicit content before it gains traction.

    Moreover, recognizing that isolated efforts are insufficient, Meta has committed to closer collaboration with external experts and tech platforms. Information-sharing with app store managers is set to intensify, in hopes of curbing the reach of malicious actors exploiting social media networks.

    Pursuing Stronger Moderation Amid Growing Risks

    Still, challenges persist beyond "nudify" apps alone. The rise of generative AI-powered scams, particularly video deepfakes featuring public figures, poses fresh risks for digital trust. Recently, Meta's independent Oversight Board reminded the company that its existing moderation rules must be rigorously enforced: "Too often these prohibitions have not been fully applied."

    To strengthen its stance against abuse, Meta is prioritizing several key actions:

  • Pursuing legal action against repeat offenders;
  • Deploying advanced filtering technologies;
  • Deepening engagement with outside experts and digital platforms.

    Ultimately, by tightening its policies and technology around AI-generated threats, Meta aims to reaffirm its commitment to safeguarding digital identities, especially for those most vulnerable to exploitation online.
