
Facebook Eases Censorship, Risky Content Sees Uptick

Tech
By 24matins.uk, published 2 June 2025 at 21:25, updated 2 June 2025 at 21:25.

Facebook has eased its moderation policies, leading to a noticeable rise in the circulation of potentially harmful content on its platform. This change has sparked concerns among experts and users about the possible impact on online safety and misinformation.

Tl;dr

  • Meta eases moderation, but sensitive content rises.
  • Content removals drop; youth safety remains priority.
  • AI tools expand, fact-checking model shifts in U.S.

Shifting Moderation Strategies at Facebook

Following a significant policy shift led by Mark Zuckerberg, Meta has adopted a more relaxed stance on hate speech and related content across its platforms. While the company highlights a notable decrease in rule enforcement errors, this change comes amid increasing concerns over sensitive material surfacing on Facebook. The most recent quarterly report offers a closer look at the tangible effects—and underlying complexities—of these new moderation guidelines.

Rising Trends in Sensitive Content

Interestingly, despite Meta’s optimism about its revamped approach, data from the first quarter of 2025 reveal that the share of violent or graphic posts on Facebook has crept upward. Where such content previously accounted for about 0.06% to 0.07% of all posts at the end of 2024, it now stands at 0.09%. Similarly, instances of harassment and intimidation have seen a slight uptick to between 0.07% and 0.08%. Though these percentages may appear marginal against the backdrop of billions of daily interactions, their steady rise is far from trivial. Notably, policy rewrites have also loosened restrictions on certain comments targeting migrants and LGBTQ individuals—a move that has not gone unnoticed by critics.

Reduced Removals, Heightened Youth Safeguards

Amid these adjustments, overall content removals have fallen sharply: only 3.4 million pieces were "actioned for hate speech", Meta's lowest figure since 2018. The pattern extends to other categories as well:

  • Spam: 366 million deletions, down from 730 million previously.
  • Fake accounts: down from 1.4 billion to around 1 billion.

Meta attributes this reduction to an emphasis on targeting only the gravest violations, such as child exploitation and terrorism, in order to minimize wrongful takedowns. However, the company maintains that protecting minors remains central: "we remain committed to ensuring young users have the safest possible experience". Newly implemented "teen" account settings are designed to proactively screen out harmful content for adolescents.

The Rise of AI and New Verification Approaches

Another noteworthy development is the integration of advanced large language models (LLMs), which Meta claims now outperform humans in several moderation tasks. The company is also phasing out U.S.-based partnerships with traditional fact-checkers in favor of a participatory system reminiscent of "Community Notes". Whether this experiment will prove effective remains to be seen; Meta promises forthcoming public evaluations.

Ultimately, while Meta touts increased efficiency and fewer mistakes with its lighter-touch approach and technological upgrades, lingering questions persist about the platform’s ability to manage the growing tide of sensitive content on Facebook.
