OpenAI Faces Legal Challenges Over Image Rights and Copyright

OpenAI is at the center of an ongoing dispute over image rights, as questions mount about how its technology uses and processes visual content, drawing scrutiny from legal experts, privacy advocates, and industry observers worldwide.
TL;DR
- Sora 2 allows AI videos of real people by default.
- Public backlash over misuse of celebrity images intensifies.
- OpenAI promises tighter controls amid legal, ethical concerns.
Industry Reaction: Celebrities Demand Respect for Image Rights
The controversy surrounding OpenAI’s latest release, Sora 2, has escalated rapidly since its launch. Prominent figures from the entertainment industry have voiced their disapproval, not least Bryan Cranston, who, with backing from the union SAG-AFTRA, publicly condemned the unauthorized use of his likeness by Sora 2 despite his explicit request to opt out. Expressing concern for fellow artists, Cranston urged all companies in the sector to honor individuals’ control over their own voices and images.
A Risky Launch Spurs Immediate Backlash
Weeks ago, OpenAI introduced Sora 2, an AI-driven video generator that controversially permitted users to depict real individuals unless they had explicitly objected. This “opt-out” framework quickly led to a deluge of problematic content, including misappropriated representations of high-profile personalities such as Martin Luther King Jr., John F. Kennedy, and Stephen Hawking. After inappropriate videos circulated, OpenAI issued apologies—most notably to King’s family—and took steps to block such content. Nonetheless, similar creations featuring public figures continued to surface elsewhere.
The Policy Divide: Sora 2 Versus Its Competitors
Unlike rivals such as Google’s Veo 3 (offered through Gemini), which enforce stricter preemptive controls against celebrity exploitation, Sora 2’s default settings place the burden on individuals to exclude themselves. The earlier version of Sora was comparatively more restrictive in this regard. Several factors may explain this decision:
- The ambition to stand out in a crowded generative AI market;
- A belief in user empowerment, albeit controversial;
- A desire to test regulatory boundaries amid rapid technological evolution.
However, mounting demands for stronger oversight and greater respect for image rights have forced OpenAI to reconsider its approach.
Toward Stricter Regulation Amid Ongoing Uncertainty
In response to criticism, CEO Sam Altman pledged revisions so that only people who grant clear consent will appear in generated content, though the specific safeguards remain vague. The company’s public endorsement of the proposed US “NO FAKES Act,” designed to shield voices and likenesses from unauthorized reproduction, further signals a shift toward more robust self-regulation.
Yet questions linger: Will emerging laws prove adequate? As scrutiny intensifies and ethical debates rage on both sides of the Atlantic, one thing is certain—public-facing AI technologies like Sora 2 are under more pressure than ever to balance innovation with responsibility.