Grammarly Halts AI-Generated Reviews Linked to Real Authors

Grammarly has halted a feature that attached AI-generated reviews to the names of real authors. The decision follows concerns about transparency and the ethics of attributing machine-produced content to actual individuals on the platform.
TL;DR
- Grammarly’s Expert Review used real experts’ names without consent.
- Outcry led Superhuman to suspend the controversial feature.
- Ethical and legal questions remain around AI and identity use.
An AI Feature Raises Alarm in the Writing Tools Arena
A bold experiment in AI-powered writing assistance has recently stirred deep controversy in the tech community. At its center is Superhuman, the company behind the popular writing assistant Grammarly, which introduced an audacious new feature called Expert Review. The tool promised users personalized feedback on their writing, ostensibly signed by renowned experts: famous scientists, bestselling authors, and celebrated tech bloggers.
The Problematic Use of Real Identities
Scrutiny quickly followed once it became clear how these "expert" endorsements were generated. Without prior warning or permission, the system automatically attached the names of prominent individuals, many of them still living, to user texts depending on the topic at hand. Whether drafting a scientific article or a fictional narrative, users might see a note beneath their work purporting to come from a well-known authority. Small print clarified that such attributions were "for informational purposes only" and implied no actual approval or affiliation, but the disclaimer failed to mollify critics.
Authors Push Back: A Digital Outcry
Outrage spread swiftly among living writers and public figures whose identities were appropriated in this manner. Some voiced frustration over what they saw as unauthorized digital impersonation, raising sharp concerns about consent in the era of generative AI. Several factors explain this backlash:
- The absence of direct permission from those named.
- Doubts over the legitimacy of using data from third-party LLMs.
- Ineffectiveness of opt-out options for deceased or inactive individuals.
In response to mounting criticism, Superhuman offered affected experts a way to remove themselves from the feature, a gesture many considered insufficient.
A Pause—and Lingering Uncertainty for AI Ethics
Ultimately, faced with sustained public pressure and an escalating media storm, CEO Shishir Mehrotra announced on LinkedIn the immediate suspension of Expert Review. He described the original intention as helping users access "influential perspectives" and giving experts new ways to engage with readers, but doubts linger about both the ethical and legal boundaries involved.
The episode leaves open pressing questions: How far can artificial intelligence go in simulating human expertise? And where should platforms draw the line between innovation and respect for individual identity? For now, at least, those answers remain elusive as the debate continues.