Grammarly Faces Backlash for AI Feature Using Dead Academics Without Consent
The writing platform's Expert Review tool simulates feedback from real scholars, including some who recently died, igniting debate over digital identity rights and posthumous data ethics.
Grammarly is under fire from the academic community after users discovered its Expert Review feature generates AI-powered writing feedback under the names of real scholars—including historians who died less than two months ago.
The controversy erupted on March 1 when Cybernews reported that German historian Verena Krebs had flagged the feature after it offered David Abulafia as an available reviewer; the British historian, NewsGhana notes, died in January 2026. Grammarly launched the feature in August 2025 as part of its parent company Superhuman’s AI productivity suite. The tool analyzes manuscripts and generates editorial suggestions framed as coming from specific academics, journalists, and subject-matter experts, drawing on their published work and public profiles to simulate their critical voice.
“Without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputations. Obscene.”
— Vanessa Heggie, Associate Professor, University of Birmingham
How the Feature Works
According to Futurism, Expert Review operates through Grammarly’s browser extension. Users select an expert from a list, and the AI generates feedback based on that scholar’s field or published work. The system can even automatically rewrite passages based on its suggestions. A Superhuman spokesperson told Decrypt that the tool “leverages our underlying LLM to surface expert content that can help the document’s author shape their work,” with suggested experts depending on the writing’s subject matter.
The technical implementation appears to use persona prompting—a technique that feeds AI models detailed character profiles rather than training separate models for each expert. According to Cybernews, this means Grammarly provides its AI with descriptions of experts’ characteristics and published work to generate feedback in their style. The company’s support page includes a disclaimer stating that “references to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.”
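To make the persona-prompting technique concrete, the sketch below shows the general pattern: a profile of an expert is injected into the prompt given to a single general-purpose model, rather than a separate model being trained per expert. Everything here is illustrative; the function name, profile fields, and placeholder persona are assumptions, not Grammarly's actual implementation.

```python
# Minimal sketch of persona prompting. A general LLM is steered toward an
# expert's voice purely via the prompt; no per-expert model is trained.
# All names and fields are hypothetical, not Grammarly's real code.

def build_persona_prompt(expert: dict, manuscript: str) -> str:
    """Assemble a prompt asking a general-purpose LLM to critique a
    manuscript in the editorial voice described by `expert`."""
    system = (
        f"You are emulating the editorial voice of {expert['name']}, "
        f"a {expert['role']} specializing in {expert['field']}. "
        f"Known stylistic traits: {', '.join(expert['traits'])}. "
        "Give concise, specific feedback on the text below."
    )
    return f"{system}\n\n---\n{manuscript}"

profile = {
    "name": "A. Historian",  # placeholder persona, not a real person
    "role": "professor of history",
    "field": "medieval Mediterranean trade",
    "traits": ["favors primary sources", "flags anachronisms"],
}

prompt = build_persona_prompt(
    profile, "Venice dominated Atlantic trade in 1200."
)
# The assembled prompt would then be sent to any chat-completion API;
# the model, not the prompt builder, produces the styled feedback.
```

The design point is that the "expert" exists only as text in the prompt, which is why no consent step is technically required to create one, and why the disclaimer on Grammarly's support page is the only gate.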
Academic Outrage and Ethical Concerns
The response from academics has been swift and scathing. Technology.org reports that Claire E. Aubin, a historian, called it “among the most cursed” developments she’s witnessed in academia. Kathleen Alves, an associate professor at CUNY, described it as “literally digital necromancy” in posts on Bluesky that rapidly went viral across academic social networks.
The ethical breach extends beyond the deceased. Living academics featured in the system—including journalists from The Verge, The New York Times, and The Atlantic—report they were never contacted for permission. Nilay Patel, editor-in-chief of The Verge, wrote on Bluesky that he was “almost more offended by the suggestion that I would give this shitbox edit than having my identity stolen.” He noted there’s “literally no possible way to know what an editor is like as an editor by reading published written work by that person, which often goes through… other editors.”
Grammarly isn’t alone in appropriating identity for AI. Meta launched celebrity-based chatbots in 2023 featuring Snoop Dogg and Tom Brady, while Khan Academy’s Khanmigo allows students to role-play with historical figures like Winston Churchill. The difference: those involved living public figures who could consent, or long-dead historical figures with no surviving parties to object. Grammarly’s approach occupies an ethically ambiguous middle ground, using recently deceased individuals whose estates and families may have claims.
The Legal and Regulatory Vacuum
The controversy exposes a critical gap in data protection law. According to research posted on arXiv, current privacy regulations including GDPR and CCPA protect living individuals but largely exclude the deceased. The EU AI Act’s definition of deepfakes applies to content “that resembles existing persons”, which technically doesn’t cover the deceased.
In most jurisdictions, MDPI notes, data controllers can process deceased persons’ data for AI training without data protection guarantees. Only 12 EU member states have introduced national legislation protecting deceased individuals’ personal data. In the United States, certain states like California and Indiana extend “right of publicity” protections up to 100 years after death—but these laws were designed for commercial contexts like licensing celebrity images, not for AI-generated educational or expressive content.
- GDPR and CCPA explicitly protect only living individuals, leaving post-mortem data unregulated
- EU AI Act definitions of deepfakes and synthetic media don’t clearly cover deceased individuals
- Right of publicity laws in select US states offer limited protection focused on commercial exploitation
- No comprehensive framework exists for “digital wills” governing posthumous use of voice, likeness, and writings
- Platform policies vary wildly—companies set their own standards for handling deceased users’ data
Broader Implications for the AI Industry
Expert Review is not Grammarly’s only persona-based feature. According to Futurism, the platform also offers an “AI grader agent” that predicts how specific professors might evaluate student work by scraping “publicly available instructor information.” This creates a parallel concern: the appropriation of living educators’ assessment styles without consent, potentially undermining their pedagogical authority.
The incident highlights what Technology.org describes as a fundamental tension: large language models routinely train on massive datasets without asking authors first, but attaching real names to algorithmic output crosses a different ethical line. It’s one thing to absorb published work into a training set; it’s another to present machine-generated advice as personalized expertise from a named individual who never consented—or who can no longer object.
Research from Japan cited in AI & Society found that only about 20% of respondents would allow commercial use of their personal data after death even if compensated during their lifetime, implying that the large majority want control over their digital afterlife. Yet no consent mechanisms of this kind exist in current platform architectures.
What to Watch
Grammarly’s CEO is scheduled to appear on The Verge’s Decoder podcast, where the company will likely face direct questions about consent mechanisms and whether it will offer opt-out systems for living scholars or takedown processes for estates of the deceased. At the time of reporting, the company has not issued a public statement beyond its support page disclaimer.
The broader regulatory response will determine whether this becomes an isolated controversy or a catalyst for change. The European Commission is currently drafting implementation guidelines for the EU AI Act; expect pressure to clarify whether posthumous identity appropriation constitutes a prohibited practice. In the United States, state legislatures in California, Virginia, and Massachusetts are considering AI bills of rights that could establish baseline protections for digital identity, including post-mortem provisions.
Universities are scrambling to update AI policies. Most institutions currently permit AI assistance for grammar and editing while prohibiting full content generation, but Expert Review blurs that line by offering substantive editorial feedback under the authority of named experts. Academic integrity offices will need to decide whether citing AI-generated feedback “in the style of” a scholar constitutes proper attribution or academic misconduct.
For AI companies, the message is clear: shipping features faster than policy can adapt creates reputational and legal risk. The “move fast and break things” ethos breaks down when what you’re breaking is the boundary between a person’s life work and a software product’s raw material.