What Happens to Your Digital Identity After Death? The Legal Void AI Is Exploiting
As AI systems train on the data, writing, and likenesses of the deceased without consent, the law offers almost no protection, and estates have few tools to fight back.
When it emerged in March 2026 that Grammarly was offering AI-generated manuscript reviews attributed to named academics – including historian David Abulafia, who died in January – the backlash was immediate. Scholars called it “digital necromancy.” But the controversy exposes a deeper problem: the legal frameworks governing identity, privacy, and intellectual property were designed for the living, and AI systems are now routinely exploiting the gap left by death.
The collision between AI training datasets and postmortem rights is accelerating. Companies scrape decades of published scholarship, social media posts, voice recordings, and photographic archives to build language models and synthetic personas. Most of this happens without asking estates, and in many cases, without legal consequences. The result is a growing class of disputes where families discover their loved ones’ work, voice, or face being monetised or simulated – often years after death – with no clear path to object.
Three Rights, Three Gaps
Digital identity after death sits at the intersection of three legal regimes, none of which was designed for AI-driven replication. Understanding the distinctions between them is essential.
Copyright protects the expression of ideas in fixed form – books, articles, photographs, recordings. According to the US Copyright Office, copyright lasts for the life of the author plus 70 years. This means published works remain under estate control for decades. But copyright does not protect facts, ideas, or style. An AI trained on a scholar’s entire body of work does not necessarily infringe copyright if it generates new text “in the style of” that scholar without copying verbatim passages. The Grammarly case illustrates the tension: the company likely did not reproduce copyrighted text, but it attached real names to algorithmic outputs derived from that text.
Personality rights (or “right of publicity”) govern the commercial use of name, voice, image, and likeness. These rights vary dramatically by jurisdiction. The American Bar Association notes that only about half of US states recognise postmortem publicity rights, and the duration ranges from 10 years in Tennessee (extendable) to 100 years in Indiana. California provides 70 years of protection, but only if the deceased had commercial value at death. New York’s postmortem right, which took effect in 2021, lasts 40 years and applies only to individuals domiciled in the state at death. States like Minnesota and Wisconsin provide no postmortem right at all – the right dies with the person.
The patchwork creates absurd outcomes. A celebrity domiciled in California at death enjoys decades of protection; the same person domiciled in New York gets 40 years, and in Minnesota, zero. AI companies can jurisdiction-shop, training models in states with weak protections and deploying them nationally.
Data protection law offers the least help. Ireland’s Data Protection Commission confirms that the EU’s General Data Protection Regulation explicitly excludes deceased persons. Recital 27 of the GDPR states: “This Regulation does not apply to the personal data of deceased persons.” Member states may create their own rules – France, Italy, and Bulgaria have done so – but most have not. In the US, there is no federal postmortem data protection law. HIPAA protects health information for 50 years after death, but that’s an outlier.
Privacy law in most jurisdictions is rooted in the idea that only living individuals can be harmed by privacy violations. Courts have consistently held that the dead cannot suffer distress, reputational damage, or loss of autonomy. This “no-privacy-rights-for-the-dead” doctrine dates back more than a century. As the Michigan Law Review documents, it has been repeated since the late 1800s, even as technology has made it increasingly untenable. The logic collapses when AI can simulate a deceased person’s voice, opinions, and personality indefinitely.
Where AI Finds the Data
AI training datasets draw from multiple sources, many of which contain information about the deceased. Biographical databases are a primary target. According to research from AI & Society, genealogy platforms like Ancestry and MyHeritage have digitised millions of obituaries, death records, and historical documents, using machine learning to extract names, relationships, and biographical details. These datasets are often publicly accessible or sold to third parties. One Wikipedia-derived biography dataset alone contains over 728,000 entries and is widely used in natural language processing research.
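How low the barrier to using this material is can be seen in a few lines of code. The sketch below is illustrative only, assuming the open-source Hugging Face `datasets` library and the publicly hosted `wiki_bio` corpus, one of several Wikipedia-derived biography collections used in language-model research:

```python
# Illustrative sketch: pulling a public biography corpus into a training
# pipeline. Assumes the Hugging Face `datasets` library and the `wiki_bio`
# dataset; exact field names vary by dataset version.
from datasets import load_dataset

bios = load_dataset("wiki_bio", split="train")
print(len(bios))           # hundreds of thousands of biography records
print(bios.column_names)   # structured infobox fields plus free-text biography
```

Nothing in that workflow asks whether the people described are alive, let alone whether their estates have consented.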
Social media platforms present a more complex case. Facebook, Instagram, and Twitter accounts persist after death unless memorialised or deleted. Platform terms of service vary: some allow designated legacy contacts to manage accounts; others do not. But none prevent the platform itself from using posthumous data for training. Meta has patented technology to continue posting on behalf of deceased users, according to reports from the OECD AI Incident Database. While the patented system has not been deployed, it signals the industry’s direction.
Academic publishers and institutional repositories are another vector. Decades of journal articles, conference papers, and dissertations are indexed and searchable. OpenAI, Anthropic, and Google have all acknowledged scraping academic databases to train large language models. Authors and their estates typically have no visibility into this process and no way to opt out retroactively.
“Without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputation.”
– Vanessa Heggie, Associate Professor, University of Birmingham
The Grammarly Case and Its Implications
Grammarly’s “Expert Review” tool allows users to select a named scholar from a list and receive AI-generated feedback modelled on that person’s research and writing style. The tool launched in 2025, but it drew little public attention until March 2026, when medieval historian Verena Krebs posted a screenshot showing David Abulafia – who died in January – as an available expert. As reported by The Chronicle of Higher Education, many living scholars listed in the tool were unaware of their inclusion and disputed the accuracy of the generated feedback.
The case is legally instructive. Grammarly likely did not commit copyright infringement in the traditional sense – it did not reproduce protected text. It may not have violated publicity rights in most states, since academic reputation is not always “commercial value” under the law, and many of the scholars are domiciled in states with no postmortem right. Data protection law does not apply to the deceased in most jurisdictions. The controversy hinges on ethical norms rather than enforceable legal rights: the idea that using someone’s name and professional identity without consent is wrong, even if technically lawful.
This is the regulatory gap AI companies are exploiting. Existing law protects specific assets – copyrighted works, commercial personas, health data – but not the broader concept of “digital identity.” A scholar’s characteristic argument style, a musician’s phrasing, a writer’s voice: these are not ownable under current frameworks, even though they are what AI systems replicate most effectively.
State Law Variations and Forum Shopping
The state-by-state patchwork of postmortem publicity rights creates perverse incentives. California, Indiana, and Nevada have broad statutes with long durations. Ohio requires domicile at death. Washington and Hawaii have extraterritorial reach, applying “regardless of domicile.” The American Bar Association notes that courts in Indiana and Washington have struck down attempts to apply these laws to non-domiciliaries, reasoning that allowing multiple states to claim jurisdiction would create chaos.
The result is that estates must determine: where was the deceased domiciled at death? Does that state recognise postmortem rights? If not, can another state’s law apply based on where the infringement occurred? These questions are expensive to litigate and often have uncertain answers. AI companies can incorporate in Delaware, train models in Oregon (no postmortem right), and deploy globally, insulating themselves from liability in states with stronger protections.
| State | Duration | Domicile Required? | Commercial Value Required? |
|---|---|---|---|
| California | 70 years | Yes | Yes |
| New York | 40 years | Yes | Yes (performers/personalities) |
| Indiana | 100 years | No (applies to acts within the state) | No |
| Tennessee | 10 years (renewable) | Yes | Yes |
| Minnesota | None | N/A | N/A |
| Wisconsin | None | N/A | N/A |
Digital Replicas and Synthetic Media
New York became the first state to explicitly address digital replicas of deceased performers in 2020. According to the International Documentary Association, the law requires consent from heirs before creating a “computer-generated, electronic performance” that is “so realistic that a reasonable observer would believe it is a performance by the individual.” The statute went into effect in May 2021 and applies only to individuals who died on or after that date and were domiciled in New York.
California followed in early 2026 with Senate Bill 8391, strengthening protections for deceased performers. The law now requires prior consent from heirs before using a deceased performer’s digital replica in audiovisual works, replacing the old framework that only required disclaimers. Platforms must remove unauthorised content upon notice. These laws are narrow – they cover performers, not academics, journalists, or ordinary individuals – and they do not address AI training, only final output.
The broader question of whether training an AI model on deceased individuals’ data constitutes “use” of their identity remains unresolved. California’s generative AI training-data transparency law (AB 2013), which took effect in January 2026, requires developers to publish summaries of the datasets used to train their systems, but it does not prohibit using data from the deceased. Transparency is not the same as consent.
Estate Management in the AI Era
Estate planning has traditionally focused on financial assets, property, and copyrighted works. Digital identity now requires equal attention. CipherWill and similar digital estate services recommend that individuals create explicit directives covering social media accounts, cloud storage, email archives, and the use of their likeness or writing style in AI systems. Some celebrities have taken aggressive steps: Robin Williams restricted the use of his image, voice, and likeness for 25 years after his death to prevent commercial exploitation.
But most individuals lack the resources or foresight to plan this comprehensively. Researchers have proposed, in work posted to arXiv, three principles for postmortem data management: the right to be forgotten (verifiable deletion of data and model influence after death), data inheritance and ownership (controlled transfer of data rights to heirs), and purpose limits (restricting use to transparent, consented purposes). These principles do not yet have force of law in any jurisdiction, but they provide a template for legislation.
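One way to make those principles concrete would be a machine-readable directive attached to an estate that platforms and model trainers check before use. The sketch below is purely hypothetical; the `PostmortemDirective` structure and every field name are assumptions, not part of any existing standard or of the cited proposal:

```python
# Hypothetical sketch of a machine-readable postmortem data directive
# reflecting the three proposed principles; no law or standard currently
# defines such a record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PostmortemDirective:
    subject: str                          # the deceased person
    executor: str                         # heir or estate contact
    request_deletion: bool = True         # right to be forgotten
    heirs: List[str] = field(default_factory=list)               # data inheritance
    permitted_purposes: List[str] = field(default_factory=list)  # purpose limits

directive = PostmortemDirective(
    subject="Jane Doe",
    executor="estate@example.org",
    heirs=["estate@example.org"],
    permitted_purposes=["family memorial"],  # AI training notably absent
)
```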
The challenge is technical as well as legal. Even if an estate demands deletion, removing a person’s influence from a trained AI model is not straightforward. Models encode patterns across billions of parameters; there is no “delete” button for a specific individual’s contribution. Machine unlearning – techniques to remove specific data influences from trained models – remains an active research area with limited practical deployment.
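One family of techniques researchers have explored is to push the model’s loss back up on a “forget set” tied to the individual, as in the toy sketch below. The model, data, and step count are hypothetical placeholders; the approach only approximates removal and can degrade the model, which is precisely why unlearning remains unsolved at scale.

```python
# Toy sketch of gradient-ascent unlearning in PyTorch: raise the loss on a
# "forget set" associated with one person. All values are placeholders;
# production models have billions of parameters and no cleanly separable
# per-person data.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                      # stand-in for a trained model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

forget_x = torch.randn(8, 16)                 # examples tied to the deceased
forget_y = torch.randint(0, 2, (8,))

for _ in range(10):                           # a few ascent steps
    optimizer.zero_grad()
    loss = loss_fn(model(forget_x), forget_y)
    (-loss).backward()                        # ascend: maximise loss on forget set
    optimizer.step()
```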
- Inventory all digital assets: social media accounts, email, cloud storage, published works, recorded media
- Designate legacy contacts on platforms that support them (Facebook, Google, Apple)
- Include explicit language in wills prohibiting AI training on personal data or commercial use of identity
- Register postmortem publicity rights where applicable (New York requires registration with the Secretary of State)
- Document consent preferences for AI replication, memorial chatbots, and biographical use
The “Digital Afterlife Industry” and Griefbots
A parallel issue is the commercialisation of deceased individuals through AI-driven memorial services. According to The Conversation, the “digital afterlife industry” offers services that create AI chatbots trained on a deceased person’s messages, emails, and social media posts. These “griefbots” or “deathbots” are marketed to the bereaved as tools for grief processing. Services like Replika and Character.AI allow users to build conversational agents modelled on deceased loved ones.
The ethical and psychological risks are significant. Research indicates that dependence on AI replicas may interfere with healthy grief processing. There is also the risk of “drift” – as probabilistic models evolve, the chatbot may lose resemblance to the original person, generating responses the deceased would never have endorsed. Australian law, for example, does not recognise proprietary rights in personal identity, voice, or likeness, leaving families with little recourse if a griefbot misrepresents their loved one.
Copyright law does not solve this either. While the original training data (emails, photos) may be copyrighted, purely AI-generated output is not. If a chatbot generates a new sentence “in the voice of” the deceased, that output is not a protected work, and the estate cannot use copyright to control it.
Proposals and Emerging Legislation
Several jurisdictions are testing new approaches. France’s 2016 Digital Republic Law allows individuals to issue directives about the use of their personal data after death, including deletion or transfer to designated parties. The law gives the deceased a form of “preventive narration” – the ability to script how their data will be handled posthumously. Italy’s Data Protection Code allows any interested party, such as a family member, to exercise data subject rights on behalf of the deceased, though this does not require adherence to the deceased’s wishes.
In the US, federal action remains absent. Some advocates have called for a national postmortem publicity right to replace the state patchwork, but such legislation faces First Amendment concerns. As the American Bar Association notes, any federal right would need to balance estate interests against free expression, news reporting, and artistic use – a balance that has proven difficult to strike.
California’s recent wave of AI legislation offers a glimpse of possible future direction. The state now requires AI transparency, restricts synthetic performer use in advertising, and strengthens postmortem performer rights. But these laws are reactive, addressing symptoms rather than the underlying issue: AI systems treat human identity as training data by default, and the law has no concept of identity as a protectable interest distinct from copyright or publicity.
The Collision Ahead
The gap between technological capability and legal protection is widening. AI companies can now replicate voice, writing style, and even reasoning patterns from relatively small datasets. The cost of creating a synthetic version of a deceased scholar, musician, or public figure is falling rapidly. Meanwhile, legal remedies remain expensive, uncertain, and geographically fragmented.
The question is not whether regulation will come, but what form it will take. A property-rights model – treating digital identity as an inheritable asset – would align with existing estate law but risks commodifying human identity. A consent-based model – requiring explicit permission for any commercial use – would protect individual autonomy but may conflict with free expression and research. A deletion-rights model – allowing estates to demand removal from training datasets – would empower families but raises technical challenges around enforcement.
For now, the absence of clear rules leaves three groups navigating uncertainty: AI companies unsure which uses will later be deemed unlawful, estates lacking tools to enforce rights they may not legally possess, and individuals whose digital afterlives are being shaped by algorithms rather than intent.
Related Coverage
This explainer provides the foundational context for understanding postmortem digital identity rights. For coverage of the current Grammarly controversy and related developments, see:
- Grammarly Faces Backlash for AI Feature Using Dead Academics Without Consent
- California Judge Denies Musk’s Bid to Block AI Training Data Disclosure Law
- Anonymous Credentials Emerge as Privacy Solution Amid KYC Data Breach Crisis
- The $50 Billion Race to Solve AI’s Storage Problem
For broader context on data privacy and AI governance, see our ongoing coverage of data-sharing investigations and digital identity verification challenges.