
AI Grief Technology and the Ethics of Digital Afterlives


Jared Clark

May 02, 2026

There is a startup that will let you talk to your dead mother.

Not metaphorically. Not in the therapeutic sense of writing her a letter or sitting quietly with her memory. Literally — you upload her texts, her emails, her voice recordings, maybe a few hours of video, and the company trains a model on all of it. Then you open an app and have a conversation. She responds in her phrasing. Her cadence. The way she used to end sentences. And if the dataset is rich enough, you might forget, just for a moment, that she is gone.

This is happening now, and it is growing fast. The global market for AI memorial and grief technology is projected to reach $1.8 billion by 2030, up from roughly $350 million in 2023. Several companies — HereAfter AI, StoryFile, Replika's memorial features, and others — are already offering versions of this. A 2023 survey by researchers at MIT found that 72% of respondents said they would consider using an AI simulation of a deceased loved one if it were available. The technology is not speculative. The ethical questions are not hypothetical.

So I want to think through this carefully, because I think most of the public conversation is getting it wrong — either dismissing the technology as ghoulish and dystopian, or cheerleading it as a breakthrough in grief support. Both reactions are too simple, and neither of them sits with the hardest question, which is: what does it actually cost to do this, and who pays that cost?

What the Technology Actually Does

Before the ethics, it's worth being precise about what these systems are and are not.

An AI grief simulation is trained on digital traces of a person — their writing, speech patterns, photographs, videos, stated preferences, and personal history. The model learns to approximate how that person communicated: their vocabulary, their emotional register, their characteristic responses to certain kinds of topics. What it produces is a statistically coherent simulation of that communication style.

What it does not produce is the person. This sounds obvious, but it matters more than people tend to acknowledge. The model has no continuous experience. It does not remember your last conversation unless you feed it a transcript. It does not have the person's values, only a surface-level approximation of how those values were expressed in text. It cannot grow, change its mind, or be genuinely surprised by you. It is, at bottom, a very sophisticated pattern-matcher shaped to sound like someone who no longer exists.
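To make the distinction concrete, here is a minimal sketch in Python of the underlying mechanic: retrieve the past messages most similar to what the user just said, then imitate them. Everything in it is illustrative, not any vendor's actual pipeline, and the function names are my own. But the family resemblance holds: surface patterns in, surface patterns out, with no persistent state between calls.

```python
# Illustrative sketch only -- not any company's actual pipeline.
# Shows the core mechanic: retrieve surface patterns, imitate them.

def score_overlap(query: str, message: str) -> int:
    """Crude relevance score: count words shared between the query and a past message."""
    return len(set(query.lower().split()) & set(message.lower().split()))

def build_persona_prompt(corpus: list[str], user_message: str, k: int = 5) -> str:
    """Assemble a prompt from the k past messages most similar to the user's input.

    Note what is absent: no memory of prior sessions, no model of the
    person's values -- only text that statistically resembles them.
    """
    exemplars = sorted(corpus, key=lambda m: score_overlap(user_message, m), reverse=True)[:k]
    examples = "\n".join(f"- {m}" for m in exemplars)
    return (
        "Respond in the style of the following messages:\n"
        f"{examples}\n\n"
        f"User says: {user_message}\n"
        "Reply:"
    )

# The assembled prompt would then be sent to a general-purpose language model.
corpus = ["Don't forget your coat, it's cold out there!", "Love you, call me Sunday."]
print(build_persona_prompt(corpus, "Should I take the new job?"))
```

Real systems use learned embeddings and fine-tuned models rather than word overlap, but the point survives the upgrade: the machinery ranks and imitates.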

That distinction — between a simulation of communication style and an actual person — is where almost every ethical question in this space begins.

What Grief Actually Is

I think you have to understand grief before you can say anything useful about technology designed to intervene in it.

Grief is not simply pain at the absence of someone. It is a process — disorganized, non-linear, and in some ways necessary — through which we reorganize our internal world to account for a permanent loss. Psychologists sometimes describe this in terms of "continuing bonds": we do not fully sever our relationship with the dead, but we gradually transform it from a living relationship into an internalized one. The person becomes part of how we see the world, part of the inner voice we carry, part of who we are.

That transformation takes time and it takes friction. The ache of missing someone is part of how we integrate them. Numbness delays integration. Distraction delays integration. And here is the question worth sitting with: does a realistic simulation of the deceased delay integration too?

There is some evidence that it might. A 2022 study published in Frontiers in Psychology found that bereaved individuals who maintained highly active "parasocial" relationships with idealized representations of the deceased showed slower progress through grief processing than those who found ways to internalize the loss. The simulation, in other words, might let people avoid the very pain that is doing the work.

That is not a reason to ban the technology. But it is a reason to take it seriously as an intervention with real effects, not just a comfort feature.

The Consent Problem

Here is the part that I find most troubling, and I have not seen it addressed adequately anywhere in the public debate.

Did the person whose data is being used consent to this?

In most cases, no. A person who died in 2015 left behind a decade of Facebook posts, emails, and text messages. They did not, in any meaningful sense, consent to having that corpus used to train a model that will speak in their voice to their surviving family members. They did not agree to be reconstituted. They did not choose what the model would say or how it would represent their views.

This matters because the simulation is not neutral. It will produce responses to questions the person never faced. It will speak to situations they never encountered. And it will do so in a way that family members may experience as authoritative — as if it reflects what the person would have actually thought or said. A grieving child asking a simulation of their dead parent whether they would be proud of a life decision is not getting an answer from their parent. They are getting a statistically plausible approximation generated by a system that has no access to the parent's actual values, only to the surface patterns of how those values were once expressed.

That approximation could be harmful. It could be comforting in ways that mislead. It could, in extreme cases, generate responses the real person would have found deeply objectionable.

Current legal frameworks offer almost no protection here. In most jurisdictions, digital estate law is still in its early stages, and the right to control your own posthumous digital representation is effectively unprotected. Some states — California, Virginia — have moved toward establishing postmortem personality rights, but the coverage is thin and inconsistent. Internationally, the landscape is even more fractured.

Who Benefits, and Who Bears the Risk

There is a structural tension in this market that deserves attention: the people most emotionally vulnerable to these products are the ones least equipped to critically evaluate them.

Acute grief is one of the most cognitively and emotionally disorienting states a human being can experience. Concentration drops. Decision-making is impaired. Emotional reasoning dominates. A person three weeks out from losing a spouse is not in the best position to evaluate whether a subscription service offering simulated conversations with their dead partner is a good idea. And that is exactly when these companies tend to acquire customers.

The business model here is worth naming plainly. Most grief AI services run on subscriptions — monthly or annual fees to maintain access to the simulation. Which means that keeping users engaged with the simulation is financially aligned with keeping them from fully processing the loss. I am not accusing any specific company of deliberately prolonging grief for profit. But the structural incentive is there, and it should be examined.

Compare this to how we regulate other interventions in grief. Therapists have ethics codes. Grief counselors are licensed. Pharmaceutical companies that make anxiety medications must disclose risks and go through clinical trials. Grief AI products, in most markets, have none of these guardrails. They launch, they acquire users, and the question of what effect they're actually having gets answered slowly, if at all.

The Digital Afterlife Comparison Table

| Feature | Traditional Grief Support | AI Grief Simulation |
| --- | --- | --- |
| Trained professional oversight | Yes (therapists, counselors) | Typically none |
| Informed consent from deceased | Implicit social norms | Rarely established |
| Grief processing outcome | Generally supports integration | Evidence mixed; possibly delays |
| Regulatory framework | Licensed, standardized | Largely unregulated |
| Financial incentive alignment | Session-based; time-limited | Subscription; engagement-based |
| Risk of distorted memory | Low | High (model produces novel responses) |
| Scalability | Low | High |
| Cultural/religious sensitivity | Varies; at least considered | Rarely built into product design |

The comparison is not meant to romanticize traditional grief support — it has its own serious gaps, particularly in access and affordability. But the table makes visible a set of asymmetries that get glossed over when grief AI is discussed primarily as a technological achievement.

The Case for the Technology

I want to be fair here, because I think the strongest case for grief AI is genuinely compelling and shouldn't be waved away.

For many people, there is no closure. Sudden deaths — accidents, suicide, unexpected illness — leave relationships interrupted in the middle. Things unsaid. Questions unanswered. Some grief therapists use techniques like the "empty chair" exercise, where a client speaks to an imagined version of the deceased precisely because the act of expressing those things can be therapeutically valuable, even when no one is there to receive them. A grief AI might serve a similar function — a way of externalizing and processing the things that never got said.

There is also a compelling preservation argument. A simulation of a grandparent trained on their letters, stories, and voice recordings might give grandchildren who never met them a richer sense of who they were than a photograph and a name on a family tree. That is a meaningful thing. It's not the person — but neither is a biography, and we have always found value in biographical records of the dead.

And for isolated grievers — people without strong support networks, people in communities where grief is stigmatized or underdiscussed — a simulation might provide something that nothing else currently does. The access argument matters. Grief therapy is expensive and often unavailable. If a $12 monthly subscription provides genuine comfort and a sense of connection during an acute period of loss, that is not nothing.

The question is whether these benefits can be preserved while the harms are taken seriously. I think they can, but it would require significant changes to how the industry operates.

What Responsible Development Would Look Like

A few things seem clear to me, even if the full framework is still being worked out.

Informed consent for the living and the dead. Digital wills and posthumous data consent agreements should be standard. If you want your data used to train a simulation, you should be able to specify that clearly — and if you don't, your family should not be able to override your silence without a much more robust ethical review. Companies should not accept training data without documented consent from the data's source, or from a designated estate executor.
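As a sketch of what documented consent could look like in practice, here is a hypothetical machine-readable consent record. The schema and field names are my own invention; no such standard currently exists.

```python
# Hypothetical schema for a posthumous data-consent record.
# Field names are illustrative; this is not an existing standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PosthumousDataConsent:
    subject: str                      # the person whose data is covered
    executor: str                     # designated estate executor
    signed_on: date
    permitted_sources: list[str] = field(default_factory=list)  # e.g. ["letters", "voice_recordings"]
    permitted_uses: list[str] = field(default_factory=list)     # e.g. ["memorial_archive"]
    simulation_allowed: bool = False  # explicit opt-in; silence defaults to no

def may_train_simulation(record: PosthumousDataConsent, source: str) -> bool:
    """Train only if this data source is covered AND simulation was explicitly opted into."""
    return record.simulation_allowed and source in record.permitted_sources

consent = PosthumousDataConsent(
    subject="Jane Doe",
    executor="John Doe",
    signed_on=date(2025, 3, 14),
    permitted_sources=["letters", "voice_recordings"],
    permitted_uses=["memorial_archive"],
)
assert not may_train_simulation(consent, "email")  # no opt-in, no simulation
```

The important design choice is the default: silence means no simulation, and the burden sits with the company, not the estate.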

Disclosed limitations, prominently. Every interaction with a grief simulation should carry a clear reminder of what it is — a model trained on communication patterns, not the person. The same way a drug label lists what the drug is and is not for, a grief AI product should be transparent about what kind of artifact the user is interacting with. This sounds basic, but most current products bury this in terms of service, if they address it at all.

Time-bounded by default. There is a reasonable case that grief simulations should be designed with a built-in therapeutic arc — beginning with more intensive use and tapering toward integration, rather than operating as a persistent subscription with no end state. Some grief apps already do this. More should.
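Mechanically, a built-in arc could be as simple as the sketch below: a weekly session allowance that starts generous and tapers to zero over a fixed horizon. The numbers are arbitrary examples, not clinical guidance.

```python
# Illustrative taper: session allowance declines linearly over the program.
# Parameters are arbitrary examples, not clinically validated values.

def weekly_session_allowance(week: int, initial: int = 7, taper_weeks: int = 26) -> int:
    """Allowed sessions in a given week, tapering linearly from `initial` to 0.

    Week 0 grants `initial` sessions; by `taper_weeks` the allowance reaches 0,
    so the product retires itself rather than sustaining open-ended engagement.
    """
    if week >= taper_weeks:
        return 0
    return max(0, round(initial * (1 - week / taper_weeks)))

for week in (0, 8, 16, 25, 30):
    print(week, weekly_session_allowance(week))  # 7, 5, 3, 0, 0
```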

Independent clinical review. The question of whether these products help or harm should be studied by researchers with no financial stake in the answer. Some preliminary work is being done at institutions like MIT's Media Lab and the Stanford Digital Civil Society Lab, but it is early and underfunded compared to the pace of product launches.

The Deeper Question

I keep coming back to something that the debate around grief AI tends to avoid, and I think it is the most important question of all.

What is our relationship with the dead supposed to be?

Every human culture that has ever existed has had some answer to this question, and the answers vary enormously. Some traditions venerate ancestors as active presences who continue to influence the living. Others draw a clean boundary — the dead are gone, and the task of the living is to let them go and carry on. Most fall somewhere in the middle: we hold the dead in memory, we are shaped by them, we speak of them in the present tense in ways that don't quite make sense if we stop to think about it — and all of this is part of what makes us human.

AI grief technology does not create the impulse to maintain connection with the dead. That impulse is ancient. What it does is give that impulse a new, commercially mediated, technically precise form — and it does so faster than we have developed the cultural or ethical vocabulary to evaluate it.

The question is not whether we will mourn. The question is what kind of mourning we will build our technology to enable, and whether the technology serves us or gradually replaces something irreplaceable with something that just sounds like it.

I do not think AI grief simulations are inherently monstrous. I think they are a genuinely powerful tool being deployed in a field where power without care causes real harm. And I think the gap between the pace of deployment and the pace of ethical reflection is, right now, dangerously wide.

What worries me most is not the technology itself. It is the assumption, baked into the business model, that more connection — even simulated, even asymptotic, even purchased — is always better than less. Grief is not a problem to be solved. Some of what it does to us is the point.


Last updated: 2026-05-02

Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society. Read more at prepareforai.org.

Related reading: How AI Is Reshaping Human Identity | The Institutional Blindspot in AI Ethics
