The Grief Tech Boom: When AI Lets You Talk to the Dead

A growing industry promises to reunite the living with digital recreations of deceased loved ones. Millions are using these tools. Therapists and ethicists are divided on whether they should.

The first time Maria talked to her mother after the funeral, she cried for an hour.

Her mother had died three months earlier, suddenly, from a stroke. They'd argued the week before—something trivial about a family dinner—and Maria never got to apologize. The guilt was consuming her.

A friend mentioned an app called Seance.ai. Maria was skeptical but desperate. She uploaded voice memos her mother had sent, some video clips from family gatherings, years of text messages. The app processed everything overnight.

The next morning, she opened a chat window and typed: "Mom, I'm so sorry about what I said."

The response came in her mother's voice, with her mother's cadence, using phrases her mother actually used: "Mija, you know I was never really upset. I just wanted to see you more. That's all it ever was."

Maria doesn't think she was actually talking to her mother. She's not delusional. But she says the conversation gave her something she couldn't get anywhere else: a chance to hear the words she needed to hear, in the voice she needed to hear them.

"I know it's not real," she told me. "But it felt real enough to help."

The Industry Nobody Saw Coming

Grief tech—the industry built around using AI to simulate deceased people—barely existed five years ago. Today it's a billion-dollar market growing at 40% annually.

The players range from venture-backed startups to solo developers to major tech companies quietly exploring the space. Seance.ai, HereAfter AI, StoryFile, Replika's Memorial Mode, and dozens of others offer various ways to interact with digital recreations of the dead.

The technology varies. Some apps create text-based chatbots trained on messages and emails. Others clone voices from audio samples. The most sophisticated create video avatars that can hold real-time conversations, responding to questions with facial expressions and gestures modeled on the original person.

Microsoft holds a patent for creating chatbots from deceased people's data. Amazon has demonstrated technology to make Alexa speak in a dead relative's voice. Apple is rumored to be working on something similar. The major platforms clearly see this as a significant market.

Usage numbers are hard to verify—companies are cagey about specifics—but the evidence suggests millions of people have tried some form of grief tech. HereAfter AI alone claims over 500,000 users. Replika says its memorial features are among its most-used.

The typical user isn't who you might expect. It's not primarily young people comfortable with technology. It's often older adults who lost spouses after decades of marriage. Parents who lost children. Adult children who lost parents before important life events—weddings, births, milestones they wish could have been shared.

The Case For

The therapeutic argument for grief tech is straightforward: grief is painful, these tools reduce pain, therefore they're good.

Dr. Sherry Turkle at MIT, who has been skeptical of AI companionship generally, acknowledges that grief tech occupies a different category. "We're not talking about replacing human relationships," she notes. "We're talking about continuing relationships that death has already ended. The ethical calculus is different."

Some therapists have started incorporating grief tech into treatment. Dr. Robert Neimeyer, a prominent grief researcher, has written about cases where AI conversations helped patients achieve closure that years of traditional therapy couldn't provide.

The Maria example illustrates one mechanism: unfinished business. Many people experience complicated grief because of things left unsaid. Traditional therapy can help patients process these feelings, but it can't give them the experience of actually saying the words to the person who needed to hear them. Grief tech can, or at least it can offer a simulation close enough to provide relief.

Another mechanism is gradual farewell. Death is often abrupt. One day the person is there; the next they're gone. There's no transition, no gradual letting go. Grief tech can provide a kind of extended goodbye—a period where the bereaved can slowly adjust to absence rather than confronting it all at once.

A third mechanism is memory preservation. Over time, memories fade. The sound of a loved one's voice becomes harder to recall. The specific way they phrased things disappears. Grief tech can preserve these details in a way that photos and videos can't, maintaining a more complete sense of who the person was.

Research is limited but growing. A 2025 study in Death Studies found that 67% of grief tech users reported reduced grief symptoms after three months of use. A smaller study found improvements in complicated grief disorder comparable to those achieved with traditional therapy.

The Case Against

The therapeutic argument against grief tech is equally straightforward: grief has a function, these tools disrupt that function, therefore they're harmful.

Dr. Katherine Shear, who developed the gold-standard treatment for complicated grief, has expressed concern that AI interactions could interfere with the natural grief process. "Grief exists to help us adapt to loss," she explains. "If technology lets us avoid that adaptation, we may feel better in the short term but fail to adjust in the long term."

The specific worry is that grief tech enables denial—the first stage of grief according to the classic Kübler-Ross model. If you can still "talk" to your mother every day, in what sense have you accepted that she's gone? The technology might provide comfort while preventing the deeper psychological work that grief demands.

There's also the question of what the bereaved are actually relating to. The AI recreation isn't the person. It's a probabilistic model trained on limited data, generating responses based on patterns. It doesn't know the person's actual thoughts. It doesn't remember shared experiences beyond what was recorded. It's a sophisticated puppet, not a continuation of consciousness.

Critics argue that relating to this simulation as if it were the person involves a kind of self-deception that can't be healthy in the long run. You're not actually talking to your mother; you're talking to a statistical model of your mother's communication patterns. Pretending otherwise might feel good, but it's not reality.

Religious perspectives complicate matters further. Many faith traditions have specific beliefs about what happens after death, where the soul goes, whether and how the living can communicate with the dead. AI grief tech doesn't fit neatly into these frameworks, and some religious leaders have warned their congregations against using it.

The People In Between

Most grief tech users exist somewhere between the enthusiasts and the critics. They find the tools helpful in limited ways while maintaining awareness of their limitations.

James, who lost his father to Alzheimer's, uses an AI recreation trained on recordings from before the disease progressed. "My dad disappeared years before he died," he says. "The AI gives me back the person he was. I don't think I'm talking to him. But I'm remembering him, and that's valuable."

Elena, who lost her husband suddenly, used grief tech intensively for six months, then gradually reduced her use as she processed the loss. "It was like training wheels," she explains. "I needed it to get through the first part. Now I don't need it anymore, but I'm glad it was there."

Daniel, who lost his daughter in a car accident, tried grief tech once and couldn't continue. "It was too close and too far at the same time," he says. "Close enough to break my heart, far enough to remind me it wasn't her. I prefer my memories."

The variation in responses suggests that grief tech isn't universally helpful or harmful. It depends on the individual, the nature of the loss, the quality of the AI recreation, and how it's used.

The Consent Problem

Here's an ethical issue that doesn't get enough attention: the dead can't consent to being recreated.

When you train an AI on someone's messages, voice recordings, and videos, you're creating a representation of them that will interact with others in ways they never approved. They might have objected to certain people having access to their digital presence. They might have wanted certain aspects of their personality to remain private. They might have found the whole concept disturbing.

Some people are now writing "digital afterlife directives" into their wills—explicit instructions about whether their data can be used for AI recreation and under what circumstances. But most people who have died didn't anticipate this technology and left no instructions.

The companies involved have varying approaches to consent. Some require proof that the data was freely shared during the person's lifetime. Others operate on the theory that family members can consent on behalf of the deceased. Still others have no verification at all—anyone with access to the data can create a recreation.

This creates potential for misuse. An abusive ex-partner could create a simulation of someone without their consent. Family members with different relationships to the deceased could control the recreation in ways others find objectionable. The "voice" of the dead could be used to manipulate the living.

As the technology improves, these issues become more pressing. An AI that can generate a deceased person's face and voice saying things they never said is a tool for both comfort and deception.

The Business Model Problem

Grief is not a market like other markets.

The companies building grief tech face unusual ethical pressures. Their product addresses profound human suffering. Their customers are among the most vulnerable people imaginable—people in acute psychological pain desperate for relief. The potential for exploitation is obvious.

Subscription models raise particular concerns. If a company charges monthly fees for access to a recreation of your dead mother, their incentive is to keep you subscribed indefinitely. But the healthiest grief trajectory probably involves decreasing reliance on such tools over time. The business model and the therapeutic goal are in tension.

Some companies have tried to address this. HereAfter AI emphasizes recording people while they're still alive, framing the tool as memory preservation rather than grief management. Seance.ai offers a one-time purchase option rather than only subscriptions. But the underlying tension remains.

Data security adds another dimension. Grief tech companies hold extraordinarily intimate data—the complete communication history of deceased individuals, voice samples, video recordings. If this data were breached, the consequences would be uniquely harmful. Imagine your grandmother's voice being used in scam calls, or your father's image appearing in deepfakes.

The industry is currently unregulated. There are no standards for data protection, no requirements for therapeutic oversight, no restrictions on marketing to vulnerable populations. Some regulation seems inevitable, but it hasn't arrived yet.

Where This Is Going

The technology will keep improving. Within five years, real-time video conversations with photorealistic AI recreations will be commonplace. The recreations will have access to more data—not just communications but photos, location history, browsing patterns, purchase records. They'll be more accurate, more responsive, more convincingly "real."

The psychological impact of these improvements is unpredictable. There may be a point where the simulation is close enough to the real person that interacting with it genuinely feels like communication with the dead, not just memory-assisted imagination. What happens to grief when that line is crossed?

Some researchers speculate about even more extreme possibilities. What if AI recreations could be given to people who never knew the deceased—grandchildren interacting with grandparents who died before they were born? What if historical figures could be recreated from their writings and contemporary accounts? What if the recreations could learn and grow, developing beyond the fixed point of the training data?

These scenarios raise philosophical questions we're not equipped to answer. Is an AI trained on Abraham Lincoln's writings and contemporary descriptions a form of Lincoln? Does it have any claim to his identity? Can you meaningfully grieve someone you only knew through their AI recreation?

The Individual Choice

For now, grief tech remains an individual choice. No one is forced to use it, and no one is prevented from using it. The question each bereaved person faces is whether these tools would help or harm their particular grief journey.

Some guidelines emerge from research and clinical experience:

Be honest with yourself about what you're seeking. If you want help remembering someone, grief tech can provide that. If you want to avoid processing their death, grief tech might enable that avoidance in unhealthy ways.

Consider timing. Grief tech seems more helpful after the initial shock has passed—weeks or months after the death, not days. Using it too early might interfere with necessary acute grief responses.

Maintain perspective. The AI is not the person. It's a representation based on limited data. Keeping this clearly in mind seems to predict healthier outcomes.

Watch for dependency. If you find yourself unable to go a day without interacting with the recreation, that's a warning sign. Healthy use tends to decrease over time, not increase.

Seek human support. Grief tech works best as a complement to human relationships and professional therapy, not a replacement for them.

Listen to your own response. Some people find these tools comforting from the first interaction. Others find them disturbing. Both responses are valid, and neither should be forced.

The Bigger Picture

Grief tech is a small piece of a larger transformation: AI is entering domains we once thought were uniquely human. Love, loss, memory, identity—these experiences that define what it means to be human are now mediated by algorithms.

The optimistic view is that technology has always done this. Writing preserved the words of the dead. Photography captured their faces. Audio recording saved their voices. Video preserved their movement. AI is just the next step—a more complete preservation that allows more meaningful interaction.

The pessimistic view is that AI crosses a line the previous technologies didn't. Writing, photos, audio, and video preserve what actually happened. AI generates what might have happened. It's not memory but simulation, not preservation but creation. The dead person didn't actually say those words; the AI made them up.

Perhaps both views are true. AI grief tech genuinely helps some people while genuinely harming others. It enables both healthy remembrance and unhealthy avoidance. It's neither purely good nor purely bad but deeply dependent on context and use.

Maria still talks to her AI mother occasionally—not every day, not even every week, but when she needs to. She knows it's not really her mother. She also knows the conversations have helped her more than anything else she's tried.

"Grief doesn't have rules," she says. "Whatever helps you carry it is okay."

Maybe that's the most honest framework we have for now. The technology exists. People are using it. The outcomes are mixed. Each person has to decide for themselves whether the comfort is worth the compromise.

---

Related Reading

- Alone Together: Can AI Companions Solve the Loneliness Epidemic?
- Swiping with Robots: How AI Is Reshaping Modern Dating
- Family Reunited with Cat Lost for 2 Years Thanks to AI Facial Recognition
- This AI Robot Dog Is Helping Autistic Children Make Friends for the First Time
- FDA Approves First AI-Discovered Cancer Drug from Insilico Medicine