An 80-year-old woman speaks with her son for a few minutes each day through video calls. She has not seen him in some time, so she keeps asking when he will visit. He always replies that he has relocated to another province to save money and will return home to care for her. What she does not know is that her son died in a car accident a year ago.

Rather than tell her the truth, her family hired an artificial intelligence (AI) company to create a digital twin of her son so she would believe he was still alive. According to the family, she has a weak heart, and they worried that the news might harm her health. The incident, reported by the South China Morning Post last week, has since sparked an online debate over the ethical use of AI, especially in cases where it can shape human emotions.

As generative AI matures, the world is also seeing the emergence of “grief tech,” also known as the digital afterlife industry. These technologies let users interact with simulated versions of their deceased loved ones in intimate ways. Conversational AI products like Project December and You, Only Virtual (YOV) mimic a person’s conversational style by training models on the deceased’s texts, emails, and social media posts. Startups like Eternal and HereAfter AI offer interactive, voice-enabled avatars of people’s loved ones.