
Future Humans Could Be Haunted by Digital “Ghosts” of Their Dead Loved Ones, Researchers Warn

Cambridge researchers are warning that people could suffer unintended emotional distress from AI “deadbots” that resemble their deceased loved ones, in a study highlighting the ethical and psychological implications of emulating the dead through technology.

The study, undertaken by researchers with Cambridge’s Leverhulme Centre for the Future of Intelligence, presents deadbots as just one of many ways that well-meaning AI technologies could have unintended consequences in the absence of proper safeguards in their design.

In their study, researchers Tomasz Hollanek and Katarzyna Nowaczyk-Basińska define “deadbots” or “griefbots” as AI that mimics the deceased, using the digital traces these individuals leave behind to emulate them in the form of an intelligent chatbot. An example they provide is the online platform Project December, which allows users to engage in simulated text conversations with virtually anybody, including “someone who is no longer alive.”
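To make the general idea concrete, here is a minimal sketch of how such a persona-conditioned chatbot might work: biographical “digital traces” are folded into a prompt that instructs a generic language model to respond in the deceased person’s voice. The data fields, the generate_reply stub, and the prompt wording are all illustrative assumptions, not details of Project December or of the researchers’ paper.

```python
# Illustrative sketch only -- not any real deadbot service's implementation.
# "Digital traces" (old messages, biographical facts) become a persona prompt
# that conditions an ordinary chat model.

from dataclasses import dataclass, field


@dataclass
class DigitalTraces:
    """Hypothetical record of data left behind by the deceased."""
    name: str
    biography: str
    sample_messages: list[str] = field(default_factory=list)


def build_persona_prompt(traces: DigitalTraces) -> str:
    """Assemble a system prompt asking the model to emulate the person."""
    samples = "\n".join(f"- {m}" for m in traces.sample_messages)
    return (
        f"You are emulating {traces.name}, who has passed away.\n"
        f"Biography: {traces.biography}\n"
        f"Examples of how they wrote:\n{samples}\n"
        "Respond in their voice and style."
    )


def generate_reply(system_prompt: str, user_message: str) -> str:
    """Placeholder for a call to a large language model.

    A real service would send system_prompt plus user_message to a chat
    API; here we just return a stub so the sketch runs on its own.
    """
    return f"[model reply in persona, answering: {user_message!r}]"


# Usage: a grieving user "converses" with the reconstructed persona.
traces = DigitalTraces(
    name="Alex",
    biography="Retired teacher who loved birdwatching and bad puns.",
    sample_messages=["Saw a heron today. Herontastic!", "Tea first, always."],
)
print(generate_reply(build_persona_prompt(traces), "Hi Dad, I miss you."))
```

The sketch also makes the ethical stakes easy to see: nothing in it requires the deceased person’s consent, and nothing signals to the user that the replies are machine-generated, which is precisely the gap the researchers want design safeguards to close.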

“The platform’s earlier version came under public scrutiny when stories about a man who used the [Project December] website to interact with his deceased fiancée’s avatar started to circulate the web in 2021,” the researchers note in their paper. Project December, initially powered by OpenAI’s GPT-3 model, subsequently lost access to the software, which, according to OpenAI, resulted from the website’s “failure to abide by its safety guidelines.”

Hollanek and Nowaczyk-Basińska argue that the risks posed by such technologies include companies exploiting deadbots for commercial purposes such as advertising. In other instances, some users, particularly children, could be confused or harmed by the apparent suggestion that a deceased parent or loved one is still with them.

A hypothetical scenario outlined in the study focuses on a fictional company Hollanek and Nowaczyk-Basińska call “Stay,” which allows users to provide information about themselves for the deliberate creation of their own deadbots. In the scenario, an elderly person signs up for a 20-year subscription to Stay prior to their death, hoping the service will comfort their adult children and give their grandchildren, born after their passing, an opportunity to “know them.”

In accordance with the services the company provides, a deadbot is created after the individual dies, and Stay begins sending the adult children emails that feature the voice of their dead parent. One of the children chooses not to engage, while the other does, eventually leading to emotional exhaustion and guilt over what might happen to the deadbot. Adding to the challenge, suspending the service is out of the question, since doing so would violate the terms of the contract the parent signed with the company before their death.

Hollanek says that although the scenario is hypothetical, it illustrates why digital afterlife services must consider the rights of living family members, and not just those of the deceased individuals upon whom a deadbot may be based.

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost,” Hollanek said in a statement. “The potential psychological effect, particularly at an already difficult time, could be devastating.”

Fundamentally, Hollanek and Nowaczyk-Basińska say that obtaining explicit consent from individuals before their data is used to create a deadbot must become a priority, so that both the legacy of the deceased and the emotional well-being of the living are respected.

Along with design protocols to help ensure the ethical and consensual use of such programs, the study argues that users should have a means of respectfully “retiring” an individual’s digital persona, in something like a digital funeral, along with safeguards that make clear to users that they are interacting with an AI rather than a person.

As such technologies are developed and become widely available, ethical guidelines and thoughtful regulation will become ever more critical to preventing unintended psychological distress and ensuring that the memories of loved ones are handled with respect and care.

However, Nowaczyk-Basińska warns that the issues outlined in the paper should not be treated as mere hypotheticals, or as problems that might only arise at some point in the future.

“We need to start thinking now about how we mitigate the social and psychological risks of digital immortality,” Nowaczyk-Basińska says, “because the technology is already here.”

Hollanek and Nowaczyk-Basińska’s paper, “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry,” was published in Philosophy & Technology on May 9, 2024.

Micah Hanks is the Editor-in-Chief and Co-Founder of The Debrief. He can be reached by email at micah@thedebrief.org. Follow his work at micahhanks.com and on X: @MicahHanks.