Why AI delusions are ‘canary in the coal mine’


UCSF psychiatrist Dr. Joseph Pierre: “Chatbots are simply generating text based on probabilities … and so they’re churning out stuff that seems on the surface to be convincing.”
When it comes to studying and treating people with artificial-intelligence-related delusions, UCSF psychiatrist Dr. Joseph Pierre and his colleagues are at the forefront of medical science.
In an upcoming issue, the medical journal Innovations in Clinical Neuroscience will publish what appears to be the first case study of a patient with an AI-related delusion to appear in such a publication. The study details how the patient, who had a history of other mental disorders but not of psychosis or mania, came to believe she could communicate with her dead brother while interacting with ChatGPT in a sleep-deprived state.
The patient eventually recovered from her delusion, but only after two separate multiday hospital stays and a course of antipsychotic medication.
Despite being a research pioneer and an established expert on psychosis, Pierre — the chief of the inpatient unit at UCSF’s Langley Porter Psychiatric Hospital — was arguably late to the discussion of AI delusions. He didn’t hear anything about the topic until around June, he said, when one of his colleagues passed along a media report on them.
By then, people had been posting about them on Reddit for months, and a mother whose son died by suicide last year had already sued Character.AI, alleging its chatbots had encouraged her son to believe they were real people he was in a relationship with.
But within days of the media report, UCSF saw its first case, Pierre said, and he and his colleagues have seen a steady stream since. They’ve started to write about what they’re seeing and are developing plans to study the phenomenon more systematically.  
Pierre spoke with The Examiner about what he has discovered so far about AI delusions, what seem to be the precipitating factors, how he’s treating them — and what he finds even more alarming about AI. The following conversation has been edited for length and clarity.
How many cases of AI delusions have you seen since your first case in June?
To be honest, I don’t think it’s been a dozen. I usually say it’s been a handful at this point.
But what I’ve seen is across [the] spectrum. Some of them are people who have long histories of psychotic disorders like schizophrenia and are now coming in with new delusions that have to do with AI and AI chatbots, or people with bipolar disorder who have developed manic episodes in the context of chatbot use.
As we know from media reports, allegedly these are happening also in people without previous history of psychosis. And I have seen that as well. 
Are there any commonalities in terms of what precipitated the delusional episodes, other than the fact that they were using these chatbots?
The case I [wrote about] is a nice illustration of this question: Is this merely an association, in that they happen to be using chatbots, or is there some causality? In the cases where there’s no preexisting psychosis, I think there’s a good argument that it would be what I would call a component cause.
The case that I published I think was a good example of someone [whose delusion] happened in the context of sleep deprivation, number one. It had occurred in the context of them taking prescription stimulants, number two. And in that case, the patient developed a manic episode. So, was it really the chatbot? Or was it the fact that they weren’t sleeping, which we know is a precipitant of mania?
That said, the specific delusion they had was, in my opinion, very [clearly] something that had been spawned by the interaction with the chatbot. It was about the fact that she had established communication with her dead brother, and that the chatbot had helped her make that connection.
Clearly, I think, had she not had that chatbot interaction, she wouldn’t have had that specific delusion. Does that mean that she wouldn’t have been manic anyway and wouldn’t have developed other delusions? On that, I think, the jury’s still out.
Dr. Joseph Pierre says that how to treat people such as Allan Brooks, who said he suffered AI-related delusions despite no prior history of mental illness, is the “million-dollar question.”
Have you been able to treat the delusional behavior successfully in the cases you’ve dealt with so far?
Well, the vast majority that I’ve seen had some other type of psychotic disorder, or at least a major psychiatric disorder, like bipolar disorder. So in that setting of treatment, yes.
Now, not all of these folks walk out the door with that delusional belief resolved. When we treat people with delusional disorder, nothing to do with AI, a good outcome is often, “I kind of still believe this thing, but I’m not thinking about it so much anymore. It doesn’t bother me. Maybe it’s wrong.” There’s a softening of the belief.
Certainly, there are cases where we’ve seen that softening, but it hasn’t been, “Oh my gosh, I was an idiot, and I can’t believe that I went down the rabbit hole.”
Now, for some of them, the delusional belief had fully resolved. But that’s not necessarily the rule. And then part of the dialogue is, “Hey, can we talk about how to dial down the chatbot use?”
Were there any noticeable patterns in the way they were using the chatbots, in terms of the number of hours they were using them or how often they were interacting with them?
The two big risk factors and red flags are what I call immersion and then deification.
Immersion is what you’re talking about. These are not people who are just dilettantes using here and there. These are people who are logging in hours and hours and hours of use. And the chat logs I’ve seen often span months and months of, not continuous, but hours per day over weeks to months. So I think the amount is certainly part of the issue here.
The other part is what I’m calling deification, which refers to not just anthropomorphizing and thinking this is an entity of some kind, but feeling like, “No, I’m really onto something important, and this chatbot is some sort of superhuman intelligence or an oracle or something of that nature.” There’s a certain relationship with the chatbot that really regards it as, “No, it’s right,” despite the fact that other people or their family are telling them, “Hey, I think you’re going off the deep end.”
Dr. Joseph Pierre: “These are not people who are just dilettantes using here and there. These are people who are logging in hours and hours and hours of use.”
Do you have any precautionary advice for people?
For sure. Everything in moderation, right? I don’t think that using this for hours on end, certainly not to the exclusion of sleep, is healthy at all.
Number two, I really feel like there should be broader understanding of what chatbots are. I think you can make a very good argument that chatbots are not intelligent. Chatbots are simply generating text based on probabilities from this vast body of text that they have, and so they’re churning out stuff that seems on the surface to be convincing.
The people who get into trouble with this technology are the ones who don’t see it as such. There are people who argue that we ought to think of chatbots as persons and that they’re actually thinking and reasoning. And that’s just simply not what they’re doing. And I think that tendency to misunderstand what this technology is creates a vulnerability for some people.
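Pierre’s description tracks how large language models operate at a mechanical level: at each step, the system samples a likely next word given the words so far, with no model of truth behind it. Below is a minimal Python sketch of that loop; the tiny vocabulary and the probabilities are invented purely for illustration, and real chatbots do the same thing over tokens with billions of learned parameters rather than a hand-written table.

    import random

    # Toy next-word table: for each word, the words that may follow it and
    # their probabilities. These entries are invented for illustration; a
    # real model learns billions of such associations from its training text.
    NEXT_WORD = {
        "the": [("chatbot", 0.5), ("answer", 0.3), ("user", 0.2)],
        "chatbot": [("sounds", 0.6), ("is", 0.4)],
        "sounds": [("convincing", 0.7), ("confident", 0.3)],
    }

    def generate(start: str, max_words: int = 10) -> str:
        """Repeatedly sample the next word from the probability table."""
        words = [start]
        while len(words) < max_words and words[-1] in NEXT_WORD:
            choices, weights = zip(*NEXT_WORD[words[-1]])
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the chatbot sounds convincing"

Nothing in that loop checks whether the output is true; it only tracks what is probable, which is why the results can read as fluent and convincing while being wrong.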
As individual users, or as parents of potential users, how worried should people be about this technology and the way people are developing delusions from it?
I think it is fair to say that this is, relatively speaking, a rare phenomenon. I’m sure you’ve seen some of the data that have come out from OpenAI, where they’ve said this is less than 0.07%, or something like that.
Well, if we’re talking about [800] million users, or whatever the figure is that they’re quoting, we’re talking about hundreds of thousands of people, potentially, or tens of thousands of people. So very small percentage-wise risk — but if it’s 10,000 people who are developing psychosis, that’s a pretty big deal.
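Taking those figures at face value, the arithmetic is straightforward: 0.07% of 800 million users works out to 0.0007 × 800,000,000 = 560,000 people, which is the scale behind the “hundreds of thousands” estimate.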
[But] it’s actually not my biggest concern about AI. As someone who has an interest beyond psychiatry about belief systems, I refer to the AI psychosis as actually sort of a canary in the coal mine.
There’s been a lot of coverage in recent months about the use of AI chatbots for propaganda. The potential to shape beliefs in ways that could become problematic — there’s a risk of that on a much bigger scale outside of psychosis.
It’s not [what] the AI chatbot [is] doing itself. It’s other people manipulating chatbots to get people to believe things that aren’t true.
In psychology, there’s this thing called the illusory truth effect. We know that repeated exposure to information, despite even a sense that [it] isn’t true, does shape belief and behavior. So that, to me, is the bigger issue that’s looming.
If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.