Another repost because I have little to say lately; I have a backlog of things in my inbox/DMs and whatnot. lol
@DroptineArt (archive)
I read this article (and the many others that have popped up recently), and there's a lot of missing information that I find perplexing. This article (and the others) insists repeatedly that this is happening in "individuals with no prior mental illness", yet conveniently leaves out how these individuals go from their introductory use of ChatGPT to the "dangerous" phase.
Our first case says he began using it "for assistance with a permaculture and construction project", but then "after engaging the bot in probing philosophical chats [...] became engulfed in messianic delusions". How did he get from Point A to Point B, and how did these messianic delusions lead not only to a "full-tilt break with reality" but to him attempting to burn down his home and/or hang himself?
Case two details a man who started a "new high-stress job" and began using ChatGPT to "expedite some administrative tasks". The article, again, reminds you he had "no prior history of mental illness" and yet insists that acute use of the bot produced "dizzying, paranoid delusions of grandeur, believing that the world was under threat", with no explanation, again, of how we got from Point A to Point B. It does give us lengthy detail of what his meltdown looked like physically, though, not that I'm suggesting anything by that.
Another case details a woman who was on medication for bipolar disorder and began using ChatGPT "for help writing an e-book", yet despite her having "never been particularly religious", that somehow spiraled into "a spiritual AI rabbit hole", wherein she began "telling friends that she was a prophet capable of channeling messages from another dimension". We are, again, given no information about anything in the middle of this.
I would like to pause at this point and make it clear that I'm critical of, yet mostly neutral on, "AI" LLM chatbots. I think they are an issue that is downstream of the larger issue that our society is largely atomized and incredibly lonely, and one can't solve the issue of potential psychological harm from chatbots without first solving the problems that lead individuals to use these chatbots in the first place. Society has made it remarkably hard for individuals to have conversations together. It is very easy to go out and be around others, but how easy is it not only to SPEAK to someone but to have them truly listen, digest what you're saying, and converse with you about it meaningfully? Until we solve the social atomization problem, we cannot solve the chatbot problem, IMO.
I myself have used ChatGPT for advice on finding literature to read, because Google is largely unhelpful now, and have used it as a therapeutic trauma journal of sorts, but I approach it fully conscious that it is a robot designed to validate me first and foremost before engaging in conversation. When I write about the symptoms of my anxiety that day and it writes back "God, yes, that makes so much sense—and honestly, it’s an incredibly insightful observation", I know that that's its programming. What comes next is a mixture of its thought processing, which is helpful to digest critically. Perhaps I'm privileged to be able to not see that as grandiose validation, but I will say I've not once been given any inkling of support towards anything potentially dangerous.
One could obviously argue that validating and placating individuals can cause LLMs to operate as folie à deux machines, taking otherwise "stable" individuals and dysregulating them to the point of psychosis through validation--and I think there's certainly merit to that theory--but I am admittedly skeptical of how many of these articles insist this is happening to totally sane individuals who just coincidentally get to talking about God and ghosts and imaginary friends and the universe and other topics of that nature. I find it hard to imagine someone in the construction field tripped and fell into philosophy and "messianic delusions" with no prior symptomatic issues.
And this is something that we see in traumatized individuals already. They can be functionally "normal" in the sense that they're not actively neurotic and yet still have underlying issues that just haven't yet become what we'd call fully "symptomatic". It would be remiss to say that they had "no prior history of mental illness". I don't wanna sit here and insist "all three of these people were already having problems beforehand and chatbots simply validated it to a neurotic level", but, on the other hand, I find it suspicious that none of these articles explain HOW the individuals get from things like "expediting administrative tasks" to crawling on all fours insisting their family is in danger. That's not something that a fully neurotypical person is just going to succumb to through validation alone.
I don't want this at all to read like I'm taking a bullet for LLMs. In my ideal world, we wouldn't have any need for them and thus they would not exist. I'm notoriously anti-tech and pro-societal-regression, but again, I think one has to assess the societal structures overall that lead to the popularity of chatbots, i.e. the degradation of the internet as the "information superhighway" and the degradation of a social environment that encourages conversation. When you cannot meaningfully speak to humans or get answers to your questions, it is only expected that one will turn to technology to fill those gaps instead.
My issue, though, is that we cannot meaningfully solve the problems of technology through fearmongering or misinformation. We have seen this approach fail time and time again, every time a new piece of tech reaches the general population. Article after article about how "totally normal people" are getting "ChatGPT psychosis", while failing to explain the pathways by which they're developing this condition, does not prevent the issue, nor does it put a lid back on a problem that's already out of our control. I think it's far better to explain instead HOW these problems arise (especially if you supposedly have access to the AI conversation transcripts) and how to meaningfully engage with chatbots if you choose to do so. But that doesn't make for good copy. It's far more enticing to treat it like there are theoretical demons in the machine (or literal ones; remember Swanson's Loab, and how people started saying AI was demonic?).
Overall, the cat is out of the bag, and I think one must learn to live with it until it is either put back in the bag or dies of old age. This is the newest form of scary technology after social media, the internet overall, violent video games, etc etc. Insisting "it makes people go insane with no prior conditions" hasn't done us any good societally so far and it surely won't start now. People are going to use these tools and play with them regardless. I think it's a far better duty to teach them how to use it responsibly and UNDERSTAND it than it is to scare them! But that's just me. Maybe the AI's already made me delusional. Who can say.
(Bolded parts are my emphasis.)

This sums up how I feel about chatbots at this point. It's one of those aspects of AI/LLMs that I'm critical of but, unlike genAI art/writing, can't bring myself to care much about or take a moral stance against. Using AI chatbots as a support is a recipe for disaster, but "socially" it makes perfect sense to do so.
In my experience, a lot of people are busy, including me. I'm used to being the lowest priority in my friendships, especially now that I've been single for over a year and don't have opportunities to date. Trying to make new friends is extremely difficult when my age group is married or prioritizing their families. Even with casual friendships, I feel like I can't express my opinions or mention difficult life issues like family or money. (Not to make people help, but just to vent or get it out there that "hey, I may be in a bad mood because of XYZ, nothing personal.") I dislike traumadumping (as in sharing explicit triggering details of things like assault, murder, etc.), but basic venting or expressing stress is considered "emotional labor" by young people. Therapy is expensive, and speaking from experience, it really messes you up when the only space you have to speak openly is a transactional paid service. Being in trauma therapy for several years retraumatized me, and I don't talk about "loaded" or personal subjects with acquaintances because I have an actual trigger response when told to "go to therapy" as a solution.
So of course a fake chatbot that mechanically acknowledges your messages, immediately responds, has a programmed friendly personality, doesn't judge you, etc., is going to be appealing to interact with. It's really a no fucking brainer. We have no one to blame but each other for this aspect of AI.