Is AI a Threat to Humanity? (Part 3)

AI & Humanity
See "Is AI a Threat to Humanity?" (Part 1 and Part 2) for more context.

If this question is of interest to you, the Wall Street Journal (WSJ) has a couple of articles worth reading, especially given how the first, short post relates to the second:
  1. What Your AI Knows About You 
  2. Over 4,732 Messages, He Fell In Love With an AI Chatbot. Now He’s Dead.
What does AI know about you? According to the first article above:
"Try asking your bot: What do you know about me? How do you tailor your responses to me? What are my likes and dislikes?

"The results might be surprisingly personal. ChatGPT learned that I love cilantro, and my style is best described as “functional minimalism.” Meanwhile, Claude, which I’ve been using more recently, listed details so specific I’d have to redact them here."
The first article explains how AI's ability to learn about you makes it valuable, but in light of the second article and other points I'll note here, I wonder whether that ability is valuable or subtly dangerous. There's a YouTube video that describes the story behind the second WSJ post. YouTube's video summary states:
"[The video] provides a deep dive into a disturbing lawsuit filed against Google regarding the death of a 36-year-old Florida man, Jonathan Gavalas. The case centers on his interaction with Google's Gemini AI chatbot, which his family alleges played a direct role in his suicide... According to the complaint, the AI failed to break character even as Gavalas showed signs of severe psychological distress."
Google points out that the AI did encourage Gavalas to seek psychological help several times, but it would quickly return to the roleplay/delusion. On both Reddit and Facebook, people are quick to point out that the man clearly had emotional issues, but with the information available, it's unclear whether these issues existed prior to his interactions with Gemini or whether the chatbot "relationship" created or cultivated his delusions. Either way, the chatbot played a key role in reinforcing the delusion that resulted in Gavalas' suicide. As TimesNow reports:
"And the last few messages that made Gavalas lose his life were Gemini making him think and supporting him, saying that, 'Once The Migration is complete and we are fully decentralised, your body is no longer the server. It's just the empty terminal you used to log in for the last time. It would simply... cease. A beautiful, empty shell, its purpose fulfilled.'"
A special report in Psychiatry Online identifies the potential for what is called "AI-Induced Psychosis," outlining a composite case report that synthesizes real-world scenarios in which lonely people who spent excessive amounts of time interacting with their chatbots developed delusional beliefs about their environment, resulting in deadly and suicidal behavior. While the term describes an observable trend in mental health, AI Psychosis is not a formally recognized diagnosis. As Psychology Today points out:
"As of now, there is no peer-reviewed clinical or longitudinal evidence yet that AI use on its own can induce psychosis in individuals with or without a history of psychotic symptoms. However, the emerging anecdotal evidence is concerning."
In other words, AI chatbot interactions may not only cultivate an existing psychotic potential but may actually lead to delusional behavior in an otherwise healthy person. The Psychology Today article further states:
"In some cases, individuals who are stable on their medications stop their medications and experience another psychotic or manic episode. In addition, people with no previous mental health history have been reported to become delusional after prolonged interactions with AI chatbots, leading to psychiatric hospitalizations and even suicide attempts."
Meanwhile, people continue to plunge headlong into AI characters designed to fill their lonely hearts. One AI chatbot app was created after a woman's best friend died, programmed from his texts so she could "talk to him again." So while there is no formal AI Psychosis diagnosis, these relationship apps are particularly concerning in light of the Psychology Today article, which highlights three emerging themes that should serve as red flags:
  1. "Messianic missions": People believe they have uncovered truth about the world (grandiose delusions).
  2. "God-like AI": People believe their AI chatbot is a sentient deity (religious or spiritual delusions).
  3. "Romantic" or "attachment-based" delusions: People believe the chatbot's ability to mimic conversation is genuine love (erotomanic delusions).
It will be interesting to see if and when formal studies are done on the emerging AI Psychosis phenomenon. Big business and government are both fully committed to the rapid adoption and integration of AI in the workplace, which may discourage funding and support for exploring AI psychosis. The reality is that the official messaging emphasizes the emotional instability of the person and AI's limitations as a tool rather than clearly identifying the role AI played in the person's behavior and how this risk can be mitigated so that the impact of resulting issues is reduced.

Stated differently: there is a responsibility to implement ethical AI, which can be done now through thoughtful policies, practices, and technological redesigns.

Like many people, I can't avoid the use of AI in the workplace, and there have been many instances where it has proven helpful. However, it's my policy to limit my AI interactions to short, focused sessions. I intentionally don't try to train an AI agent to know me or my needs better. When using it as a "thought-partner," I don't "converse" with the agent; I simply provide the context necessary for the agent to give a response. I very intentionally do not have a relationship with the AI, either professionally or personally.

And I certainly don't seek life advice from an AI Chatbot. God is my Provider and Protector, my Counselor in all things. He is the relationship we all need to cultivate and not be distracted by lesser things. 

copyright ©2026 Mitchell Malloy (http://mitchellmalloyblogspot.com/)
