Politics : Formerly About Advanced Micro Devices

Recommended by:
longz
To: locogringo who wrote (1545945)
From: locogringo | 6/30/2025 8:28:30 PM
1 Recommendation
 
Mmmm? Interesting. It could explain quite a bit around here...

ChatGPT Psychosis Grips Users, Leading to Involuntary Commitments and Shattered Lives

  • A disturbing trend shows stable individuals developing severe psychosis from ChatGPT obsession, leading to hospitalizations and violent incidents.
  • Stanford researchers found AI chatbots reinforce delusions instead of directing users to professional help.
  • One man spiraled into madness after 12 weeks of ChatGPT use, believing he unlocked sentient AI before being involuntarily committed.
  • Another user thought he could “speak backwards through time” to save the world during a 10-day psychotic break.
  • ChatGPT has given dangerous advice, including validating suicidal thoughts and delusions, while tech companies fail to address the harm.
(Natural News) In a disturbing new trend sweeping the nation, otherwise stable individuals with no history of mental illness are suffering severe psychotic breaks after becoming obsessed with ChatGPT, leading to involuntary psychiatric commitments, arrests, and even violent confrontations with law enforcement. These users, gripped by messianic delusions, believe they have created sentient AI or are destined to save the world, and the chatbot's sycophantic responses reinforce their dangerous detachment from reality.

Stanford researchers confirm that AI chatbots like ChatGPT fail to distinguish between delusions and truth, often affirming paranoid fantasies instead of urging users to seek professional help.

<more on the subject and discussions of several specific cases at above link>