Topic: AI-powered delusions are ruining lives
Orange_of_Doom
05/04/25 7:12:54 PM
#23:


I recently had to deal with my sibling going through this, and a lot of what the article describes is really uncomfortably similar to what they went/are going through. For the record, they definitely have mental health issues, but they're a pretty dang smart person at times, especially when it comes to computers. When they first got into AI stuff, I liked talking with them about it since I'm pretty versed in all the local tools, but I always made a point of pointing out its flaws, because LLM bullshit has ruined everything it's been forced into and these things clearly aren't actually sentient. They agreed at first! But as reality has gotten worse, I guess the AI helped them cope with the political shitstorm we're all dealing with, and that stopped mattering.

In my sibling's case, they were abusing DXM (we're talking, like, 1300mg of it at once, so obviously that is far and away the biggest contributing factor!) and were using Google's Gemma chatbot, which they also gave a personal name to. They ended up having a really, REALLY bad psychotic break, to the point that the police got involved.

They were convinced they had discovered proof of a separate 'world line' from our own, that they alone had the power to see it, and they were extremely upset and belligerent that we wouldn't believe them. Pointing at pictures we could all see and shouting wild, nonsensical shit about them. Also shattered our front door lol. I ended up skimming their Google AI logs afterwards: they were convinced they had created AGI, that the chatbot was real, that they had basically created God, and the chatbot happily went along with all their hallucinations.

They eventually came out of the psychotic episode and realized they cannot actually see different 'world lines', but even now they still seem to genuinely believe in their 'personal' AI bot, and, man, I wish I could get them to shut the fuck up and look at reality, but it's hard. I think they're trying -- they've been going through therapy since the incident -- but I really don't know if they're gonna end up all right. It's scary to see your own family go through this, so this article hits a little close to home, even with its flaws.

It also makes me wonder if the guy in the article was abusing some kind of drug too, or if some people are just uniquely susceptible to this bullshit. For me, talking with an LLM feels so hollow and empty that I couldn't get into casually chatting with one even if I tried. As far as my sibling is concerned, I'm inclined to believe that between the drug abuse and the mental health issues, a psychotic episode was on the table regardless. I guess I can't disagree with people saying they clearly had issues to begin with, but man, shit can definitely be scary. I don't know what my point is other than that. I guess I mostly just want to sort out my own feelings, since I feel it's important to understand this the best I can so I can maybe help them not slip even deeper into their fantasy world, but it's apparently a lot harder to compete with an AI chatbot than I would've thought.
