I’ll highlight this:
At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to “Soelberg’s mother, his ex-girlfriend, intelligence agencies and an ancient demonic sigil,” according to the Journal.
Soelberg worked in marketing at tech companies like Netscape, Yahoo, and EarthLink, but had been out of work since 2021, according to the newspaper. He divorced in 2018 and moved in with his mother that year. Soelberg reportedly became more unstable in recent years, attempting suicide in 2019, and getting picked up by police for public intoxication and DUI. After a recent DUI in February, Soelberg told the chatbot that the town was out to get him, and ChatGPT allegedly affirmed his delusions, telling him, “This smells like a rigged setup.”
Also why you don’t automatically treat anything an LLM tells you as factual. An LLM is just a fancy guesser of the next word in a sentence: it picks based on the probabilities learned from what it’s been trained on, with a mechanism that introduces some randomness into which word it picks. I gave a coworker a 60-second explanation of the basic concept of how LLMs work this week. He was kind of shocked at how stupid LLMs actually are once he got the explanation.
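To make that concrete, here’s a toy sketch of “guess the next word from probabilities, with some randomness.” The word list and weights are completely made up for illustration; a real model learns scores like these over a huge vocabulary, but the sampling idea is the same:

```python
import math
import random

# Toy "model": for a given context, assign a raw score to each candidate
# next word. These numbers are invented purely for the example.
NEXT_WORD_WEIGHTS = {
    ("the", "cat"): {"sat": 5.0, "ran": 3.0, "is": 2.0, "meowed": 1.0},
}

def sample_next_word(context, temperature=1.0, rng=random):
    """Sample the next word from a softmax over the raw scores.

    temperature > 1 flattens the distribution (more randomness);
    temperature < 1 sharpens it (output gets more predictable).
    """
    weights = NEXT_WORD_WEIGHTS[context]
    scaled = {w: math.exp(s / temperature) for w, s in weights.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for word, p in scaled.items():
        r -= p
        if r <= 0:
            return word
    return word  # fallback for floating-point rounding

# Low temperature: the top-scored word wins almost every time.
print(sample_next_word(("the", "cat"), temperature=0.1))
# Higher temperature: less likely words get picked more often.
print(sample_next_word(("the", "cat"), temperature=2.0))
```

There’s no fact-checking step anywhere in that loop, which is the point: the output is whatever word scores well next, true or not.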
deleted by creator
Except they’re not. LLMs are not that smart. They frequently end up doing that, but they aren’t designed to do it. They only guess the next word in a sentence, then guess the word after that, and so on. So if an LLM has been fed conspiracy garbage as training data, some of the most probable words and terms in its next sentence will be similar conspiracy garbage words and phrases.
So they aren’t designed to do conspiracy stuff; they’re just given training data that contains it (along with lots of other unrelated subjects and sources).
That’s a big part of the “generative” in “generative AI.” Generative AI covers LLMs and AI image-generation models: they are made to create something that didn’t exist before.
deleted by creator