I’ve said it time and time again: AIs aren’t trained to produce correct answers, but seemingly correct answers. That’s an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can’t easily verify the answer. But it seems plausible so you assume it to be correct.
Thankfully, AI is bad at maths for exactly this reason. You don’t have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop’s claims that it is vastly superior to previous models.
I’ve been through the cycle of the AI companies repeatedly saying “now it’s perfect,” only to admit it was complete trash when they release the next iteration and claim “yeah, it was broken, we admit it, but now it’s perfect” so many times now…
The problem being there’s a massive marketing effort to gaslight everyone, so if I point this out in any vaguely significant context, I’m just “not keeping up,” and most people have only dealt with the shitty ChatGPT 5.1, not the more perfect 5.2. Of course, in my company they’re all about the Anthropic models, so it’s Opus 4.5 versus 4.6 instead. Even demonstrating the limitations by trying to work with 4.6 gives Anthropic money, and at best I earn an “oh, those will probably be fixed in 4.7 or 5 or whatever”.
Outsiders are used to traditional software that has bugs, but those are straightforward to address, so software that’s close but imperfect can hit the mark in later updates. To them, LLMs not working that way doesn’t make sense: they use the same version-numbering scheme, after all, so expectations are set to match.
Both of those can be true.
I mean, yeah, but they specifically mentioned its amazing performance on tasks requiring reasoning.
My own advice for people starting to use AI is to use it for things you know very well. Using it for things you do not know well will always be problematic.
The problem is that we’ve had a culture of people who don’t know things very well controlling the purse strings relevant to those things.
So we have executives who don’t know their work or their customers at all and just bullshit their way through, while their people frantically repair the damage the executive does in order to preserve their jobs. Then those executives see bullshit-generating platforms, recognize a kindred spirit, and set a goal of replacing those dumb employees with a more “executive”-like entity that can also generate reports and code directly. No talking back, no explaining that the request needs clarification or that the data doesn’t support their decision, just a “yes, and…” result agreeing with whatever dumbass request they thought would be correct and simple.
Finally, no one talking back to them, making their lives difficult, and casting doubt on their competence. And the biggest billionaires are telling them this is the right way to go, as long as they keep sending money their way.
I mean, that has been the case for a long time. AI may amplify the effect, but human stupidity is nothing new.
Once again, yes-men are also a historic phenomenon, and yes, AI might speed this up, but it is nothing new per se.
AI is a tool. Not a perfect one, heck, most of the time barely functional, but it is a tool, and in order to use it you need to understand what it can do and what it can’t do.
I think if you’re aware of the environmental impact, learn how to use it responsibly, and avoid many of its pitfalls, then together with a critical mindset it can be usable for some cases.
It can be useful, sure, and yes, the myopic, self-centered lying executive is nothing new, but there are big groups now thinking they can remove whatever semblance of a check on executive decisions might be there.
And on top of that, the people who don’t know things very well generated lots of the material the LLMs were trained on in the first place.
Can’t really blame the models for realizing much of human knowledge is bullshit and acting accordingly.
The problem is, every time you use it, you become more passive. More passive means less alert to problems.
Look at all the accidents involving “safety attendants” in self-driving cars. Every minute they let the AI take the wheel, they become more complacent. Maaaybe I’ll sneak a peek at my phone. Well, haven’t gotten into an accident in a month, I’ll watch a video. In the corner of my vision. Hah, that was good, gotta leave a commen — BANG!
I prefer to say “algorithmically common” instead of “seemingly correct” but otherwise agree with you.
I use “mathematical approximations of correct answers”.
But that’s wrong. It’s not trained on correct answers. It’s trained on whatever happens to be out there in the world.
It’s mathematical approximations of words that are likely to be found near that question.
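To make that concrete, here’s a toy sketch of what “words likely to be found near that question” means in practice. Everything in it is made up for illustration (real models score tens of thousands of tokens with learned weights), but the point stands: the loop ranks and samples by probability, and nothing in it checks truth.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    mx = max(logits.values())
    exps = {tok: math.exp(s - mx) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores for the token after "The capital of Australia is".
# "Sydney" scores high because it co-occurs with "Australia" constantly
# in training text, not because it's the right answer (it isn't).
logits = {"Canberra": 2.1, "Sydney": 2.4, "Melbourne": 1.3}

probs = softmax(logits)
pick = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)  # "Sydney" comes out most probable in this toy example
print(pick)   # sampled by probability, with no notion of correctness
```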
Even worse is that over time, the seemingly correct answers will drift further away from actually correct answers. In the best case, that’s because people come to expect the wrong answers, since that’s all they’ve been exposed to. In worse cases, the answers skew toward whatever a specific AI maker wants people to think.
They are designed to convince people. That’s all they do. True or false, real or fake, doesn’t matter, as long as it’s convincing. They’re like the ultimate, idealized sociopath and con artist. We are being conned by software designed to con people.
I use it to summarize stuff sometimes, and I honestly spend almost as much time checking that it’s accurate as I would if I had just read and summarized the material myself.
It is useful for ‘What does this contain?’ so I can see if I need to read something. Or rewording something I have made a pig’s ear out of.
I wouldn’t trust it for anything important.
The most important thing to do if you do use AI is to not ask leading questions. Keep them simple and direct: ask “what does this option do?” rather than “this option does X, right?”
Skimming and scanning texts is a skill that achieves the same goal more quickly than using an unreliable bullshit generator.
Depending on the material, the LLM can be faster. I have used an LLM to extract viable search terms to then go and read the material myself.
I never trust the summary, but it frequently gives me clues as to which keywords could take me to the right area of a source material. Internet articles that stretch brief content into a tedious mess, documentation that is 99% stuff I already know when what I need is buried in the other 1%.
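The nice thing about that workflow is that the only untrusted piece is the keyword list; everything after it is ordinary text search you can verify yourself. A rough sketch of what I mean (the file path and keywords here are made up; in practice the keywords are whatever the LLM suggested):

```python
from pathlib import Path

# Keywords the LLM suggested for my question: untrusted hints, not answers.
keywords = ["rotation", "retention", "max-size"]

# The actual source material stays the ground truth.
text = Path("docs/manual.txt").read_text(encoding="utf-8")

# Show each line mentioning a suggested keyword, with its line number,
# so I can jump to the right section and read it myself.
for lineno, line in enumerate(text.splitlines(), start=1):
    lowered = line.lower()
    if any(kw in lowered for kw in keywords):
        print(f"{lineno:5}: {line.strip()}")
```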
I was searching for a certain type of utility, and traditional Internet searches were flooded with shitware that didn’t meet my criteria; the LLM successfully zeroed in on just the perfect GitHub project.
Then, as a reminder to never trust the results, I asked it how to make the tool do a certain thing. It mentioned a command option whose name sounded like the opposite of what I had asked for, and sure enough, not only would it have done the opposite if it had worked, no such option even existed.
Lol. Your advice: learn to read, noob
My work is technically dense and I read all day. It’s sometimes nice when I’m mentally exhausted to see if it’s worth the effort to dig deeper in a 10 second upload. That’s all I’m getting at.
I just got around to watching some of the ads that the big AI companies aired during the Super Bowl. Each time I was thinking “wow, if this is true, this person is an idiot and is in for a world of trouble”.
Like, there was one where a young farmer was supposedly taking over the family farm from her grandfather or something. She said something like “I uploaded all our data to ChatGPT and now I do what it tells me to do.” If that’s the case, wow. That farm is going to fail.
Another one was some guy who ran some kind of machinist’s shop, claiming that the bookkeeping and inventory control the shop used were really old-fashioned. So he had ChatGPT create a whole bunch of new part numbers to make online ordering easier. Again, wow. You’re trusting this key part of your business to a machine that just randomly makes stuff up?
Plausible confabulation machines.
What they can do is generate code, which runs deterministically, and people can then defer to the code’s results. The fact that these people aren’t even doing that just makes them negligent.
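For the part-numbers example, the defensible version would be having the model write a small script, reviewing it once, and then trusting the script’s output rather than the model’s. A minimal sketch, with a numbering scheme invented purely for illustration:

```python
import itertools

# Deterministic part numbers: the same sequence of calls always produces
# the same IDs, and collisions are impossible by construction. Review
# this once instead of trusting numbers an LLM invents on the fly.
CATEGORY_CODES = {"fastener": "FST", "bearing": "BRG", "shaft": "SHF"}
_counters = {code: itertools.count(1) for code in CATEGORY_CODES.values()}

def next_part_number(category: str) -> str:
    """Return e.g. 'BRG-0001' for the first bearing registered."""
    code = CATEGORY_CODES[category]  # fails loudly on unknown categories
    return f"{code}-{next(_counters[code]):04d}"

print(next_part_number("bearing"))  # BRG-0001
print(next_part_number("bearing"))  # BRG-0002
print(next_part_number("shaft"))    # SHF-0001
```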