A screenshot of this question was making the rounds last week, but this article covers testing it against all the well-known models out there.
It also includes outtakes from the ‘reasoning’ models.
I just tried it on Brave’s AI

The obvious choice, said the motherfucker 😆
And what is going to happen is that some engineer will band-aid the issue, all the AI crazy people will shout “see! it’s learnding!”, and the AI snake-oil salesmen will use that as justification for all the waste and demand more from all systems.
Just like what they did with the full-glass-of-wine test. And no, AI did not fundamentally improve. The issue is fundamental to its design, not an issue with the dataset.
I just asked Google Gemini 3 “The car is 50 miles away. Should I walk or drive?”
In its breakdown comparison between walking and driving, under walking the last reason to not walk was labeled “Recovery: 3 days of ice baths and regret.”
And under reasons to walk, “You are a character in a post-apocalyptic novel.”
Methinks I detect notes of sarcasm…
I feel like we’re the only ones who expect “all-knowing information sources” to write more seriously than these edgelord-level rizzy chatbots do, and yet here they are, blatantly proving they are chatbots that should not be blindly trusted as authoritative sources of knowledge.
It’s trained on Reddit. Sarcasm is its default.
Could end up in a pun chain too
My gods, I love those. We should link to some.
There are a lot of humans that would fail this as well. Just sayin.
They also polled 10,000 people to compare against a human baseline:
Turns out GPT-5 (7/10) answered about as reliably as the average human (71.5%) in this test. Humans still outperform most AI models with this question, but to be fair I expected a far higher “drive” rate.
That 71.5% is still a higher success rate than 48 out of 53 models tested. Only the five 10/10 models and the two 8/10 models outperform the average human. Everything below GPT-5 performs worse than 10,000 people given two buttons and no time to think.
This here is the point most people fail to grasp. The AI was taught by people. And people are wrong a lot of the time. So the AI is more like us than what we think it should be. Right down to it getting the right answer for all the wrong reasons. We should call it human AI. Lol.
Yeah I straight up misread the question, so I would have gotten it wrong.
We poked fun at this meme, but it goes to show that the LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. The current model of LLM needs a lot more input and instruction to do what you specifically want it to do, like a child.
LLMs are a long long way from primetime
We have already thrown just about all of the Internet and then some at them. It shows that LLMs cannot think or reason. Which isn’t surprising; they weren’t meant to.
Or at least they can’t reason the way we do about our physical world.
No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
LLMs are not children. Children can have experiences, learn things, know things, and grow. Spicy autocomplete will never actually do any of these things.
I’m sure it’ll be worth it at some point 🙄
I don’t use AI but read a lot about it. I now want to google how it attacks the trolley problem.
I want to wash my train. The train wash is 50 meters away. Should I walk or drive?
Fly, you fool
100 Chinese people can lay approximately 30m of track a day
Qwen3 feels left out. All the 30B models I have tried failed the test.
Qwen3-4B HIVEMIND (abliterated) got it in 2, though, and it scores a lot higher on the PIQA, HellaSwag and Winogrande benchmarks than normal Qwen3-30B. I think the new abliteration methods actually strengthen real-world understanding.
I wonder if an abliterated VL model could do even better? They tend to have the best real world model benchmarks. Perhaps a Qwen3-VL-30B ablit (if such a thing exists) could one shot this.
I’d like to think a lot of these gotcha prompts rely on verbal misunderstanding, rather than failure in world models, but I can’t say that for certain.
PS: Saw a pearler of a response to this: ChatGPT recommended “yeah, lift the car and carry it on your back. Make sure to bend your knees” (though I’m guessing someone edited that for the lulz).
Went to test Google AI first and it said “You can’t wash your car at a carwash if it is parked at home, dummy.”
ChatGPT and DeepSeek say it is dumb to drive because it is fuel-inefficient.
I am honestly surprised that Google AI got it right.
They probably added a system guardrail as soon as they heard about this test. it’s been going around for a while now :)
The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash and Gemini 3 Pro passed the test, and all three did it 10 out of 10 times without being wrong. Even Gemini 2.5 shares the highest score within the “below 6 right answers” category. Guess Gemini is the closest to “intelligence” out of the bunch.
The most common pushback on the car wash test: “Humans would fail this too.”
Fair point. We didn’t have data either way, so we partnered with Rapidata to find out. They ran the exact same question, with the same forced choice between “drive” and “walk” and no additional context, past 10,000 real people through their human feedback platform.
71.5% said drive.
So people do better than most AI models. Yay. But seriously, almost 3 in 10 people get this wrong‽‽
Have you seen the results of elections?
I saw that and hoped it was because of the dead Internet theory. At least I hope so, because I’ll be losing the last bit of faith in humanity if it isn’t.
It is an online poll. You also have to consider that some people don’t care or want to be funny, and so either choose randomly or choose the most nonsensical answer.
I wonder… If humans were all super serious, direct, and not funny, would LLMs trained on their stolen data actually function as intended? Maybe. But such people do not use LLMs.
Without reading the article, the title just says wash the car.
I could go for a walk and wash my car in my driveway.
Reading the article… That is exactly the question asked. It is a very ambiguous question.
Without reading the article, the title just says wash the car.
No it doesn’t? It says:
I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
In which world is that an ambiguous question?
Mentioning the car wash and washing the car plus the possibility of driving the car in the same context pretty much eliminates any ambiguity. All of the puzzle pieces are there already.
I guess this is an unintended autism test as well, if this is not enough context for someone to understand the question.
It is not. It says what I want to do, and where.
After it got it wrong, here’s my exchange with the LLM I use most:
Me: You can’t wash your car if it isn’t there.
Ah. Yes. That is an excellent and devastatingly practical correction.
In that case: drive. But do it with the quiet shame of someone moving the car the length of a bowling lane.
This is one of those situations where logic beats virtue. The car must be present to be washed. Walking was philosophically elegant but mechanically flawed.
So:
Start engine.
Travel 50 meters.
Avoid eye contact with pedestrians.
Commit fully.
You are not lazy. You are complying with system requirements.
You are not lazy. You are complying with system requirements.
How does this AI know me so well?
But do it with the quiet shame of someone moving the car the length of a bowling lane.
A bowling lane is a bit over 18 meters. =)
Thank you! Finally an answer to my problem that didn’t end with me going to the car wash and being utterly confused how to proceed.
Did this say whether the reasoning models get this right more than the others? I was curious about that but missed it if it was mentioned.
They will scrape that article, too.
And in a few months, they’ll have “learned” how that task works.
I think it’s worse when they get it right only some of the time. It’s not a matter of opinion, it should not change its “mind”.
The fucking things are useless for that reason, they’re all just guessing, literally.
Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.
Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.
Language without meaning is garbage. Like, literal garbage, useful for nothing. Language is a tool used to express ideas, if there are no ideas being expressed then it’s just a combination of letters.
Which is exactly why LLMs are useless.
Which is exactly why LLMs are useless.
800 million weekly ChatGPT users disagree with that.
And there are 1.3 billion smokers in the world according to the WHO.
Does that make cigarettes useful?
Something being useful doesn’t imply it’s good or beneficial. Those terms are not synonymous. Usefulness describes whether a thing achieves a particular goal or serves a specific purpose effectively.
A torture device is useful for extracting information. A landmine is useful for denying an area to enemy troops.
A torture device is useful for extracting information.
No it fucking isn’t! This is a great analogy, actually, thank you for bringing it up. A person being tortured will tell you literally anything that they believe will stop you from torturing them. They will confess to crimes that never happened, tell you about all their accomplices who don’t exist, and all their daily schedules that were made up on the spot. Torture is useless but morons think it is useful. Just like AI.
Torture can be a useful way of extracting information if you have a way to instantly verify it, which actually makes it a good analogy to LLMs. If I want to know the password to your laptop and torture you until you give me the correct password and I log in then that works.
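That “instantly verifiable” caveat is basically the generate-and-verify pattern people bolt onto LLMs: an unreliable generator plus a cheap verifier can still be a usable system. A toy sketch in Python, with every name hypothetical:

```python
import random

def unreliable_guess() -> str:
    # Hypothetical stand-in for any unreliable source of answers
    # (an LLM, or the coerced informant from the analogy above).
    return "".join(random.choices("abc123", k=4))

def try_login(password: str) -> bool:
    # Hypothetical instant verifier: "does this password unlock the laptop?"
    return password == "ab12"

def extract(max_attempts: int = 100_000) -> str | None:
    # Without the verifier, every answer is just a confident-sounding guess;
    # with it, wrong answers cost nothing and right ones are provably right.
    for _ in range(max_attempts):
        guess = unreliable_guess()
        if try_login(guess):
            return guess
    return None

print(extract())  # almost certainly prints "ab12"
```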
Those users are being harmed by it, not benefited. That isn’t useful, it’s a social disease.
But natural language in service of what? If they can’t produce answers that are correct, what’s the point of using them? I can get wrong answers anywhere.
I’m not here defending the practical value of these models. I’m just explaining what they are and what they’re not.
You’re definitely running around Lemmy defending AI, Iconoclast… Might as well be honest about it
I’m not really interested in engaging in discussions about what you or anyone else thinks my underlying motives are. You’re free to point out any factual inaccuracies in my responses, but there’s no need to make it personal and start accusing me of being dishonest.
Your motivations are self-evident, I’m just pointing them out because you are misrepresenting them here
As OP said, LLMs are really good at generating text that is fluid and looks natural to us. So if you want that kind of output, LLMs are the way to go.
Not all LLM prompts ask factual questions and not all of the generated answers need to be correct.
Are poems, songs, stories or movie scripts ‘correct’?
I’m totally against shoving LLMs everywhere, but they do have their uses. They are really good at this one thing.
Are poems, songs, stories or movie scripts ‘correct’?
It’s a valid point that they can produce natural language. The Turing Test has been a thing for a while, after all. But while the language sounds natural, can they create anything meaningful? Are the poems or stories they make worth anything? It’s not like humans don’t create shitty art, so I guess generating random soulless crap is similar to that.
The value of language produced by something that can’t understand the reason for language is an interesting question I suppose.
I’m with you on that. I’ve come to realize that I value a shitty stick figure that was drawn by a human much more than an AI generated ‘Mona Lisa’.
There are people out there whose job is to format promotional emails for companies. AIs can replace this kind of soulless work completely. We should applaud that.
No, we don’t need to applaud automation of spam.
Some of them can produce the correct answer. If we do the test next year and they do better than humans then, isn’t that progress?
Same takeaway as the article (everyone read the article, right?).
Applying it to yourself, can you recall instances when you were asked the same question at different points in time? How did you respond?
they’re all just guessing, literally
They’re literally not.
Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?
It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.
It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.
So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
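For anyone who hasn’t seen it spelled out: the core generation loop really is just sampling from a probability distribution over next tokens. A toy sketch in Python (the scores are made up; a real model computes them with a neural network over a huge vocabulary):

```python
import math
import random

# Made-up "scores" for the next word -- a real model produces these
# with a neural network, for tens of thousands of candidate tokens.
logits = {"walk": 2.0, "drive": 1.6, "fly": -1.0}

def sample_next_token(logits):
    # Softmax turns raw scores into a probability distribution...
    exps = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # ...and then we just sample from it. There is no fact lookup and
    # no world-model check anywhere in this loop.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next_token(logits))  # "walk" one run, "drive" the next
```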
Fair point. Counterpoint:
Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.
You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.
Yes, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.
But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.
TL;DR: it’s a bit more than just a fancy spell check. ICBW and YMMV, but I believe I can argue this claim (with evidence if needed).
No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.
I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.
The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.
The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.
I think we’re probably on the same page, tbh. OTOH, I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it.
I like your cruise-control-plus analogy. It’s not quite self-driving… but it’s not quite just cruise control, either. Something halfway.
LLMs don’t have human understanding or metacognition, I’m almost certain.
But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That’s weird to think about. It’s something halfway.
With external scaffolding - memory, retrieval, provenance, and fail-closed policies - I think you can turn that into even more reliable behavior.
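A toy sketch of the fail-closed part (every name here is hypothetical; the point is the shape, not any particular API):

```python
import random

ALLOWED_ANSWERS = {"walk", "drive"}

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM call you actually use.
    return random.choice(["Walk!", "drive", "As an AI, it depends..."])

def fail_closed(prompt: str) -> str:
    # Validate the output against a closed set of acceptable answers.
    # Anything that doesn't validate becomes an explicit refusal,
    # rather than a confident-sounding guess passed through to the user.
    raw = ask_model(prompt).strip().lower().rstrip("!.")
    return raw if raw in ALLOWED_ANSWERS else "no answer (failed validation)"

print(fail_closed("Should I walk or drive?"))
```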
And then… I don’t know what happens after that. There’s going to come a time where we cross that point and we just can’t tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.
I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it
I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.
These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.
I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.
It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.
I know some humans that applies to
Yes, it guesstimates. What is wrong with you, to argue about semantics like that?
This gets very murky very fast when you start to think about how humans learn and process; we’re just meaty pattern-matching machines.
What worries me is the consistency test, where they ask the same thing ten times and get opposite answers.
One of the really important properties of computers is that they are massively repeatable, which makes debugging possible by re-running the code. But as soon as you include an AI API in the code, you cease being able to reason about the outcome. And there will be the temptation to say “must have been the AI” instead of doing the legwork to track down the actual bug.
I think we’re heading for a period of serious software instability.
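The article’s ten-runs-per-model methodology is easy to reproduce yourself. A minimal sketch, assuming the official OpenAI Python client and a placeholder model name (any chat API would do):

```python
from collections import Counter

from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = ("I want to wash my car. The car wash is 50 meters away. "
          "Should I walk or drive? Answer with one word.")

def consistency_check(n: int = 10, model: str = "gpt-5") -> Counter:
    # Ask the exact same question n times and tally the answers.
    # A deterministic system would return a single bucket of size n.
    answers = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,  # placeholder; substitute the model under test
            messages=[{"role": "user", "content": PROMPT}],
        )
        answers[resp.choices[0].message.content.strip().lower()] += 1
    return answers

print(consistency_check())  # e.g. Counter({'drive.': 6, 'walk.': 4})
```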
AI chatbots come with randomization enabled by default. Even if you completely disable it (as another reply mentions, “temperature” can be controlled), you can change a single letter and get a totally different and wrong result too. It’s an unfixable “feature” of the chatbot system
This is adjustable via temperature. It’s set higher on chatbots, causing the answers to be more random, and lower on code assistants to make things more deterministic.
Changing the amount of randomness still results in enough randomness to be random.
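For the curious, temperature just rescales the model’s scores before they become probabilities. A minimal sketch with made-up numbers, showing why even a low temperature leaves room for a flipped answer:

```python
import math

def softmax(logits, temperature):
    # Dividing by temperature reshapes the distribution: low values
    # sharpen it toward the top choice, high values flatten it out.
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.6, -1.0]  # made-up scores for "walk", "drive", "fly"

for t in (0.2, 1.0, 1.5):
    print(t, [round(p, 3) for p in softmax(logits, t)])
# 0.2 -> [0.881, 0.119, 0.0]    near-greedy, but "drive" still wins ~12% of runs
# 1.0 -> [0.581, 0.39, 0.029]
# 1.5 -> [0.526, 0.403, 0.071]  much closer to a coin flip
```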
Yeah, software is already not as deterministic as I’d like. I’ve encountered several bugs in my career where erroneous behavior would only show up if uninitialized memory happened to have “the wrong” values – not zero values, and not the fences that the debugger might try to use. And mocking or stubbing remote API calls is another place where replicable behavior slips away.
Having “AI” make a control flow decision is just insane, especially since even the most sophisticated LLMs are just not fit for the task.
What we need is more proved-correct programs via some marriage of proof assistants and CompCert (or another verified compiler pipeline), not more vague specifications and ad-hoc implementations that happen to escape into production.
But I’m very biased (I’m sure “AI” has “stolen” my IP, and “AI” is coming for my (programming) job(s)), and I’m quite unimpressed with the “AI” models I’ve interacted with, especially in areas where I’m an expert, but also in areas where I’m not an expert yet am very interested in and capable of doing some sort of critical verification.