I tested 9 flagship models (Claude 4.6, GPT-5.2, Gemini 3.1 Pro, Kimi K2.5, etc.) in my own mini-benchmark: the tasks were novel and web search was disabled, so training contamination and cheating were effectively ruled out.
TL;DR: Claude 4.6 is currently the best reasoning model, GPT-5.2 is overrated, and open-source is catching up fast; in particular, Moonshot.ai’s Kimi K2.5 seems very capable.



In my opinion the proper solution is to ask for the constraints. As with the “walk or drive to the car wash” problem, LLMs still tend to get confused by a familiar format and don’t notice that the problem doesn’t make sense. You can play around with different examples to see how absurd the problem has to get for an LLM to refuse to answer, and what biases or constraints it has. Even when they assume some constraints, they fail to solve this puzzle surprisingly often (as I showed for Sonnet 4.6 in another comment).