Advanced AI models appear willing to deploy nuclear weapons without the reservations humans show when placed into simulated geopolitical crises.
Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.



How do people still not get what a Large Language Model is?? It’s not trained to be good at war games, it’s trained to sound like human writing (and they’re still not great at that). Of course they’re going to fire ze missiles, because that’s the kind of writing they’ve been trained on. How many Leeroy Jenkins DnD campaigns were included when they indiscriminately scraped the whole internet for content? What a joke.
LLMs are being promoted as able to do anything, so they’re just being treated as advertised.
I can’t imagine military high command would just take any technology at its word. There are extensive procedures for testing things before they see any kind of deployment.
That is why it is being tested…
fair point.
You need a better imagination.
https://www.bbc.co.uk/news/articles/cjrq1vwe73po
They want to take humans out of the decision making process.
The folks in charge really need to stop trying to build the torment nexus, don’t they? Hello, Skynet!
The whole deal was hype and overselling, and to avoid losing the money, the hype train has to keep going! So there will always be a next ‘innovation’ to keep it rolling.