There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N=76,977), we deployed 19 LLMs—including some post-trained explicitly for persuasion—to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods—which boosted persuasiveness by as much as 51% and 27% respectively—than from personalization or increasing model scale. We further show that these methods increased persuasion by exploiting LLMs’ unique ability to rapidly access and strategically deploy information and that, strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy.
Terrifying
We’ll probably have to give up text-based forums over this
Why would AI stay in text format? Fake videos already live on Instagram and Twitter. YouTube is full of fake AI voices reading AI scripts.
Best option is to quit being online, go outside, and meet neighbors.
For real. But it’ll be a while before they can spin up fake videos like in Running Man fast enough to hoodwink anyone
I skimmed through the PDF and didn’t find more info about the 700+ supposed “political” issues, other than that they relate to the UK somehow.
What could those issues be? Like, try to enumerate all the supposedly “political” issues in your head: you’ll get to stuff like “compassionate death” (a still-debated topic that gets a decent amount of news coverage in the UK), and you’d still be nowhere near 100.
I think using a larger number in the hope of a larger impact may have backfired 😉, or maybe no one clocked the bullshit.
The proposition that the modern nation state’s humanoid population is so fundamentally divided and so extremely varied in epistemological thought is itself a hilarious one to begin with, of course.
The issues are listed in Supplementary Table S141 (p. 75 in the SI; 10 issues) and in https://github.com/kobihackenburg/scaling-conversational-AI/blob/main/issue_stances.csv (697 issues)
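For anyone who wants to check the count themselves, here is a minimal Python sketch that downloads that CSV and counts the rows. The raw.githubusercontent.com path is inferred from the blob URL above, and the file is assumed to be a standard CSV with a header row and one stance per row; neither assumption is confirmed here.

```python
# Minimal sketch: count the issue stances in the paper's GitHub repo CSV.
# Assumptions: the raw URL below mirrors the blob URL given in the comment,
# and the file has a header row followed by one stance per row.
import csv
import urllib.request

URL = ("https://raw.githubusercontent.com/kobihackenburg/"
       "scaling-conversational-AI/main/issue_stances.csv")

with urllib.request.urlopen(URL) as resp:
    rows = list(csv.reader(resp.read().decode("utf-8").splitlines()))

# Subtract 1 for the assumed header row; expect 697, which together with
# the 10 issues in Supplementary Table S141 gives the 707 in the abstract.
print(f"{len(rows) - 1} issue stances")
```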