Are you feeling it?
I hear it's close: two years, five years, maybe next year! And I hear it's going to change everything: it will cure disease, save the planet, and usher in an age of abundance. It will solve our biggest problems in ways we cannot yet imagine. It will redefine what it means to be human.
Wait: what if that's all too good to be true? Because I also hear it will bring on the apocalypse and kill us all ...
Either way, and whatever your timeline, something big is about to happen.
We could be talking about the Second Coming. Or the day when Heaven's Gaters imagined they'd be picked up by a UFO and transformed into enlightened aliens. Or the moment when Donald Trump finally decides to deliver the storm that Q promised. But no. We're of course talking about artificial general intelligence, or AGI: that hypothetical near-future technology that (I hear) will be able to do pretty much whatever a human brain can do.
This story is part of MIT Technology Review's series "The New Conspiracy Age," on how the present boom in conspiracy theories is reshaping science and technology.
For many, AGI is more than just a technology. In tech hubs like Silicon Valley, it's talked about in mystical terms. Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of "Feel the AGI!" at team meetings. And he feels it more than most: In 2024, he left OpenAI, whose stated mission is to ensure that AGI benefits all of humanity, to cofound Safe Superintelligence, a startup dedicated to figuring out how to avoid a so-called rogue AGI (or control it when it comes). Superintelligence is the hot new flavor (AGI but better!), introduced as talk of AGI becomes commonplace.
Sutskever also exemplifies the mixed-up motivations at play among many self-anointed AGI evangelists. He has spent his career building the foundations for a future technology that he now finds terrifying. "It's going to be monumental, earth-shattering; there will be a before and an after," he told me a few months before he quit OpenAI. When I asked him why he had redirected his efforts into reining that technology in, he said: "I'm doing it for my own self-interest. It's obviously important that any superintelligence anyone builds does not go rogue. Obviously."
He's far from alone in his grandiose, even apocalyptic, thinking.
Every age has its believers, people with an unshakeable faith that something huge is about to happen: a before and an after that they are privileged (or doomed) to live through.
For us, that's the promised advent of AGI. People are used to hearing that this or that is the next big thing, says Shannon Vallor, who studies the ethics of technology at the University of Edinburgh. "It used to be the computer age and then it was the internet age and now it's the AI age," she says. "It's normal to have something presented to you and be told that this thing is the future. What's different, of course, is that in contrast to computers and the internet, AGI doesn't exist."
And that's why feeling the AGI is not the same as boosting the next big thing. There's something weirder going on. Here's what I think: AGI is a lot like a conspiracy theory, and it may be the most consequential one of our time.
I have been reporting on artificial intelligence for more than a decade, and I've watched the idea of AGI bubble up from the backwaters to become the dominant narrative shaping an entire industry. A onetime pipe dream now props up the profit lines of some of the world's most valuable companies and thus, you could argue, the US stock market. It justifies dizzying down payments on the new power plants and data centers that we're told are needed to make the dream come true. Fixated on this hypothetical technology, AI firms are selling us hard.
Just listen to what the heads of some of those companies are telling us. AGI will be as smart as an entire "country of geniuses" (Dario Amodei, CEO of Anthropic); it will kick-start "an era of maximum human flourishing, where we travel to the stars and colonize the galaxy" (Demis Hassabis, CEO of Google DeepMind); it will "massively increase abundance and prosperity," even encourage people to enjoy life more and have more children (Sam Altman, CEO of OpenAI). That's some product.
Or not. Don't forget the flip side, of course. When those people are not shilling for utopia, they're saving us from hell. In 2023, Amodei, Hassabis, and Altman all put their names to a 22-word statement that read: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Elon Musk says AI has a 20% chance of annihilating humans.
"I've noticed recently that superintelligence, which I thought was a concept you definitely shouldn't mention if you want to be taken seriously in public, is being thrown around by tech CEOs who are apparently planning to build it," says Katja Grace, lead researcher at AI Impacts, an organization that surveys AI researchers about their field. "I think it's easy to feel like this is fine. They also say it's going to kill us, but they're laughing while they say it."
You have to admit it all sounds a bit tinfoil hat. If you're building a conspiracy theory, you need a few things in the mix: a scheme that's flexible enough to sustain belief even when things don't work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world.
AGI just about checks all those boxes. The more you poke at the idea, the more it starts to look like a conspiracy. It's not, of course. Not exactly. And I'm not drawing this parallel to dismiss the very real, often jaw-dropping results achieved by many people in this field, including (or especially) the AGI believers.
But by zooming in on things that AGI has in common with genuine conspiracies, I think we can bring the whole concept into better focus and reveal it for what it is: a techno-utopian (or techno-dystopian, pick your pill) fever dream that got its hooks into some pretty deep-seated beliefs that have made it hard to shake.
This isn't just a provocative thought experiment. It's important to question what we're told about AGI because buying into the idea isn't harmless. Right now, AGI is the most important narrative in tech and, to some extent, in the global economy. We can't make sense of what's going on in AI without understanding where the idea of AGI came from, why it is so compelling, and how it shapes the way we think about technology overall.
I get it, I get it: calling AGI a conspiracy isn't a perfect analogy. It will also piss a lot of people off. But come with me down this rabbit hole and let me show you the light.
How Silicon Valley got AGI-pilled
It had a ring to it
A typical conspiracy theory usually starts out on the fringes. Maybe it's just a couple of people posting on a message board, gathering "evidence." Maybe it's a few people out in the desert with binoculars waiting to spot some bright lights in the sky. But some conspiracy theories get lucky, if you will: They start to percolate more widely; they start to become a bit more acceptable; they start to influence people in power. Maybe it's the UFOs (ahem, sorry, "unidentified aerial phenomena") that are now formally and openly discussed in government hearings. Maybe it's vaccine skepticism (yes, a much more dangerous example) that becomes official policy. And it's impossible to ignore that artificial general intelligence has followed a pretty similar trajectory to its more overtly conspiratorial brethren.
Let's go back to 2007, when AI wasn't sexy and it wasn't cool. Companies like Amazon and Netflix (which was still sending out DVDs in the mail) were using machine-learning models, proto-organisms to today's LLM behemoths, to recommend movies and books to customers. But that was more or less it.
Ben Goertzel had far bigger plans. About a decade earlier, the AI researcher had set up a dot-com startup called Webmind to train what he thought of as a kind of digital baby brain on the early internet. Childless, Webmind soon went bust.
But Goertzel was an influential figure in a fringe community of researchers who had dreamed for years of building humanlike artificial intelligence, an all-purpose computer program that could do many of the things people can do (and do them better). It was a vision that went far beyond the kind of tech that Netflix was experimenting with.
Goertzel wanted to put out a book promoting that vision, and he needed a name that would set it apart from the humdrum AI of the time. A former Webmind employee named Shane Legg suggested Artificial General Intelligence. It had a ring to it.
A few years later, Legg cofounded DeepMind with Demis Hassabis and Mustafa Suleyman. But to most serious researchers at the time, the claim that AI would one day mimic human abilities was a bit of a joke. AGI used to be a dirty word, Sutskever told me. Andrew Ng, founder of Google Brain and former chief scientist at the Chinese tech giant Baidu, told me he thought it was loony.
So what happened? I caught up with Goertzel last month to ask how a fringe idea went from crackpot to commonplace. "I'm sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was," he said. (Translation: It's complicated.)
Goertzel reckons a few things took the idea mainstream. The first is the Conference on Artificial General Intelligence, an annual meeting of researchers that he helped set up in 2008, the year after his book was published. The conference was often coordinated with top mainstream academic meetups, such as the Association for the Advancement of Artificial Intelligence conference and the International Joint Conference on Artificial Intelligence. "If I just published a book with that name AGI, it possibly would have just come and gone," says Goertzel. "But the conference was circling through every year, with more and more students coming."
Next is Legg, who took the term with him to DeepMind. "I think they were the first mainstream corporate entity to talk about AGI," says Goertzel. "It wasn't the main thing they were harping on, but Shane and Demis would talk about it now and then. That was certainly a source of legitimation."
When I first talked to Legg about AGI five years ago, he said: "Talking about AGI in the early 2000s put you on the lunatic fringe ... Even when we started DeepMind in 2010, we got an astonishing amount of eye-rolling at conferences." But by 2020 the wind had changed. "Some people are uncomfortable with it, but it's coming in from the cold," he told me.
The third thing Goertzel points to is the overlap between early AGI evangelists and Big Tech power brokers. In the years between shutting down Webmind and publishing that AGI book, Goertzel did some work with Peter Thiel at Thiel's hedge fund Clarium Capital. "We talked a bunch," says Goertzel. He recalls spending a day with Thiel at the Four Seasons in San Francisco. "I was trying to drum AGI into his head," says Goertzel. "But then he was also hearing from Eliezer how AGI is going to kill everybody."
Enter the doomers
That's Eliezer Yudkowsky, another influential figure who has done at least as much as Goertzel, if not more, to push the idea of AGI. But unlike Goertzel, Yudkowsky thinks there's a very high chance (99.5% is one number he throws out) that the development of AGI will be a catastrophe.
In 2000, Yudkowsky cofounded a nonprofit research outfit called the Singularity Institute for Artificial Intelligence (later renamed the Machine Intelligence Research Institute), which pretty quickly dedicated itself to preventing doomer scenarios. Thiel was an early benefactor.
At first, Yudkowsky's ideas didn't get much pickup. Recall that back then the idea of an all-powerful AI, let alone a dangerous one, was pure sci-fi. But in 2014, Nick Bostrom, a philosopher at the University of Oxford, published a book called Superintelligence.
"It put the AGI thing out there," says Goertzel. "I mean, Bill Gates, Elon Musk, lots of tech-industry AI people read that book, and whether or not they agreed with his doomer perspective, Nick took Eliezer's concepts and wrapped them up in a very acceptable way."
"All of these things gave AGI a stamp of acceptability," Goertzel adds. "Rather than it being pure crackpot stuff from mavericks howling out in the wilderness."
Yudkowsky has been banging the same drum for 25 years; many engineers at today's top AI companies grew up reading and discussing his views online, especially on LessWrong, a popular hub for the tech industry's fervent community of rationalists and effective altruists.
Today, those views are more popular than ever, capturing the imagination of a younger generation of doomers like David Krueger, a researcher at the University of Montreal who previously served as research director at the UK's AI Security Institute. "I think we are definitely on track to build superhuman AI systems that will kill everybody," Krueger tells me. "And I think that's horrible and we should stop immediately."
Yudkowsky gets profiled by the likes of the New York Times, which bills him as "Silicon Valley's version of a doomsday preacher." His new book, If Anyone Builds It, Everyone Dies, written with Nate Soares, president of the Machine Intelligence Research Institute, lays out wild claims, with little evidence, that unless we pull the plug on development, near-future AGI will lead to global Armageddon. The pair's position is extreme: They argue that an international ban should be enforced at all costs, up to and including the point of nuclear retaliation. After all, "datacenters can kill more people than nuclear weapons," Yudkowsky and Soares write.
This stuff is no longer niche. The book is an NYT bestseller and comes with endorsements from national security experts such as Suzanne Spaulding, a former US Department of Homeland Security official, and Fiona Hill, former senior director of the White House National Security Council, who now advises the UK government; celebrity scientists such as Max Tegmark and George Church; and other household names, including Stephen Fry, Mark Ruffalo, and Grimes. Yudkowsky now has a megaphone.
Still, it is those early quiet words in certain ears that may prove most consequential. Yudkowsky is credited with introducing Thiel to DeepMind's founders, after which Thiel became one of the first big investors in the company. Since being acquired by Google, DeepMind has become the in-house AI lab for the tech colossus Alphabet.
Alongside Musk, Thiel was also instrumental in setting up OpenAI in 2015, sinking millions into a startup founded on the singular ambition to build AGI and make it safe. In 2023, OpenAI CEO Sam Altman posted on X: "eliezer has IMO done more to accelerate AGI than anyone else. certainly he got many of us interested in AGI." Yudkowsky might one day deserve the Nobel Peace Prize for that, Altman added. But by this point, Thiel had apparently grown wary of the "AI safety people" and the power they were gaining. "You don't understand how Eliezer has programmed half the people in your company to believe in that stuff," he is reported to have told Altman at a dinner party in late 2023. "You need to take this more seriously." Altman "tried not to roll his eyes," according to Wall Street Journal reporter Keach Hagey.
OpenAI is now the most valuable private company in the world, worth half a trillion dollars.
And the transformation is complete: Like all the most powerful conspiracies, AGI has slipped into the mainstream and taken hold.
The great AGI conspiracy
The term "AGI" may have been popularized less than 20 years ago, but the mythmaking behind it has been there since the start of the computer age: a cosmic microwave background of chutzpah and marketing.
Alan Turing asked if machines could think only five years after the first electronic computer, ENIAC, was built in 1945. And here's Turing a little later, in a 1951 radio broadcast: "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control."
Then, in 1955, the computer scientist John McCarthy and his colleagues applied for US government funding to create what they fatefully chose to call "artificial intelligence": a canny spin, given that computers at the time were the size of a room and as dumb as a thermostat. Even so, as McCarthy wrote in that funding application: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
It's this myth that's the root of the AGI conspiracy. A smarter-than-human machine that can do it all is not a technology. It's a dream, unmoored from reality. Once you see that, other parallels with conspiracy thinking start to leap out. It's impossible to debunk a shape-shifting idea like AGI.
Talking about AGI can sometimes feel like arguing with an enthusiastic Redditor about what drugs (or particles in the sky) are controlling your mind. Each point has a counterpoint that tries to chip away at your own sense of what's true. Ultimately, it's a clash of worldviews, not an exchange of evidence-based reason. AGI is like that, too: it's slippery.
Part of the issue is that despite all the money, all the talk, nobody knows how to build it. More than that: Most people don't even agree on what AGI really is, which helps explain how people can get away with telling us it can both save the world and end it. At the core of most definitions you'll find the idea of a machine that can match humans on a wide range of cognitive tasks. (And remember, superintelligence is AGI's shiny new upgrade: a machine that can outmatch us.) But even that's easy to pull apart: What humans are we talking about? What kind of cognitive task? And how wide a range?
"There's no real definition of it," says Christopher Symons, chief artificial intelligence scientist at the AI health-care startup Lirio and former head of the computer science and math division at Oak Ridge National Laboratory. "If you say 'human-level intelligence,' that could be an infinite number of things; everybody's level of intelligence is slightly different."
And so, says Symons, we're in this weird race to build ... what, exactly? "What are you trying to get it to do?"
In 2023, a team of researchers at Google DeepMind, including Legg, had a go at categorizing various definitions that people had proposed for AGI. Some said that a machine had to be able to learn; some said that it had to be able to make money; some said that it had to have a body and move about in the world (and maybe make coffee).
Legg told me that when he'd suggested the term to Goertzel for the title of his book, the hand-waviness had been kind of the point. "I didn't have an especially clear definition. I didn't really feel it was necessary," he said at the time. "I was actually thinking of it more as a field of study, rather than an artifact."
So, I guess we'll know it when we see it? The problem is that some people think they've seen it already.
In 2023, a team of Microsoft researchers put out a paper in which they described their experiences playing around with a prerelease version of OpenAI's large language model GPT-4. They called it "Sparks of Artificial General Intelligence," and it polarized the industry.
It was a moment when a lot of researchers were blown away and trying to come to terms with what they were seeing. "Shit was working better than they had expected it to," says Goertzel. "The concept of AGI genuinely started to seem more plausible."
And yet for all of LLMs' remarkable wordplay, Goertzel doesn't think that they do in fact contain sparks of AGI. "It's a little surprising to me that some people with a deep technical understanding of how these tools work under the hood still think that they could become human-level AGI," he says. "On the other hand, you can't prove it's not true."
And there it is: You can't prove it's not true. "The idea that AGI is coming and that it's right around the corner and that it's inevitable has licensed a great many departures from reality," says the University of Edinburgh's Vallor. "But we really don't have any evidence for it."
Conspiracy thinking looms again. Predictions about when AGI will arrive are made with the precision of numerologists counting down to the end of days. With no real stakes in the game, deadlines come and go with a shrug. Excuses are made and timelines are adjusted yet again.
We saw this when OpenAI released the much-hyped GPT-5 this summer. AI stans were disappointed that the new version of the company's flagship technology wasn't the step change they expected. But instead of seeing that as evidence that AGI wasn't attainable (or attainable with an LLM, at least), believers pushed out their predictions for how soon AGI would come. It was coming. Just, you know, next time.
Maybe they're right. Or maybe people will pick whatever evidence they can to defend an idea and overlook evidence that counts against it. Jeremy Cohen, who studies conspiracy thinking in technology circles at McMaster University in Canada, calls this imperfect evidence gathering, a hallmark of conspiracy thinking.
Cohen started his research career in the Arizona desert, studying a community called People Unlimited that believed its members were immortal. The conviction was impervious to contrary evidence. When its members died of natural causes (including two of its founders), the thinking was that they must have deserved it. "The general consensus was that every death was a suicide," says Cohen. "If you are immortal and you get cancer and you die, well, you must have done something wrong."
Cohen has since been focused on transhumanism (the idea that technology can help humans push past their natural limitations) and AGI. "I am seeing a lot of parallels. There are forms of magical thinking that I think is a part of the popular imagination around AGI," he says. "It connects really well to the kinds of religious imaginaries that you see in conspiracy thinking today."
The believers are in on the AGI secret.
Maybe some of you think I'm an idiot: You don't get it at all lol. But that's kind of my point. There are insiders and outsiders. When I talk to researchers or engineers who are happy to drop AGI into the conversation as a given, it's like they know something I don't. But nobody's ever been able to tell me what that something is.
The truth is out there, if you know where to look.
Conspiracy theories are primarily concerned with revealing a hidden truth, Cohen tells me: "It's a really fundamental part of conspiracy thinking, and that's absolutely something that you see in the way people talk about AGI."
Last year, a 23-year-old former OpenAI staffer turned investor, Leopold Aschenbrenner, published a much-dissected 165-page manifesto titled "Situational Awareness." You don't need to read it to get the idea: You either see the truth of what's coming or you don't. And you don't need cold, hard facts, either; it's enough to feel it. Those who don't just haven't seen the light.
This idea stalked the periphery of my conversation with Goertzel, too. When I pushed him on why people are skeptical of AGI, for instance, he said: "Before every major technical achievement, from human flight to electrical power, loads of wise pundits would tell you why it was never going to happen. The fact is, most people only believe what they see in front of their faces."
That makes AGI sound like an article of faith. I put that to Krueger, who believes AGI's arrival is maybe five years out. He scoffed: "I think that's completely backwards." For him, the article of faith is the idea that it won't happen; it's the skeptics who continue to deny the obvious. (Even so, he hedges: No one knows for sure, he says, but there's no obvious reason that AGI won't come.)
Hidden truths bring truth seekers, bent on revealing what they've been able to see all along. With AGI, though, it's not enough to uncover something hidden. Here, revelation requires an unprecedented act of creation. If you believe AGI is achievable, then you believe that those making it are midwives to machines that will match or surpass human intelligence. "The idea of giving birth to machine gods is obviously very flattering to the ego," says Vallor. "It's an incredibly seductive thing to think that you yourself are laying the early foundations for that transcendence."
It's yet another overlap with conspiracy thinking. Part of the draw is the desire for a sense of purpose in an otherwise messy world that can feel meaningless: the longing to be a person of consequence.
Krueger, who is based in Berkeley, says he knows people working on AI who see the technology as our natural successor. "They view it as akin to having children or something," he says. "Side note: they usually don't have children."
AGI will be our one true savior (or it'll bring the apocalypse).
Cohen sees parallels between many modern conspiracy theories and the New Age movement, which reached its peak of influence in the 1970s and '80s. Adherents believed humanity was on the cusp of unlocking an era of spiritual well-being and expanded consciousness that would usher in a more peaceful and prosperous world. In a nutshell, the idea was that by engaging in a set of pseudo-religious practices, including astrology and the careful curation of crystals, humans would transcend their limitations and enter a kind of hippie utopia.
Today's tech industry is built on compute, not crystals, but its sense of what's at stake is no less transcendent: "You know, this idea that there is going to be this fundamental shift, there's going to be this millenarian turn where we end up in a techno-utopian future," says Cohen. "And the idea that AGI is going to ultimately allow humanity to overcome the problems that face us."
In many people's telling, AGI will arrive all at once. Incremental advances in AI will stack up until, one day, AI will be good enough to start making better AI by itself. At which point (FOOM!) it will advance so rapidly that AGI will arrive in what's often called an intelligence explosion, leading to a point of no return known as the Singularity, a goofy term that's been popular in AGI circles for years. Co-opting a concept from physics, the science fiction author Vernor Vinge first introduced the idea of a technological singularity in the 1980s. Vinge imagined an event horizon on the path of technological progress beyond which humans would be fast outstripped by the exponential self-improvement of the machines they had created.
Call it the AI Big Bang, which, again, gives us a before and an after, a transcendent moment when humanity as we know it changes forever (for good or bad). "People imagine it as an event," says Grace from AI Impacts.
For Vallor, this belief system is notable for the way that a faith in technology has replaced a faith in humans. Despite the woo-woo, New Age thinking was at least motivated by the idea that people had what it took to change the world by themselves, if they could only tap into it. With the pursuit of AGI, we've left that self-belief behind and bought into the idea that only technology can save us, she says.
That's a compelling, even comforting, thought for many people. "We're in an era where other paths to material improvement of human lives and our societies seem to have been exhausted," Vallor says.
Technology once promised a route to a better future: Progress was a ladder that we would climb toward human and social flourishing. "We've passed the peak of that," says Vallor. "I think the one thing that gives many people hope and a return to that kind of optimism about the future is AGI."
Push this idea to its conclusion and, again, AGI becomes a kind of god, one that can offer relief from earthly suffering, says Vallor.
Kelly Joyce, a sociologist at the University of North Carolina who studies how cultural, political, and economic beliefs shape the way we think about and use technology, sees all these wild predictions about AGI as something more banal: part of a long-term pattern of overpromising from the tech industry. "What's interesting to me is that we get sucked in every time," she says. "There is a deep belief that technology is better than human beings."
Joyce thinks that's why, when the hype kicks in, people are predisposed to believe it. "It's a religion," she says. "We believe in technology. Technology is God. It's really hard to push back against it. People don't want to hear it."
How AGI hijacked an industry
The fantasy of computers that can do almost anything a person can is seductive. But like many pervasive conspiracy theories, it has very real consequences. It has distorted the way we think about the stakes behind the current technology boom (and potential bust). It may have even derailed the industry, sucking resources away from more immediate, more practical applications of the technology. More than anything else, it gives us a free pass to be lazy. It fools us into thinking we might be able to avoid the actual hard work needed to solve intractable, world-spanning problems: problems that will require international cooperation and compromise and expensive aid. Why bother with that when we'll soon have machines to figure it all out for us?
Consider the resources being sunk into this grand project. Just last month, OpenAI and Nvidia announced an up-to-$100 billion partnership that would see the chip giant supply at least 10 gigawatts' worth of systems to feed ChatGPT's insatiable demand. That's more power than most nuclear plants put out; run for a single second, 10 gigawatts delivers about as much energy as a large lightning bolt. The flux capacitor inside Dr. Emmett Brown's DeLorean time machine required only 1.21 gigawatts to send Marty back to the future. And then, only two weeks later, OpenAI announced a second partnership with chipmaker AMD for another six gigawatts of power.
Promoting the Nvidia deal on CNBC, Altman, straight-faced, claimed that without this kind of data center buildout, people would have to choose between a cure for cancer and free education. "No one wants to make that choice," he said. (Just a few weeks later, he announced that erotic chats would be coming to ChatGPT.)
Add to those costs the loss of investment in more immediate technology that could change lives today and tomorrow and the next day. "To me it's a huge missed opportunity," says Lirio's Symons, "to put all these resources into solving something nebulous when we already know there's real problems that we could solve."
But that's not how the likes of OpenAI need to operate. "With people throwing so much money at these companies, they don't have to do that," Symons says. "If you've got hundreds of billions of dollars, you don't have to focus on a practical, solvable project."
Despite his steadfast belief that AGI is coming, Krueger also thinks the industry's single-minded pursuit of it means that potential solutions to real problems, such as better health care, are being ignored. "People have a long list of complaints about both the concept of AGI and the idea that it should be a goal," he says. "I think it's pretty unpopular in the field."
And there are consequences for the way governments support and regulate technology (or don't). Tina Law, who studies technology policy at the University of California, Davis, worries that policymakers are getting lobbied about the ways AI will one day kill us all, instead of addressing real concerns about the ways AI could impact people's lives in immediate and material ways today. Inequality has been sidetracked by existential risk.
"Hype is a lucrative strategy for tech firms," says Law. A big part of that hype is the idea that what's happening is inevitable: If we don't build it, someone else will. "When something is framed as inevitable," Law says, "people doubt not only whether they should resist but also whether they have the capacity to do so." Everyone gets locked in.
The AGI distortion field isn't limited to tech policy, says Milton Mueller at the Georgia Institute of Technology, who works on technology policy and regulation. The race to AGI gets compared to the race to the atomic bomb, he says. "So whoever gets it first is going to have ultimate power over everybody else. That's a crazy and dangerous idea that really will distort our approach to foreign policy."
There's a business incentive for companies (and governments) to push the myth of AGI, says Mueller, because they can then claim that they will be the first to get there. But because they're running a race in which nobody has agreed on the finish line, the myth can be spun as long as it's useful. Or as long as investors are willing to buy into it.
It's not hard to see how this plays out. It's not utopia or hell: it's OpenAI and its peers making a whole lot more money.
The great AGI conspiracy, concluded
And maybe that brings us back to the whole conspiracy thing, and a late-game twist in this tale. So far we've ignored one popular feature of conspiracy thinking: that there's a group of powerful figures pulling the levers behind the scenes and that, by seeking the truth, believers can expose this elite cabal.
Sure, the people feeling the AGI aren't publicly accusing any Illuminati or WEF-like force of preventing the AGI future or withholding its secrets.
But what if there are, in fact, shadowy puppet masters here, and they're the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else's.
As one senior executive at an AI company said to us recently, AGI always needs to be six months to a year away, because if it's any further than that, you won't be able to recruit people from Jane Street, and if it's closer to already here, then what's the point?
As Vallor puts it: "If OpenAI says they're building a machine that's going to make corporations even more powerful than they are today, that isn't going to get the kind of public buy-in that they need."
Remember: You create a god and you become like one yourself. Krueger says there's a line of thinking running through Silicon Valley in which building AI is a way to seize huge amounts of power. (It's one of the premises of Aschenbrenner's "Situational Awareness," for example.) "You know, we're going to have this godlike power and we're going to have to figure out what to do with it," says Krueger. "A lot of people think if they get there first, they can basically take over the world."
"They're putting so much effort into selling their vision of a future with AGI in it, and they're having a pretty good amount of success because they have so much power," he adds.
Goertzel, for one, is almost lamenting how successful the maybe-cabal has been. He's actually starting to miss life on the fringes. "In my generation, you had to have a lot of vision to want to work on AGI, and you had to be very stubborn," he says. "Now it's almost, like, what your grandma tells you to do to get a job instead of being a business major."
"It's disorienting that this stuff is so broadly accepted," he says. "It almost gives me the desire to go work on something else that not so many people are doing." He's half joking (I think): "Obviously, putting the finishing touches to AGI is more important than gratifying my preference to be out on the frontier."
But I'm no clearer on what exactly they're putting the finishing touches on. What does it mean for technology in general if we fall so hard for the fairy tales? In a lot of ways, I think the whole idea of AGI is built on a warped view of what we should expect technology to do, and even what intelligence is in the first place. Stripped back to its essentials, the argument for AGI rests on the premise that one technology, AI, has gotten very good, very fast, and will continue to get better. But set aside the technical objections (what if it doesn't continue to get better?) and you're left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it's not.
Intelligence doesn't come as a quantity you can just ratchet up and up. Smart people may be brilliant in one area and not in others. Some Nobel Prize winners are really bad at playing the piano or caring for their kids. Some very smart people insist that AGI is coming next year.
It's hard not to wonder what will get its hooks into us next.
Before we ended our call, Goertzel told me about an event he'd just been to in San Francisco on AI consciousness and parapsychology: "ESP, precognition, and whatnot."
"That's where AGI was 20 years ago," he said. "Everyone thinks it's batshit crazy."
Please remember that AI is real. Intelligence is real - and famously reproducible by unskilled labor. Faking it got weirdly close in a few short years. Music, essays, and code are no longer the sole domain of human beings.
Figuring out how to think is an engineering problem. At some point, computers will do it.
Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of "Feel the AGI!" at team meetings.
... okay that's a cult. Tribalists will latch onto any concept and repeat the same instinctive patterns of behavior. Look at what the Soviets did to Marxism, or what MBAs did to Agile. They don't mean things when they say words. They're just shuffling cards.
We cannot mistake those obvious delusions for the general concept. Transhumanism suffers potheads talking about meditating themselves onto a higher plane, when it's rooted in, like, prosthetic limbs. People with carbon-fiber legs have raced in the Olympics, and we gotta talk about whether that's fair, or whether it's ethical to exclude them. The far end of that discussion remains theoretical, but it's not mystical.
Nor should naysayers have carte blanche to declare how intelligence does or doesn't work. You don't know, either. We have little reason to believe we're the smartest thing possible, besides ego. But let's entertain the idea: suppose no individual mind can beat a particularly clever human. Projects that would take a thousand experts working as a team still do. So - what if that came on a chip? What if a datacenter managed a decade of study, every afternoon? A computer doesn't have to be smarter than you to be faster than you, or more than you. This basically happened to math. Calculator used to be a job. Electronics and spreadsheets have utterly outclassed any human capacity for managing figures. You're reading this on some gizmo that does a million operations a second just to sit idle. It'll do billions, when asked. You cannot compete.
This doesn't justify every nightmare from science fiction. Institutions with a thousand experts already exist, and have done decades of study at the usual pace. Even the ones led by complete bastards haven't managed to end civilization. Exxon's marketing department has come closer than any third-world nuclear program. Empowering such efforts by orders of magnitude could be really fucking bad, but Yudkowsky has lost the plot, talking about dropping nukes to stop computers from thinkin' real hard.
A smarter-than-human machine that can do it all is not a technology. It's a dream, unmoored from reality.
Aaand the author loses the plot in the other direction. Weak AI, the kind you can seriously argue does not think at all, is already causing real-world problems. Cybersecurity's been frustrated because LLMs are as smart as a script kiddie. Humans have improved LLMs just by rearranging them internally. Connecting these dots is so easy, it's a joke. Sharp rock makes sharper rock. The part where we cut the world in half is science fiction. The part where it does the thing is merely speculative.
To imagine we cannot possibly build a mind, or that it cannot possibly improve that same effort, is baffling. It changes the shape of the universe.
Agonizing over precise definitions is absurd. You know what it means for a person to be smart or stupid. A program that always, instantly, sounds as smart as you would if you had a week's preparation between questions and a team to bounce ideas off is smarter than you. An LLM is not that. Draw your line in the sand somewhere between those points.
Crossing that line would make new things possible, in a way that local solutions cannot. It's the industrial revolution absorbing thought the same way it absorbed muscles and memory. To put it lightly, that has emergent properties. Visions of megalomania thanks to mechanized farming would have been stupid, but that invention was kind of important, even though the same advancements could have just made sharper scythes.
We will no doubt have some kind of AGI at some point, if we don't extinct ourselves in the meantime. It's not going to happen anytime soon, probably not within our lifetimes. And it's not going to come from an LLM (Language ≠ Intelligence). My best bet is that one computer scientist will pioneer an actual piece of technology, in the same way neural networks were invented. The person who does it will probably be highly skilled in neuroscience and math, which I doubt anyone at OpenAI is. But again, if it ever could happen, it wouldn't be now. Our computers and algorithms have largely remained the same for the past 20 years; they've just gotten cleverer and faster. I think the idea of a computer reasoning is cool, but I haven't seen any of that from anyone. Just neat tricks.
Our computers and algorithms have largely remained the same for the past 20 years; they've just gotten cleverer and faster.
Aside from neural networks becoming pragmatic overnight, and offering entirely new forms of software that let you describe video into existence.
We've spent a trillion dollars asking two questions: "what's the next word?" and "which pixels are noise?" The result isn't AGI and won't become AGI, but uh, there are more than two questions we could ask.
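Concretely, at toy scale, those two questions look something like this (an illustrative sketch only; the corpus, the stand-in "image," and the deliberately dumb "model" are all made up for the example):

    import numpy as np
    from collections import Counter, defaultdict

    # 1. "What's the next word?" - a bigram tally, the degenerate
    #    ancestor of an LLM's next-token objective.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1            # count what followed what
    print(follows["the"].most_common(1))   # -> [('cat', 2)]

    # 2. "Which pixels are noise?" - the diffusion objective: corrupt
    #    the data, then score a guess at the noise that was added.
    clean = np.ones((8, 8))                  # stand-in "image"
    noise = np.random.default_rng(0).normal(size=clean.shape)
    noisy = clean + noise
    guess = noisy - noisy.mean()             # deliberately dumb "model"
    loss = ((guess - noise) ** 2).mean()     # what training would minimize
    print(round(loss, 4))

Scale the lookup table into a transformer and the dumb guess into a real denoiser, and that's most of the trillion dollars.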
As training gets more efficient, experimentation with neural networks as a universal approximator should explode. All the dorks doing LoRAs could build a thing from scratch. We have no goddamn idea where that might peak. The ability to turn examples of something into more of that thing, or to turn bad examples into good examples, sounds like science fiction written by a novice. But it's real. It's how this flood actually works. Diffusion genuinely removes the parts of an image that don't look like a statue.
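"Universal approximator" sounds grander than it is. Here's the whole trick from scratch, shrunk down - one hidden layer and hand-written backprop learning sin(x) from nothing but labeled examples (the sizes, seed, and learning rate are arbitrary picks, not anything canonical):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-np.pi, np.pi, size=(256, 1))  # examples of "something"
    y = np.sin(x)                                  # the thing we want more of

    H = 32                                         # hidden width
    W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)                   # forward pass
        pred = h @ W2 + b2
        err = pred - y                             # squared-error gradient
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)           # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    # compare truth vs. the network's fit on a few points
    test = np.linspace(-1.5, 1.5, 4).reshape(-1, 1)
    print(np.hstack([np.sin(test), np.tanh(test @ W1 + b1) @ W2 + b2]))

Turning examples into more of the thing is exactly this, with a few more zeroes on the parameter count.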
Half the current advancements in AI are that kind of stupid hack. Hinton himself was like "this will never work, but it'll take five minutes, why not try oh shit." We cannot rule out some fuckin' guy next year using what's existed since yesterday to derive the entire universe from a piece of fairy cake. Or at least to derive a learning algorithm from grade school quizzes.
Yeah, that's the "cleverer and faster" bit. Generative AI has been a thing for at least 15 years (I was in the AI field at the time and saw it presented at conferences). I would argue the attention structure in modelling was the new technology, I guess. But most of the breakthrough is really just the money to scale.
Anyway, pattern recognition is amazing and powerful, and we can get excited about that without getting caught up in a superintelligence cult.
While that's a fair perspective, it does seem like shrugging about microprocessors because we'd already had transistors. "Cleverer and faster" deeply undersells the qualitative difference between literal autocomplete and the computer writing your code for you. In ancient times, someone mentioned trying to learn programming by opening Notepad, writing "show a bouncing blue circle on a black screen," and saving that as demo.exe. The fact that's now feasible is a fundamental shift.
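For the record, here's the kind of program that prompt describes, written out by hand - a sketch of what today's models will happily spit back, with pygame as one plausible choice (the window size, speed, and radius are arbitrary):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    x, y, dx, dy, r = 320, 240, 4, 3, 20

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:  # close button ends the demo
                running = False
        x, y = x + dx, y + dy
        if x - r < 0 or x + r > 640:       # bounce off the side walls
            dx = -dx
        if y - r < 0 or y + r > 480:       # bounce off top and bottom
            dy = -dy
        screen.fill((0, 0, 0))             # black screen
        pygame.draw.circle(screen, (0, 0, 255), (x, y), r)  # blue circle
        pygame.display.flip()
        clock.tick(60)                     # cap at 60 fps
    pygame.quit()

The point isn't these 25 lines; it's that "describe program, receive program" went from a beginner's misconception to a working workflow.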
We can't ask the program to improve its own code because it was not coded. That too is a fundamental shift. Neural networks are anything but new, except now they, y'know... work. Decades of amusingly limited backwards image labelers suddenly gave way to real-time photorealistic CGI-for-dummies. Dramatic improvements arise from anarchic communities of horny randos. They're just. They're so horny. And instead of scouring cutting-edge whitepapers for missed opportunities, all they need are labeled examples. They're already experts thanks to booru sites, because, and I cannot stress this enough: horny.
Drastic advances have emergent properties. Consciousness itself is one such example. Neurons have surely been similar for eons, but you put enough of them in one place, and some upright meatsack has opinions about The Godfather. We happened accidentally, and we're still dumb enough we don't know how we work. I think we can do better. Eventually.
Yeah, I think if you peek behind the curtain a bit on the models, it becomes harder to ignore the rampant hyperbole. Like, really high-dimensional transformers are a cool and powerful thing that didn't exist 5 or so years ago, but they're not magic and do have limitations. I guess the next 5 years will be a good test of whether we're focused on the right things and allocating effort for the betterment of everyone when it comes to building and using models. I tend to think not, but I hope to be proven wrong (and have no power to stop the hype train anyway).
I have views on vibe coding in particular since you mention it, but hey, I do use an LLM a tonne while writing stuff myself, so I guess there's really only a fairly narrow margin separating me from the boosters.
Oh yeah, LLMs are dumb as hell. But they're enough like intelligence that we could measure their IQ. They demonstrate some forms of spatial reasoning. They can follow instructions for things they weren't trained on. Yes, they will slip into creative writing rather than say "I don't know," but even their bullshit goes deeper than grammatically-correct gibberish. They'll tell you why glue goes on pizza.
Disagree about the booster comment. We shouldn't let the self-professed haters reduce everything to absolutes. Especially when a lot of that is clearly performative ingroup loyalty. Some folks just wanna be in a club. So: slop is not "all generated content" any more than spam just means "e-mail." Boosters are those deluded or invested enough to still believe Sam Altman about anything. You and I are just dorks appreciating technology for what it demonstrably does.
These are brand new tools, and where they are useful, they should be used.
Great writeup. Not sure if "conspiracy theory" is the right word, but I for sure see the magical thinking and the power dynamics described.


