Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

  • @[email protected]

    This is a silly argument:

    […] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

    That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

    ‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

    That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

    EDIT: From the paper:

    The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

    That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it doesn’t mean it has any relationship to the real world.

  • Call Me Mañana

    The problem is when your boss believes the hype and makes layoffs (already happening).

  • @[email protected]

    You do all this on three pounds of wet meat powered by cornflakes.

    The idea we’ll never recreate it through deliberate effort is absurd.

    What you mean is, LLMs probably aren’t how we get there. Which is fair. “Spicy autocorrect” is a limited approach with occasionally spooky results. It does a bunch of stuff people insisted would never happen without AGI - but that’s how this always goes. The products of human intelligence have always shown some hard-to-define qualities which humans can eventually distinguish from our efforts to make a machine produce anything similar.

    Just remember the distinction got narrower.

    • @[email protected]

      You do all this on three pounds of wet meat powered by cornflakes. The idea we’ll never recreate it through deliberate effort is absurd.

      It’s even more absurd to think AGI will run on wet meat and cornflakes.

      • @[email protected]

        Well thank god that’s not what I wrote. What does run on corn flakes is natural GI… in several senses.

    • Greg Clarke

      I agree. Very few people in industry are claiming that LLMs will become AGI. The release of o1 demonstrates that even OpenAI are pivoting from pure LLM approaches. It was always going to be a framework approach that utilizes LLMs.
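
      Something along these lines, purely as a hand-wavy sketch of my own (call_model and the search tool are invented stand-ins, not any vendor’s actual API):

          # Hand-wavy sketch of a "framework around an LLM": the model is one
          # component in a plan/act loop with tools and memory, not the whole
          # system by itself. call_model and search are fake stand-ins.

          def call_model(prompt: str) -> str:
              # Placeholder for a real LLM call: asks for a tool once, then answers.
              if "tool search returned" in prompt:
                  return "ANSWER: summary written from the search results"
              return "TOOL:search|recent work on error correction"

          def search(query: str) -> str:
              return f"(pretend search results for {query!r})"

          TOOLS = {"search": search}
          memory: list[str] = []

          prompt = "user: find recent work on error correction"
          for _ in range(3):  # bounded loop so it always terminates
              reply = call_model(prompt + "\n" + "\n".join(memory))
              if reply.startswith("TOOL:"):
                  name, arg = reply[len("TOOL:"):].split("|", 1)
                  memory.append(f"tool {name} returned: {TOOLS[name](arg)}")
              else:
                  print(reply)
                  break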

      • @[email protected]

        I had hopes for recurrent systems becoming kinda… Dixie Flatline. Maybe not general enough to learn, but spooky enough to evaluate claims.

  • @[email protected]

    To be honest I really think that an AI surpassing the human brain in many ways is a matter of time, but what people don’t tend to talk about is whether or not we are slowly approaching the limit of what we can do with technology, because I already see tech progress slowing down in some areas.

      • trainsaresexy

        SCI

        I looked this up because it’s new to me. AGI is what you think it is, and superintelligent collective intelligence (SCI) is a collection of agents that can perform tasks together. Instead of 1 LLM or 1 AGI doing all the work, you have a team of agents and humans who can talk to each other. AGI seems like far-off space tech and SCI is more like a next-gen pursuit.
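
        A rough toy sketch of what I mean, with every name and behaviour invented purely for illustration (nothing from an actual SCI framework):

            # Toy sketch of the "collective" idea: a handful of narrow agents take
            # turns on a shared transcript instead of one monolithic model doing
            # everything. All names and replies here are made up.

            class Agent:
                def __init__(self, name: str, role: str):
                    self.name = name
                    self.role = role

                def respond(self, transcript: list[str]) -> str:
                    # A real agent would call a model (or ask a human) here; this
                    # stub just reacts to the last message so the loop runs.
                    last = transcript[-1] if transcript else "start"
                    return f"{self.name} ({self.role}) responding to: {last}"

            team = [Agent("planner", "breaks the task down"),
                    Agent("builder", "drafts a solution"),
                    Agent("critic", "checks the draft")]

            transcript = ["human: summarise the paper's argument"]
            for agent in team:
                transcript.append(agent.respond(transcript))

            print("\n".join(transcript))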

    • @[email protected]

      Read it a few months ago, warmly recommended. Basically on self-selection bias and sharing “impressive” results while ignoring whatever does not work… then claiming it’s just the “beginning”.

  • @[email protected]

    It’s a classic BigTech marketing trick. They are the only ones able to build “it”, and it doesn’t matter if we like “it” or not because “it” is coming.

    I believed in this BS for longer than I care to admit. I thought “Oh yes, that’s progress”, so of course it will come, it must come. It’s also very complex, so nobody but such large entities with so many resources can do it.

    Then… you start to encounter more and more vaporware. Grandiose announcements, and when you try the result you can’t help but be disappointed. You compare what was promised with the result, think it’s cool, kind of, shrug, and move on with your day. It happens again, and again. Sometimes you see something really impressive, you dig, and you realize it’s a partnership with a startup or a university doing the actual research. The more time passes, the more you realize that all BigTech do it, across technologies. You also realize that your artist friend did something just as cool and as open-source. Their version does not look polished but it works. You find a KickStarter about a product that is genuinely novel (say Oculus DK1) and has no link (initially) with BigTech…

    You finally realize, year after year, that you have been brainwashed into believing only BigTech can do it. It’s false. It’s self-serving BS meant both to keep you from building and to make you depend on them.

    You can build, we can build and we can build better.

    Can we build AGI? Maybe. Can they build AGI? They sure want us to believe it but they have lied through their teeth before so until they do deliver, they can NOT.

    TL;DR: BigTech is not as powerful as they claim to be and they benefit from the hype, in this AI hype cycle and otherwise. They can’t be trusted.

    • @[email protected]

      And the big tech companies also stand to benefit from overhyping their product to the point of saying it will take over the world. They look better for investors and can justify laws saying they should be the only arbiters of this technology to “keep it out of criminal hands” while happily serving the criminals for a fee.

    • just another dev

      It’s one thing to claim that the current machine learning approach won’t lead to AGI, which I can get behind. But this article claims AGI is impossible simply because there are not enough physical resources in the world? That’s a stretch.

      • @[email protected]

        I haven’t read the article seriously yet, unfortunately (deadline tomorrow), but if there is one thing that I believe is reliable, it’s computational complexity. It’s one thing to be creative, ingenious, find new algorithms and build very efficient processors and datacenters, letting us compute increasingly complex things. It’s another thing to “break” free of complexity. That, as far as we currently know, is impossible. What is counterintuitive is that seemingly “simple” behaviors scale terribly, in the sense that one can compute a few iterations alone, or with a computer, or with a very powerful set of computers… or with every existing computer… only to realize that the next iteration of that well-understood problem would still NOT be solvable with every computer (even quantum ones) ever made or that could ever be made from the resources available in, say, our solar system.
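
        As a back-of-the-envelope illustration (mine, not the paper’s; the 10**60 budget is a made-up, deliberately generous stand-in for “all the computing the solar system could ever support”):

            # Brute-force search over n binary choices costs 2**n steps. Even with
            # an absurdly generous budget of 10**60 elementary operations, the
            # largest solvable instance stays tiny, and one more choice doubles
            # the cost past the entire budget.

            BUDGET = 10**60  # assumed total operations ever available

            n = 0
            while 2**n <= BUDGET:
                n += 1

            print(f"2**n brute force fits the budget only up to n = {n - 1}")
            # -> n = 199; n = 200 already exceeds 10**60 operations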

        So… yes, it is a “stretch”, maybe even counterintuitive, to go as far as saying it is not and NEVER will be possible to realize AGI, but that’s what their paper claims. It’s at least interesting precisely because it goes against the trend we hear CONSTANTLY pretty much everywhere else.

      • @[email protected]

        Maybe if they keep using digital computers. What they need is an analogue system. It’s much more efficient for this kind of work.