Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new evidence that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

  • @[email protected]
    117 hours ago

    You do all this on three pounds of wet meat powered by cornflakes.

    The idea we’ll never recreate it through deliberate effort is absurd.

    What you mean is, LLMs probably aren’t how we get there. Which is fair. “Spicy autocorrect” is a limited approach with occasionally spooky results. It does a bunch of stuff people insisted would never happen without AGI, but that’s how this always goes. The products of human intelligence have always shown hard-to-define qualities that humans can, eventually, distinguish from our attempts to make a machine produce anything similar.

    Just remember the distinction got narrower.

    • @[email protected]
      4 hours ago

      You do all this on three pounds of wet meat powered by cornflakes. The idea we’ll never recreate it through deliberate effort is absurd.

      It’s even more absurd to think AGI will run on wet meat and cornflakes.

      • @[email protected]
        23 hours ago

        Well, thank god that’s not what I wrote. What does run on cornflakes is natural GI… in several senses.

    • Greg Clarke
      26 hours ago

      I agree. Very few people in industry are claiming that LLMs will become AGI. The release of o1 demonstrates that even OpenAI is pivoting away from pure LLM approaches. It was always going to be a framework approach built around LLMs.
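      To make "framework approach" concrete, here is a minimal sketch of a scaffold that treats an LLM as one component in a loop rather than the whole system. All names (`fake_llm`, `run_tool`, `agent`) are hypothetical, and the model call is stubbed out; it is an illustration of the pattern, not any vendor's actual implementation.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call. Returns either a tool invocation
    # or "DONE" once it sees a tool result in the context.
    if "result:" in prompt:
        return "DONE"
    if "add" in prompt:
        return "TOOL:add 2 3"
    return "DONE"

def run_tool(command: str) -> str:
    # Tiny tool dispatcher: only an 'add' tool exists in this sketch.
    name, *args = command.split()
    if name == "TOOL:add":
        return str(sum(int(a) for a in args))
    raise ValueError(f"unknown tool: {name}")

def agent(task: str, max_steps: int = 3) -> str:
    # The framework part: the model proposes actions, the scaffold
    # executes them and feeds results back until the model stops.
    context = task
    result = ""
    for _ in range(max_steps):
        action = fake_llm(context)
        if action == "DONE":
            break
        result = run_tool(action)
        context += f"\nresult: {result}"
    return result

print(agent("add two numbers"))  # → 5
```

      The point of the sketch is that planning, tool execution, and the stopping condition live in ordinary code; the language model only fills in the "propose the next action" step.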

      • @[email protected]
        12 hours ago

        I had hopes for recurrent systems becoming kinda… Dixie Flatline. Maybe not general enough to learn, but spooky enough to evaluate claims.