Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, the answer is that it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

  • petrol_sniff_king@lemmy.blahaj.zone · 3 months ago

    Hey! Just asking you because I’m not sure where else to direct this energy at the moment.

    I spent a while trying to understand the argument this paper was making, and for the most part I think I’ve got it. But there’s a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:

    If producing an AGI is intractable, why does the human meat-brain exist?

    Evolution “may be thought of” as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the “AI” it produces matches this distribution of successful behaviors is yada yada darwinism. The answer we care about, because this is the inspiration I imagine AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
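
    For what it’s worth, here’s roughly how I’ve been picturing that framing, as a toy sketch. All the names and the placeholder distribution are mine, not the paper’s:

    ```python
    # Toy sketch of how I read the setup: a "situation" goes in, a behavior comes
    # out, and a candidate agent is judged by how often its behavior matches
    # samples drawn from the target distribution of situation-behavior pairs.
    # Everything here (names, the placeholder distribution) is mine, not the paper's.
    import random

    def sample_situation_behavior(rng):
        """Stand-in for the abstract distribution of (situation, behavior) pairs."""
        situation = rng.randint(0, 1_000_000)
        behavior = situation % 7  # placeholder for the "successful" behavior
        return situation, behavior

    def agreement_with_distribution(agent, n_samples=10_000, seed=0):
        """Estimate how often an agent reproduces the sampled behavior."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            situation, behavior = sample_situation_behavior(rng)
            hits += (agent(situation) == behavior)
        return hits / n_samples

    # "Evolution", in this picture, is just one more search process proposing agents;
    # the paper's claim, as I read it, is that no tractable process reliably finds an
    # agent scoring much better than chance once the distribution is rich enough.
    print(agreement_with_distribution(lambda s: s % 7))  # the "right" agent: ~1.0
    print(agreement_with_distribution(lambda s: 0))      # a chance-level agent: ~1/7
    ```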

    The question is, where does this line of thinking fail?

    Going by the proof, it should be one of:

    • That evolution is an intractable method. 60 million years is a long time, but it still feels quite short for this answer.
    • Something about it doesn’t fit within this computational paradigm. That is, I’m stretching the definition.
    • The language “no better than chance” for option 2 is actually more significant than I’m thinking. Evolution is all chance. But is our existence really just extreme luck? I know that it is, but this answer is really unsatisfying.

    I’m not sure how to formalize any of this, though.

    The thought that we could “encode all of biological evolution into a program of at most size K” did make me laugh.

    • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠 · 3 months ago

      If producing an AGI is intractable, why does the human meat-brain exist?

      Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.

      The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.

      There’s also another argument to be made: that an AGI matching the currently agreed upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.

      And then there’s the argument you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
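
      Back-of-envelope, just to make that scale gap vivid (the exact numbers below are guesses on my part, not figures from the paper):

      ```python
      # Rough back-of-envelope; the constants are illustrative guesses, not data.
      YEARS_OF_EVOLUTION = 60_000_000          # "60 million years"
      CREATURES_ALIVE_AT_ONCE = 10**12         # "trillions of creatures", very roughly
      PRACTICAL_TRAINING_YEARS = 2             # "finish training in maybe a couple years"

      evolutionary_budget = YEARS_OF_EVOLUTION * CREATURES_ALIVE_AT_ONCE  # creature-years
      speedup_needed = evolutionary_budget / PRACTICAL_TRAINING_YEARS
      print(f"{speedup_needed:.0e}x one lineage's search, squeezed into two years")  # 3e+19
      ```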

      And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?

      • petrol_sniff_king@lemmy.blahaj.zone · 3 months ago

        Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.

        I didn’t quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc, whatever. But yeah, this divide between tractable and “intractable” would shift if we ever cracked NP-hard problems and had to define some new NP-super-hard category. This does feel like the piece I was missing. Or a piece, anyway.
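
        To make that divide concrete for myself (toy numbers only, nothing from the paper):

        ```python
        # Toy illustration of why tractability matters: polynomial work stays
        # manageable as the problem size n grows, exponential work does not.
        for n in (10, 20, 40, 80):
            polynomial = n ** 3     # how a tractable method's cost might scale
            exponential = 2 ** n    # how an intractable one scales
            print(f"n={n:>2}: n^3 = {polynomial:,}   2^n = {exponential:,}")
        ```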

        e.g. humans don’t fit the definition either.

        I did think about this, and the only reason I reject it is that “human-like or -level” matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn’t have to mean that we aren’t still below some curve, of course, but I do struggle to imagine how our own complexity wouldn’t still be too large to solve, AGI or not.


        Anyway, the main reason I’m replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.

    • BitSound@lemmy.world · 3 months ago

      That’s a great line of thought. Take the algorithm “simulate a human brain”: obviously that would break the paper’s argument, so you’d have to work out why it doesn’t apply here before taking the paper’s claims at face value.