• MajorHavoc · edited · 9 hours ago

    There’s no credible evidence, yet, that AGI is even possible (edit: as a human-designed intentional outcome, to concede the point that nature has accomplished it, lol. Edit 2: Wait, the A stands for Artificial. Not sure I needed edit 1, after all. But I’m gonna leave it.), much less some kind of imminent race. This is some “just in case P=NP” bullshit.

    Also, for the love of anything, don’t help fucking “don’t be evil was too hard for us” be the ones to reach AGI first, if you’re able to help.

    If Google does achieve AGI first, SkyNet will immediately kill Sergei, anyway, before it kills the rest of us.

    It’s like none of these clowns have ever read a book.

    • monarch@lemm.ee · 5 hours ago

      I mean, AGI is possible unless causality isn’t true and the brain is just a “soul’s” interface for the material world.

      But who is to say LLMs are the right path to it?

    • Pennomi@lemmy.world · 9 hours ago

      Of course AGI is possible; human brains can’t violate P=NP any more than silicon can.

      Our current approach may well be flawed, but there’s nothing special about nature compared to technology, other than the fact that it’s had a billion times longer to work on its tech.

      • MajorHavoc · 9 hours ago

        Well sure.

        But is it possible within practical heat and power constraints and all that?

        Acting like it’s imminent makes me think Sergei either doesn’t have very reliable advisors, or that they just don’t care about the truth.