• Perspectivist@feddit.uk · 15 days ago

    I’ve been worried about this since around 2016 - long before I’d ever heard of LLMs or Sam Altman. The way I see it, intelligence is just information processing done in a certain way. We already have narrowly intelligent AI systems performing tasks we used to consider uniquely human - playing chess, driving cars, generating natural-sounding language. What we don’t yet have is a system that can do all of those things.

    And the thing is, the system I’m worried about wouldn’t even need to be vastly more intelligent than us. A “human-level” AGI would already process information so much faster than we can that it would effectively be superintelligent. At the very least, even someone who doubts that such a system is feasible should be able to see how dangerous it would be if we actually did stumble upon it - however unlikely that might seem. That’s what I’m worried about.
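
    (To put a rough, purely illustrative number on that speed claim - these are order-of-magnitude assumptions, not measurements: biological neurons fire at roughly 10^2 Hz, while silicon switches at roughly 10^9 Hz. If a human-level mind ran on the faster substrate with the same algorithmic efficiency, the speedup would be about 10^9 / 10^2 = 10^7, which would compress a subjective year of thinking - a year is about 3×10^7 seconds - into roughly three seconds.)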

    • silasmariner · 15 days ago

      Yeah, see, I don’t agree with that base premise - that it’s as simple as information processing. I think sentience - and, therefore, intelligence - is a more holistic process, one that requires many more tightly coupled external feedback loops and an embedding of the processes in a way that makes the processing analogous to the world being modelled. But who can say, eh?

      • Perspectivist@feddit.uk · 15 days ago

        It’s not obvious to me that sentience has to come along for the ride. It’s perfectly conceivable that there’s nothing it’s like to be a superintelligent AGI system. What I’ve been talking about this whole time is intelligence — not sentience, or what I’d call consciousness.

    • silasmariner · 14 days ago

      I would like to have responded inline, but my app appears to have barfed on the nesting depth, and I can only see the next two responses here on my profile. The other premise I hold that you appear to disagree with is that ‘sentience’ or ‘consciousness’ is required for intelligence. Very Peter Watts to separate the two (and I don’t mean that as a dig; he’s a great read), but frankly I don’t buy it – I think the implication is systems that are inherently unstable and non-durable. Anyway, been nice chatting. I hope you have a lovely week.