• jeremyparker · 9 points · 1 year ago

    This whole “OpenAI has Artificial General Intelligence but they’re keeping it secret!” thing is like saying Microsoft had ChatGPT 20 years ago with Clippy.

    Humans don’t even know what intelligence is - it’s the thing we invented to try to measure who’s got the best brains, and we literally don’t even have a scientific definition of the word, much less the ability to test it - so we definitely can’t program it. We are a veeeeerry long way from even understanding how thoughts and memories work; and the thing we’re calling “general intelligence”? We have no fucking idea what that even means; there’s no way a bunch of computer scientists can feed enough Internet to an ML algorithm to “invent” it. (No shade, those peepos are smart - but understanding wtf intelligence is isn’t going to come from them.)

    One caveat tho: while I don’t think we’re close to AGI, I do think we’re very close to being able to fake it. Going from ChatGPT to something we can pretend is actual AI is really just a matter of whether we, as humans, are willing to believe it.

    • Kogasa · 2 points · edited · 1 year ago

      You don’t have to crack the philosophical nature of intelligence to create intelligence (assuming “creating intelligence” is a thing, I guess). The inner workings of even the simplest current models are incomprehensible, but the process of creating them is not. Presupposing that there is a difference between “faking” intelligence and “true” intelligence, I think you’re right - but I dunno if that presupposition holds.
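
      (To make that concrete: a minimal sketch of “the process of creating them” - a toy NumPy net on made-up data; every name and number below is hypothetical, not any real model. The training procedure is a few mechanical lines, even though what the trained weights end up computing is opaque.)

        import numpy as np

        # Made-up data: 256 samples, 8 features, a toy nonlinear label
        rng = np.random.default_rng(0)
        X = rng.normal(size=(256, 8))
        y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]

        # A tiny one-hidden-layer network, randomly initialized
        W1 = rng.normal(size=(8, 16)) * 0.5
        W2 = rng.normal(size=(16, 1)) * 0.5
        lr = 0.05

        for _ in range(2000):
            h = np.tanh(X @ W1)                  # hidden activations
            p = 1 / (1 + np.exp(-(h @ W2)))      # predicted probabilities
            g_out = (p - y) / len(y)             # logistic-loss gradient
            g_W2 = h.T @ g_out                   # backprop, step by step
            g_h = (g_out @ W2.T) * (1 - h ** 2)
            g_W1 = X.T @ g_h
            W2 -= lr * g_W2                      # plain gradient descent
            W1 -= lr * g_W1

        # The loop above is fully comprehensible; what the trained W1/W2
        # "mean" after training is the part nobody can read off directly.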

      • jeremyparker · 1 point · 1 year ago

        You don’t have to crack it to make it, but you do have to crack it to determine whether you’ve made it. That’s kinda the trick of the early AI hype - notably that NYT article where the author fed ChatGPT some simple sci-fi, AI-coming-to-life prompts and it generated replies based on its training data - or, if you believe the NYT author, it came to life.

        I think what you’re saying is a kind of “can’t define it but I know it when I see it” idea, and that’s valid, for sure. I think you’re right that we don’t need to understand it to make it - I guess what I was trying to say was, if it’s so complex that we can’t understand it in ourselves, I doubt we’re going to be able to develop the complexity required to make it.

        And I don’t think that the inability to know what has happened in an AI training algorithm is evidence that we can create a sentient being.

        That said, our understanding of consciousness is so nascent that we might just be so wrong about it that we’re looking in the wrong place, or for the wrong thing.

        We may understand it so badly that the truth is the opposite of what I’m saying: people have said (“people have said” is a super red flag, but I mean spiritualists and crackpots, my favorite being the person who wrote The Secret Life of Plants) that consciousness is all around us, that all organized matter has consciousness. Trees, for example - but not just trees, also the parts of a tree: a branch, a leaf. A whole tree may have a separate consciousness from its leaves - or, and this is what always blows my mind, every cell in the tree except one. And every cell in the tree except two, and then every cell in the tree except a different two. And so on. With no way to communicate with them, how would a tree be aware of the consciousness of its leaves?

        How could we possibly know if our liver is conscious? Or our countertop, or the grass in the park nearby?

        While that’s obviously just thought experiment bullshit, my point is, we don’t know fucking anything. So maybe we created it already. Maybe we will create it but we will never be able to know whether we’ve created it.

    • 31337@sh.itjust.works · 1 point · 1 year ago

      AIXI is a (good, in my opinion) short mathematical definition of intelligence. Intelligence != consciousness or anything like that though.
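
      (For reference: that’s Marcus Hutter’s AIXI. Roughly, the agent picks whichever action maximizes expected future reward, summed over all computable environments, with each environment weighted by its simplicity. A sketch of the action-selection rule in LaTeX - written from memory, so treat the details as approximate:)

        a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_t + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

      where U is a universal Turing machine, q ranges over environment programs, and \ell(q) is the length of q; the 2^{-\ell(q)} factor is the simplicity (Solomonoff-style) prior.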

      Also, how do you know we aren’t faking consciousness? I sometimes wonder if things like “free will” and consciousness are just illusions and tricks our brains play on us.

      • kronisk @lemmy.world · 2 points · 1 year ago

        Well, if you experience consciousness, that’s what consciousness is. As in, the word and concept “consciousness” means being conscious, the way you experience being conscious right now (unless of course you’re unconscious as I write this…). Free will does not enter into it at the basic level; nothing says you’re not conscious if you do not have free will. So what would it really mean to say consciousness is an illusion? Who, and what, is having the illusion? Ironically, your statement assumes the existence of a higher form of consciousness that is not illusory (which may very well exist, but how would we ever know?), simply because a fake something presupposes a real something that the fake thing is not.

        So let’s say we could be certain that consciousness is purely the product of material processes in the brain. You would still experience consciousness; that does not make it illusory. Perhaps this seems like I’m arguing semantics, but the important takeaway is rather that these kinds of arguments invariably fall apart under scrutiny. Consciousness is actually the only thing we can be absolutely certain exists; in this, Descartes was right.

        So, it’s meaningful to say that a language model could “fake” consciousness - trick us into believing it is an “experiencing entity” (or whatever your definition would be) by giving convincing answers in a conversation - but it’s not really meaningful to say that actual conscious beings somehow fake consciousness, or that “their brains” (somehow suddenly acting apart from the entity) trick them.

        • 31337@sh.itjust.works · 2 points · 1 year ago

          Hmm, I guess you’re right. What I was vaguely thinking of is that we don’t have as much (conscious) control over ourselves as people seem to believe. E.g., we often react to things before we consciously perceive them, if we ever do perceive them. I was probably thinking of experiments I’ve heard of involving Benjamin Libet’s work, and of my own experience of questioning why I’ve made certain decisions: at the time, I rationalized the reason for a decision one way, but in retrospect the real reason was probably different from what I was consciously aware of. I think a lot of consciousness is just post-hoc rationalization, while the subconscious does a lot of the work. I guess this still means that consciousness is not an illusion, but that there are different “levels” of consciousness, and the highest level is mostly retrospective. I guess this all isn’t really relevant to AI though, lol.

      • jeremyparker · 1 point · 11 months ago

        I like this take - I read the refutation in the replies and I get that point, but consciousness as an illusion to rationalize stimulus response makes a lot of sense - especially because the reach of consciousness’s control is much more limited than it thinks it is. Literally copium.

        When I was a teenager I read an Appleseed manga, and it mentioned a tenet of Buddhism that I’ll never forget - though I’ve forgotten the name of the idea (and I’ve never heard anyone mention it in any other context, and while I’m not a Buddhist scholar, I have read a decent amount of Buddhist stuff).

        There’s some concept in Japanese Buddhism that says that, while reality may be an illusion, the fact that we can agree on it means that we can at least call it “real.”

        (Aka Japanese Buddhist describes copium)