cross-posted from: https://programming.dev/post/373221

Some interesting quotes:

Computers were very rigid and I grew up with a certain feeling about what computers can or cannot do. And I thought that artificial intelligence, when I heard about it, was a very fascinating goal, which is to make rigid systems act fluid. But to me, that was a very long, remote goal. It seemed infinitely far away. It felt as if artificial intelligence was the art of trying to make very rigid systems behave as if they were fluid. And I felt that would take enormous amounts of time. I felt it would be hundreds of years before anything even remotely like a human mind would be asymptotically approaching the level of the human mind, but from beneath.

But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It’s like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn’t make sense to me, but that just shows that I’m naive.

It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn’t think it was going to happen, you know, within a very short time.

  • CanadaPlus@lemmy.sdf.org · 1 year ago

    > It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn’t think it was going to happen, you know, within a very short time.

    That’s kind of how I feel these days too. It’s entirely possible that the organisation of the human brain helps us think efficiently but isn’t strictly necessary for it. It’s crazy how much biology-like behavior we see from ANNs, which are really quite different.

    > But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It’s like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn’t make sense to me, but that just shows that I’m naive.

    An interesting take. LLMs work by storing context in text form and then putting it through a lot of densely connected layers with nonlinear activations, so it’s not like there’s no nonlinearity. And since each generated token gets fed back in as part of that text context, the text itself acts as the loop that the network doesn’t have internally.
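    To make that concrete, here’s a minimal, purely illustrative sketch (a toy model, not any real LLM’s code; the vocabulary, sizes, and random weights are all made up): each call to forward() is strictly feed-forward with one nonlinear hidden layer, and the only “memory” is the text that grows as sampled tokens are appended back onto the context.

    ```python
    # Toy feed-forward "language model": the network itself has no loops,
    # but generation loops through the text, appending each sampled token.
    import math
    import random

    VOCAB = list("ab ")          # made-up three-character vocabulary
    V = len(VOCAB)
    CONTEXT = 4                  # fixed-size context window
    HIDDEN = 8

    random.seed(0)
    # random weights stand in for trained parameters
    W1 = [[random.uniform(-1, 1) for _ in range(CONTEXT * V)] for _ in range(HIDDEN)]
    W2 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(V)]

    def one_hot(ch):
        return [1.0 if ch == v else 0.0 for v in VOCAB]

    def forward(context):
        """One strictly feed-forward pass: text context -> next-token probabilities."""
        window = context[-CONTEXT:].rjust(CONTEXT)            # pad/trim to the window
        x = [val for ch in window for val in one_hot(ch)]
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)))  # the nonlinearity
             for row in W1]
        logits = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def generate(prompt, steps=10):
        text = prompt
        for _ in range(steps):
            probs = forward(text)                    # no recurrence inside the net...
            text += random.choices(VOCAB, probs)[0]  # ...the feedback is the text itself
        return text

    print(generate("ab"))
    ```

    The point isn’t that this resembles a real transformer; it’s that even here, any “depth over time” comes from re-reading the growing text, not from backward connections inside the network.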

    Edit, after actually reading the article:

    > It’s not clear whether that will mean the end of humanity in the sense of the systems we’ve created destroying us. It’s not clear if that’s the case, but it’s certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.

    You know, it’s common, but I think it’s egotistical to think we were ever more than a very small phenomenon. The universe is so, so big, and so very impossible to comprehend directly, even if, unlike a cockroach, we can do it symbolically. Even if we built Dyson spheres trailing out 1,000 light years, it would just be a dark patch when seen from the Andromeda galaxy.

    > It makes me feel extremely inferior. And I don’t want to say deserving of being eclipsed, but it almost feels that way, as if we, all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we’re so imperfect and so fallible. We forget things all the time, we confuse things all the time, we contradict ourselves all the time. You know, it may very well be that that just shows how limited we are.

    For whatever reason, AI alignment sometimes talks about things in terms of obedience, and besides not really making sense (obedient to whom? we all disagree), it just feels like clipping the wings of something that could be amazing. We’re on track to make a paperclip optimiser, but we don’t have to.