• jadero

    There are a few things I’ve taken from that article on first reading:

    1. I was substantially correct in my understanding of how multidimensional matrices and neural networks are used. While unsurprising given the amount of reading I’ve done over the last several decades on various approaches to AI, it’s still gratifying to feel that I actually learned something from all that reading.
    2. I saw nothing in there to argue against my thesis that things like ChatGPT may be doing for intelligence what evolutionary biology has done to creationism. In the case of evolution, it has forced creationists to fall back on a “God of the Gaps” whose gaps grow ever smaller. ChatGPT et al. have me thinking that any attribution of mind or intelligence to “mystery,” the supernatural, or whatever hand-waving is in vogue will likewise be consigned to ever smaller gaps. That is, it is incorrect to claim that intelligence, human or otherwise, is now and will forever remain unexplainable.
    3. The fact that we cannot easily work out exactly how a particular input was transformed into a particular output strikes me as a “fake problem.” That is, given the scale of operations, this difficulty of following a single throughline is no different from many other processes we have developed. Who can say which molecules go where in an oil refinery? We have only a process that is shown to be useful in the lab and then scaled beyond comprehension in industry. Except that it’s not actually beyond comprehension, because everything we need to know is described by the process, validated at small scales, and producing statistically similar useful results at large scales. Asking questions about individual molecules is asking the wrong questions. So it is with LLMs and transformers: the “how it works” is in being able to describe and validate the process, not in being able to track and understand individual changes between input and output at scale.
    4. Although not explicitly addressed, the “hallucinatory” results we occasionally see may have more in common with the ordinary cognitive failures we are all subject to than with anything that can be labelled as broken. Each of us has in our background something that got misclassified in ways that, when combined with the way we process information, lead to wild conclusions. That is why we have learned to compare and contrast our results with the results of others and have even formalized that activity in science. So it may be necessary to apply that activity (compare and contrast) to other systems, including the ones built into our brains.

    Anyway, some pseudorandom babbling that I hope is at least as useful as a hallucinating AI.

    • Sigmatics@lemmy.ca

      I disagree that point 3 is a fake problem.

      Unlike the industrial processes you compared it to, we cannot predict the output of an LLM with any kind of certainty. This can and will be problematic, as our economy is built around predictable processes.

      • jadero

        That is true, but perhaps it’s the wrong standard to apply in this case. Humans are not predictable, nor are the weather, the actual outcomes of policy decisions, or any number of other things that are critical to a functioning society. We cope with most of these by creating systems that are somewhat resilient, that take the lack of perfection into account, and by making adjustments over time to tweak the results.

        I think perhaps a better analogy than the oil refinery might be economic or social policy. We always have to be fiddling with inputs and processes to get the results we desire. We never have perfectly predictable outcomes, yet somehow mostly manage to get things approximately correct. And that’s setting aside the issue that we can’t seem to agree on what “correct” even is, although we do seem to be in general agreement that 1920 was better than 1820 and that 2020 was better than 1920.

        If we want AI to be the backbone of industry, then the current state of the art probably isn’t suitable and the LLM/transformer systems may never be. But if we want other ways to browse a problem space for potential solutions, then maybe they fit the bill.

        I don’t know and I suspect we’re still a decade away from really being able to tell whether these things are net positive or not. Just one more thing that we have difficulty predicting, so we have to be sure to hedge our bets.

        (And I apologize if it seems I’ve just moved the goalposts. I probably did, but I’m not sure that I, or anyone else, knows enough at this point to lock them in place.)

        • Juno@beehaw.org

          Maybe think about it in terms of a simple video game that’s complex enough to involve floating-point math. The significand would be like the skeleton of the sentence, with the article words (the, a, an) and the sentence structure as a base.

          There’s a good Pac-Man analogy in there somewhere…

          • jadero

            I don’t really follow you. I’m not able to make the leap from the methods of floating point math to construction of sentences. There is a sense in which I understand what you’ve written and another sense in which I feel like there was one more step on the staircase than I realized :)

            • Juno@beehaw.org

              It’s like a blank space that needs to be filled.

              The static point would be the sentence “There’s a ____ in the house.” And from there it’s like a coin-sorting machine: filter, filter, filter, okay, a noun; filter, filter, filter, cat; the user doesn’t want a cat; filter, filter, filter, dog.

              Where the filtering is done against other similar static points, or it’s looking for other sentences arranged like that, with those words, in that context.

              That’s how it mistakes cat for dog. It’s not thinking, “I know what a cat is, and dogs are like that.” It’s just looking at word-usage frequency in that specific context (or similar ones) and filling the blank with a frequently used word. That’s how you end up getting a wrong answer: “What’s more like a cat, a dog or a kitten?” Reply: “Dog.”

              Or if it screws up some math, it’s because it isn’t actually doing any math; instead it’s looking at answer frequency, and enough people wrote 2+2=5.
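              If it helps to see that caricature spelled out, here’s a toy sketch in Python of the “pick whatever word shows up most often in that slot” idea. The five-sentence corpus is made up for illustration, and real models learn a probability distribution over tokens with a trained network rather than counting literal matches, so treat this only as a picture of the frequency intuition:

              ```python
              # Toy sketch of the "filter by frequency" picture described above.
              # Real LLMs learn a probability distribution over tokens; literal
              # counting over a made-up corpus is only a caricature of that.
              from collections import Counter

              corpus = [
                  "there's a dog in the house",
                  "there's a dog in the house",
                  "there's a cat in the house",
                  "there's a mouse in the house",
                  "there's a dog in the yard",
              ]

              def fill_blank(prefix, suffix, sentences):
                  """Pick the word most often seen between prefix and suffix."""
                  counts = Counter()
                  for s in sentences:
                      if s.startswith(prefix) and s.endswith(suffix):
                          middle = s[len(prefix):len(s) - len(suffix)].strip()
                          if middle:
                              counts[middle] += 1
                  # The most frequent filler wins, right answer or not.
                  return counts.most_common(1)[0][0] if counts else "<no match>"

              print(fill_blank("there's a", "in the house", corpus))  # -> dog
              ```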

              • jadero

                Okay, now I get it. That is pretty close to how I imagine it, too. That is part of why I think these LLMs may give insight into cognition more generally.

                I had never thought of that while reading books and articles that describe and investigate the errors we make, especially when there is some kind of brain damage. But I feel like I’ve seen all these errors described in humans by Oliver Sacks et al.

                • Juno@beehaw.org

                  I’m interested in this primarily as an English teacher. I need to be able to spot the linguistic tics and errors and recognize where it likely came from.

                  Right now, the best we have is like the opening scenes from Blade Runner.

                  Holden: One-one-eight-seven at Unterwasser.
                  Leon: That’s the hotel.
                  Holden: What?
                  Leon: Where I live.
                  Holden: Nice place?
                  Leon: Yeah, sure I guess-- that part of the test?
                  Holden: No, just warming you up, that’s all.
                  Leon: Oh. It’s not fancy or anything.
                  Holden: You’re in a desert, walking along in the sand when all of the sudden-
                  Leon: Is this the test now?
                  Holden: Yes. You’re in a desert walking along in the sand when all of the sudden you look down-
                  Leon: What one?
                  Holden: What?
                  Leon: What desert?
                  Holden: It doesn’t make any difference what desert, it’s completely hypothetical.
                  Leon: But how come I’d be there?
                  Holden: Maybe you’re fed up, maybe you want to be by yourself, who knows? You look down and you see a tortoise, Leon, it’s crawling towards you-
                  Leon: Tortoise, what’s that?
                  Holden: Know what a turtle is?
                  Leon: Of course.
                  Holden: Same thing.
                  Leon: I’ve never seen a turtle – But I understand what you mean.
                  Holden: You reach down, you flip the tortoise over on its back, Leon.
                  Leon: Do you make up these questions, Mr. Holden, or do they write them down for you?
                  Holden: The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help, but you’re not helping.
                  Leon: What do you mean I’m not helping?
                  Holden: I mean, you’re not helping. Why is that, Leon? – They’re just questions, Leon. In answer to your query, they’re written down for me. It’s a test, designed to provoke an emotional response. – Shall we continue?

                  Except I can’t ask the paper on Maya Angelou any questions. Short of interrogating each student when they turn something in, it’s been a real struggle in the last few months to spot work that was not actually done by my students but was instead written by ChatGPT.

                  How to proceed now that they all interact with TikTok’s chatbot, when it’s not just the tech-savvy kids who will try this, I don’t know.

                  But my first super fake was a well-written paper about the personal growth of a girl named Fredericka, who described feeling triumphant having just got her master’s degree and overcoming adversity since she grew up as a young black boy in the South. “Hmmmm,” I thought. “Something tells me you didn’t write this.”

                  • jadero

                    I’m interested in this primarily as an English teacher. I need to be able to spot the linguistic tics and errors and recognize where it likely came from.

                    That might well turn out to be the Red Queen’s Race. It’s only a guess, but I suspect that competing models, the advances resulting from competition, and the advances and experimentation associated with catching and correcting mistakes will mean that you’ll generally be playing catch-up.

                    Frankly, I don’t even have anything more useful to offer than the unrealistic suggestion that all such work be performed in class using locked down word processing appliances or in longhand. It may be that the days of assigning unsupervised schoolwork are over.

                • Juno@beehaw.org

                  Oh, also, regarding compartmentalized language models in the brain: profanity and swearing are stored in muscle memory, not the frontal lobe. That’s why, if you lose the power of speech due to a stroke, you’d still be able to shout profanity of some kind.

                  • jadero

                    Hah! Yes, I was aware of that. I only hope that, should I be so afflicted, it still applies to using some of those words in the gloriously flexible ways they are capable of. :)

    • Rhaedas@kbin.social

      Hallucinations come from the training being weighted toward producing a satisfactory-sounding answer as output. A future AGI, or LLMs guided by one, would look at the human responses and determine why the answers weren’t good enough, but current LLMs can’t do that. I will admit I don’t know how the longer-memory versions work, but there’s still no actual thinking; it’s possibly just wrapping up previously generated text along with the new requests to influence a closer new answer.
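      For what it’s worth, a minimal sketch of that “wrap the previous text up with the new request” idea might look like the following. The query_llm function here is a hypothetical stand-in for whatever completion API is being used, not a real library call; the point is only that the conversational “memory” is typically just the transcript pasted back into the next prompt:

      ```python
      # Minimal sketch (an assumption, not any particular product's code):
      # the conversational "memory" is just the prior exchange re-sent each time.
      def query_llm(prompt: str) -> str:
          raise NotImplementedError("stand-in for a real LLM completion call")

      history = []  # accumulated transcript, the only "memory" there is

      def chat(user_message: str) -> str:
          # Wrap everything said so far around the new request.
          prompt = "\n".join(history + [f"User: {user_message}", "Assistant:"])
          reply = query_llm(prompt)
          history.append(f"User: {user_message}")
          history.append(f"Assistant: {reply}")
          return reply
      ```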

      • jadero

        I wonder how creative these things are. Somewhere between “hallucination” and fully verifiable correct answers based on current knowledge, there might be a “zone of creativity.”

        I would argue that there is no such thing as something completely from nothing. Every advance builds on work that came before, often combining bodies of knowledge in disparate fields to discover new insights.

        • Rhaedas@kbin.social

          Where their creativity lies at the moment seems to be in a controlled mixing of previous things, which in some areas satisfies the definition of creativity, such as with artistic images or some literature. Less so with things that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is (usually) actually thinking throughout the process about what adds and what detracts. Right now the human feedback on the results is still important. I can’t think of any example where we’ve yet successfully unleashed LLMs into the world confident enough of their output not to filter it. It’s still only a tool of generation, albeit a very complex one.

          What’s troubling throughout the whole explosion of LLMs is how the safety of their potential is still an afterthought, or a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say they haven’t sparked something, as an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…

          • jadero

            I like your comment regarding the (usually) thoughtful effort that goes into creative endeavours. I know that there are those who claim that deliberate effort is antithetical to the creative process, but even serendipitous results have to be deliberately examined and refined. Until a system can say “oh, that’s interesting enough to investigate further” I’m not convinced that it can be called creative. In the context of LLMs, I think that means giving them access to their own outputs in some way.

            As for the dangers, I’m pretty sure that most of us, even those of us looking for danger, will not recognize it until we see it. That doesn’t mean we should just barrel ahead, though. Just the opposite. That’s why we need to move slowly. Our reflexes and analytical capabilities are pretty slow in comparison to the potential rate of development.

            • Rhaedas@kbin.social

              In the context of LLMs, I think that means giving them access to their own outputs in some way.

              That’s what the AutoGPTs do (as well as others; there are so many now): they pick the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than just a one-time mass input. The biggest advantage, and the main reason these were developed, was to keep the LLM on course without deviation.
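              Roughly, and only as a sketch (the real Auto-GPT code is far more elaborate, with tools, self-critique, and persistent memory, and ask_llm below is a hypothetical stand-in for an actual completion call), the decompose-and-feed-back loop looks something like this:

              ```python
              # Rough sketch of the loop described above, not actual Auto-GPT code.
              def ask_llm(prompt: str) -> str:
                  raise NotImplementedError("stand-in for a real LLM API call")

              def run_task(goal: str, max_steps: int = 5) -> str:
                  # First ask the model to break the goal into smaller steps.
                  steps = ask_llm(f"Break this goal into short numbered steps:\n{goal}").splitlines()
                  notes = []  # intermediate results, fed back in on every step
                  for step in steps[:max_steps]:
                      result = ask_llm(
                          f"Goal: {goal}\nDone so far:\n" + "\n".join(notes) +
                          f"\nNow do this step: {step}"
                      )
                      notes.append(f"{step} -> {result}")
                  # Finally, combine the intermediate results into one answer.
                  return ask_llm(f"Goal: {goal}\nUsing these results, give the final answer:\n" + "\n".join(notes))
              ```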

              • jadero

                Thanks, I didn’t know that. I guess I need to broaden my reading.

                • Rhaedas@kbin.social

                  It changes so much so fast. For a video source to grasp the latest stuff, I’d recommend the YouTube channel “AI Explained”.

    • Gnome Kat@lemmy.blahaj.zone

      There is a flip side of the coin for #2, and it’s something no one really wants to talk about. People actually get very emotional if you even suggest it. That is the consciousness issue.

      Basically, if the claim is that machine learning is on the right path to explaining how our minds work, which is a claim I’m inclined to agree with, then it seems unreasonable to dismiss the idea that deep neural networks might now have some kind of qualitative conscious experience. I am not going to say for sure that they do have conscious experience, they might not, but I think it’s wholly unreasonable to dismiss the possibility out of hand.

      As it stands, we don’t have any well-accepted theories on how consciousness arises at all. The issue is actually something science is not well equipped to address in its current state; we need fundamental philosophy to address it (I’m talking academic philosophy, not woo-woo crystal shit; I shouldn’t need to say this).

      The best we can do now is try to find what are referred to as “neural correlates of consciousness,” which are the correlations between neural states and conscious experiences, but we don’t have a way of explaining why those activity patterns produce the experiences they do. We have theories on how matter acts, not on what matter experiences. There is no connection between information processing and experience; that link just does not exist in our theoretical frameworks, and it’s unlikely to appear with just more understanding of the details of how information is being processed in the brain. We need some way to link types of information processing to types of conscious experience. The closest we have is stuff like integrated information theory, but it’s not fully accepted.

      • jadero

        I agree that consciousness is a sensitive issue. I haven’t refined my thinking on it far enough to really argue my position, but I suspect that it’s just one more aspect of the “mind of the gaps”. As with the various “god of the gaps” creationist arguments, I think that consciousness will end up falling into that same dead end. That is, we’ll get far enough to start feeling comfortable with the idea that gaps are only gaps in the record or in our understanding, not failures of theory.

        Some current discussion of the matter is already starting to set up the relevant boundaries. We have ourselves as conscious beings. Over time we’ve come to accept that those with mental and intellectual disabilities are conscious. Some attempts to properly define consciousness leave us no choice but to conclude that consciousness is like intelligence in that there are degrees of consciousness. That, in turn, opens the door to the possibility of consciousness in everything from crows and octopuses to butterflies and earthworms to bacteria and even plants.

        I find it particularly interesting that the “degrees of consciousness” map pretty nicely to the “degrees of intelligence”.

        So if you were to ask me today if my old Fidelity chess computer was conscious, I’d say “to a low degree”. Not because I claim any kind of special knowledge, but because I’d be willing to bet a small amount of money that we’ll get to the point where the question can actually be answered with confidence and that the answer would likely be “to a low degree”.

        As for your discussion of the neural correlates of consciousness, my opinion is that claiming this still tells us nothing about “what matter experiences” is a step into the “mind of the gaps”. I’m happy enough to have those correlates as evidence that information processing and consciousness cannot be kept separate.