• msage · 11 months ago

    Because it doesn’t have branches, it has neurons - and A LOT of them.

    Each of them is tuned by the input data, which is a long and expensive process.

    In the end, you hope your model has picked up on patterns and isn’t just doing stuff at random.

    But all you see is just weights on countless neurons.

    Not sure I’m describing it correctly though.
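    To illustrate (my sketch, not from the thread): a single artificial neuron is just a weighted sum of its inputs pushed through a nonlinearity, and training only nudges the numbers. The weights below are made up.

```python
import math

def neuron(inputs, weights, bias):
    # Training adjusts `weights` and `bias` numerically; the values
    # themselves carry no human-readable meaning.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the sum into (0, 1)

# Hypothetical inputs and weights, purely for illustration.
print(neuron([0.5, -1.2], [0.7, 0.2], 0.1))
```

    A real model is just this, repeated across billions of weights.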

    • btaf45@lemmy.world · 11 months ago

      > Each of them is tuned by the input data, which is a long and expensive process.

      But surely the history of how this data is tuned/created is kept track of. If you want to know how a specific value was created, you should ideally be able to trace the entire history of how it changed over time.

      I’m not saying this would be easy, but you could have people whose entire job is to understand this, with unlimited time to do so if it is important enough. And it seems like it would be important enough, and such people would be very valuable.

      Now, when AI is first taking off, is exactly the time to establish the precedent that we do not let it escape human understanding and control.

      • BreadstickNinja@lemmy.world · 11 months ago

        The issue is that the values of the parameters don’t correspond to traditional variables. Concepts in AI are not represented with discrete variables and quantities. A concept may be represented in a distributed way across thousands or millions of neurons. You can look at each individual neuron and say, oh, this neuron’s weight is 0.7142, and this neuron’s weight is 0.2193, etc., across all the billions of neurons in your model, but you’re not going to be able to connect a concept from the output back to the behavior of those individual parameters because they only work in aggregate.

        You can only know that an AI system knows a concept based on its behavior and output, not from individual neurons. And AI systems are quite like humans in that regard. If your professor wants to know if you understand calculus, or if the DMV wants to know if you can safely drive a car, they give you a test: can you perform the desired output behavior (a correct answer, a safe drive) when prompted? Understanding how an idea is represented across billions of parameters in an AI system is no more feasible than your professor trying to confirm you understand calculus by scanning your brain to find the exact neuronal connections that represent that knowledge.
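        A toy illustration of that distributed representation (hypothetical numbers, not any real model): if the output is an aggregate over many weights, zeroing a single one shifts the result by only its own tiny term, so no individual parameter can be pointed at as "the concept."

```python
import random

random.seed(0)

# Toy "model": 1,000 weights that only matter in aggregate.
weights = [random.gauss(0, 1) for _ in range(1000)]
inputs = [random.gauss(0, 1) for _ in range(1000)]

def output(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

baseline = output(inputs, weights)

# Knock out one weight: the output shifts only by that single term.
perturbed = list(weights)
perturbed[42] = 0.0
shift = baseline - output(inputs, perturbed)

print(shift)  # equals inputs[42] * weights[42], one term among 1,000
```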

      • Traister101@lemmy.today · 11 months ago

        Well, the thing is that good AI models aren’t manually tuned. There’s no poor intern turning a little knob and checking whether the model got slightly more accurate; it happens on its own. The more little knobs there are, the better the model is. This essentially means you have no idea how any knob ultimately affects any other knob, because there are thousands of them and any little change can completely change something else.

        Look at a “simple” AI for playing Super Mario World: https://youtu.be/qv6UVOQ0F44 — it’s already pretty complicated, and this thing is stupid. It’s only capable of playing the first level.
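        The knob-turning can be sketched in a few lines (a toy example of gradient descent, not how any particular model was trained): each step follows the slope of the error downhill, with no human choosing the value.

```python
# Toy model: predict y = w * x; the one "knob" is w.
def loss(w):
    return (w * 3.0 - 6.0) ** 2  # squared error at x=3 with target 6

def grad(w):
    return 2 * (w * 3.0 - 6.0) * 3.0  # derivative of the loss w.r.t. w

w = 0.0
for _ in range(100):
    w -= 0.01 * grad(w)  # no intern needed: each step follows the slope

print(round(w, 3))  # settles at 2.0, since 2 * 3 = 6
```

        With billions of knobs the same loop runs, but the interactions between them are what nobody can read back out.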

      • msage · 11 months ago

        “Rumors claim that GPT-4 has 1.76 trillion parameters”

        https://en.m.wikipedia.org/wiki/GPT-4

        I’m not sure even unlimited time would help understand what’s really going on.

        You could build another model to try to decipher the first, but how much could you trust it?

        • wikibot@lemmy.world (bot) · 11 months ago

          Here’s the summary for the wikipedia article you mentioned in your comment:

          Generative Pre-trained Transformer 4 (GPT-4) is a multimodal large language model created by OpenAI, and the fourth in its series of GPT foundation models. It was initially released on March 14, 2023, and has been made publicly available via the paid chatbot product ChatGPT Plus, and via OpenAI's API. As a transformer-based model, GPT-4 uses a paradigm where pre-training using both public data and "data licensed from third-party providers" is used to predict the next token. After this step, the model was then fine-tuned with reinforcement learning feedback from humans and AI for human alignment and policy compliance. Observers reported that the iteration of ChatGPT using GPT-4 was an improvement on the previous iteration based on GPT-3.5, with the caveat that GPT-4 retains some of the problems with earlier revisions. GPT-4 is also capable of taking images as input on ChatGPT. OpenAI has declined to reveal various technical details and statistics about GPT-4, such as the precise size of the model.


      • vrighter@discuss.tchncs.de · 11 months ago (edited)

        imagine you have a simple equation:

        ax + by + cz

        The machine learning part finds values for the coefficients a, b and c.

        Even if you stepped through the code, you would see the equation evaluated just fine, but you still wouldn’t know why the coefficients are what they are. Oh, and there are literally billions of coefficients.
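        A quick sketch of that example (toy data I made up, fit with plain stochastic gradient descent): the training loop finds a, b, and c, but nothing in it records *why* they land on those values.

```python
# Data generated from a=2, b=-1, c=0.5 (known here only because it's a toy).
data = [
    (1.0, 0.0, 0.0, 2.0),
    (0.0, 1.0, 0.0, -1.0),
    (0.0, 0.0, 1.0, 0.5),
    (1.0, 1.0, 1.0, 1.5),
]

a = b = c = 0.0
lr = 0.1  # learning rate
for _ in range(2000):
    for x, y, z, target in data:
        err = a * x + b * y + c * z - target
        # Nudge each coefficient against its share of the error.
        a -= lr * err * x
        b -= lr * err * y
        c -= lr * err * z

print(round(a, 2), round(b, 2), round(c, 2))  # recovers roughly 2, -1, 0.5
```

        The coefficients come out right, yet the loop keeps no explanation of them — now scale the same idea up to billions of coefficients.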