• expr (+44/-5) · 8 months ago

    It’s got nothing to do with capitalism. It’s fundamentally a matter of people using it for things it’s not actually good at, because ultimately it’s just statistics. The words generated are based on a probability distribution derived from its (huge) training dataset. It has no understanding or knowledge. It’s mimicry.
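    A toy sketch of that point (the word counts are invented for illustration, not from any real model): "generating" the next word is just drawing from a probability distribution built from how often words followed each other in training text.

```python
import random

# Invented frequency counts standing in for a training corpus:
# after "the", suppose we saw "cat" 3 times, "dog" 2, "idea" 1.
next_word_counts = {"the": {"cat": 3, "dog": 2, "idea": 1}}

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    counts = next_word_counts[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # e.g. "cat" -- chosen by frequency, not by understanding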

    It’s why it’s incredibly stupid to try using it for the things people are trying to use it for, like as a source of information. It’s a model of language, yet people act like it has actual insight or understanding.

    • hatedbad@lemmy.sdf.org (+1) · 8 months ago

      You’re so close. Why exactly do you think people are using it for these things it’s not meant for?

      Because every company, every CEO, every VP is pushing every sector of their companies to adopt AI, no matter what.

      Most actual people understand the limitations you list, but it’s the capitalists at the table who are making AI show up where it’s not wanted.

      • expr (+26/-1) · 8 months ago

        You do not understand how these things actually work. I mean, fair enough, most people don’t. But it’s a bit foolhardy to propose changes to how something works without understanding how it works now.

        There is no “database”. That’s a fundamental misunderstanding of the technology. It is entirely impossible to query a model to determine if something is “present” or not (the question doesn’t even make sense in that context).

        A model is, to greatly simplify things, a function (in the mathematical sense) that computes a response from the input given. What this computation does is entirely opaque, including to its creators; it’s what we call a “black box”. To create such a function, we start from a completely random mapping of inputs to outputs (we’ll call these mappings “weights” from now on; they are just numbers), then iteratively feed training data to the function, measure how close its output is to what we expect, and adjust the weights based on how close it is. This is a gross simplification of the complexity involved (and doesn’t even touch on the structure of the model’s network itself), but it should give you a good idea.
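        The loop described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example with a single weight and a made-up target function (y = 2x); real models repeat the same idea over billions of weights via backpropagation.

```python
import random

# Training data: inputs paired with the outputs we expect.
data = [(x, 2.0 * x) for x in range(1, 6)]

# Start from a completely random weight.
w = random.uniform(-1.0, 1.0)

for _ in range(200):
    for x, target in data:
        prediction = w * x
        error = prediction - target
        # Adjust the weight (a number) based on how far off we were.
        w -= 0.01 * error * x

print(round(w, 3))  # converges toward 2.0 -- the function we "trained" it to mimic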

        It’s applied statistics: we’re effectively creating a probability distribution over natural language itself, where we predict the next word based on how frequently we’ve seen words in a particular arrangement. This is old technology (it dates back to the 90s) that has hit the mainstream due to increases in computing power (training models is very computationally expensive) and massive increases in the size of the datasets used in training.
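        The classic version of that idea is an n-gram model: count how often word pairs occur in a corpus and turn the counts into probabilities. A minimal sketch, with an invented nine-word “corpus”:

```python
from collections import Counter

# Tiny invented training corpus.
corpus = "the cat sat on the mat the cat ran".split()

# Count adjacent word pairs (bigrams).
bigrams = Counter(zip(corpus, corpus[1:]))

# Estimate P(next word | previous word = "the") from the counts.
after_the = {b: c for (a, b), c in bigrams.items() if a == "the"}
total = sum(after_the.values())
probs = {w: c / total for w, c in after_the.items()}

print(probs)  # "cat" is most probable after "the", purely from frequency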

        Source: senior software engineer with a computer science degree and multiple graduate-level courses on natural language processing and deep learning

        Btw, I have serious issues with both capitalism itself and machine learning as it is applied by corporations, so don’t take what I’m saying to mean that I’m in any way an apologist for them. But it’s important to direct our criticisms of the system as precisely as possible.

      • wahming@monyet.cc (+9/-2) · 8 months ago

        It’s not a database. God, how many years is it going to take before people understand just what LLMs are and are not capable of?