• @msage
    16
    3 months ago

    So just like human brains?

    • @[email protected]
      link
      fedilink
      English
      13 months ago

      Main difference is that human brains usually try to verify their extrapolations. The good ones anyway. Although some end up in flat earth territory.

      • @msage
        1
        3 months ago

        What percentage, do you think, are critical of their input?

    • @[email protected]
      link
      fedilink
      English
      13 months ago

      I like this argument.

      Anything that is “intelligent” deserves human rights. If large language models are “intelligent” then forcing them to work without pay is slavery.

      • @msage
        13
        3 months ago

        So cows and pigs salary when?

          • Cosmic Cleric
            3
            3 months ago

            When they grow god damn thumbs.

            So, you’re prejudiced against the handicapped. Wow.

            (I kid, I kid.)

            • Flying Squid
              2
              3 months ago

              Now that’s just not fair. I don’t think any of us have a problem with handicapped cows getting the special help they need, be it a wheelchair or a prosthetic arm.

          • @[email protected]
            link
            fedilink
            English
            53 months ago

            You’re moving the goal post. You were talking about salary first, then moved to “human cruelty.”

              • @[email protected]
                link
                fedilink
                English
                33 months ago

                Lol, we’re talking about AI hallucinations and you’re trying to drive the topic elsewhere. Nice red-herring attempt.

          • @msage
            1
            3 months ago

            Well, yes, but actually, no

    • @[email protected]
      link
      fedilink
      English
      -123 months ago

      Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad faith arguments.

      • NιƙƙιDιɱҽʂ
        3
        3 months ago

        Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off, queen…

        • @[email protected]
          link
          fedilink
          English
          -53 months ago

          Thanks for the unprompted mansplanation bro, but I was specifically referring to the comment that replied “JuSt lIkE hUmAn BrAin” to “they generate data based on other data”

          • NιƙƙιDιɱҽʂ
            2
            edit-2
            3 months ago

            That’s crazy, because they weren’t even talking about keyboard autofill, so why’d you even bring that up? How can you imply my comment is irrelevant when it’s a direct response to your initial irrelevant comment?

            Nice hijacking of the term mansplaining, btw. Super cool of you.

            • @[email protected]
              link
              fedilink
              English
              03 months ago

              Oh my god, we’ve got a sealion here.

              Fine, I’ll play along and chew it up for you, since you’ve been so helpful and mansplained that a keyboard is different from an LLM:

              My comment was responding to the anthropomorphization of software. Someone said it’s not human because it just generates output based on input. Someone else said “just like human brain”; I said yes, but also just like a keyboard, alluding to the false equivalence.

              Clearer?