• @[email protected]
    link
    fedilink
    English
    -9
    28 days ago

    The people here don’t get LLMs and it shows. This is neither surprising nor a bad thing imo.

      • @[email protected]
        link
        fedilink
        English
        3
        28 days ago

        LLMs operate on tokens, not letters, so this is expected behavior. A hammer sucks at controlling a computer, and that’s okay. The issue is the people telling you to use a hammer to operate a computer, not the hammer’s inability to do so.
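
        To make the token point concrete, here’s a toy sketch (the vocabulary and the greedy longest-match splitter are made up for illustration; real subword tokenizers like BPE are more involved): the model only ever sees token IDs, never the letters inside them.

```python
# Toy illustration only -- NOT a real tokenizer. The vocabulary and
# the greedy longest-match rule are invented for this example.
vocab = {"straw": 101, "berry": 202}

def toy_tokenize(word, vocab):
    """Split a word into subword token IDs via greedy longest match."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first, shrinking until a match.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

print(toy_tokenize("strawberry", vocab))  # [101, 202]
```

        The model receives `[101, 202]`, so a question like “how many r’s are in this word?” asks about letters it was never shown directly.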

          • @vcmj
            link
            0
            28 days ago

            It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. It’s actually possible for an AI to get this answer consistently correct these days.

    • Comrade Rain
      link
      fedilink
      3
      edit-2
      28 days ago

      People who make fun of LLMs most often do get LLMs, and are trying to point out how they tend to spew factually incorrect information. That’s a good thing, since many, many people out there do not, in fact, “get” LLMs (most aren’t even acquainted with the acronym and reach for the catch-all term “AI” instead). And there is no better way to warn people about the inaccuracy of LLM output, however realistic it might sound, than to point it out with examples of ridiculously wrong answers to simple questions.

      Edit: minor rewording to clarify