• DreamySweet
    5 • 1 year ago

    I love when it tells you it can’t do something and then does it anyway.

    • 𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟OPM
      5 • edit-2 • 1 year ago

      Or when it tells you that it can do something it actually can’t, and it hallucinates like crazy. In the early days of ChatGPT I asked it to summarize an article at a link, and it gave me a very believable but completely false summary based on the words in the URL.

      This was the first time I saw wild hallucination. It was astounding.

      • @Phoenix
        2 • 1 year ago

        It’s even better when you ask it to write code for you: it generates a decent-looking block, but upon closer inspection it imports a nonexistent library that just happens to do exactly what you were looking for.

        That’s the best sort of hallucination, because it gets your hopes up.

        • 𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟OPM
          1 • 1 year ago

          Yes, for a moment you think “oh, there’s such a convenient API for this” and then you realize…

          But we programmers can at least compile/run the code and find out if it’s wrong (most of the time). It is much harder in other fields.
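A minimal sketch of the point above: a hallucinated dependency fails the moment you actually try to run the code, which is what lets programmers catch this class of error. Here `textutils_pro` is a hypothetical package name invented for illustration (the kind of plausible-sounding library an LLM might dream up), while `json` is a real standard-library module:

```python
import importlib

def check_import(module_name: str) -> bool:
    """Return True if the module can be imported, False otherwise."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        # A hallucinated library lands here on the very first run.
        return False

print(check_import("json"))           # real stdlib module → True
print(check_import("textutils_pro"))  # hypothetical, hallucinated name → False
```

Running the generated code (or just its imports) is a cheap first sanity check; fields without an equivalent of "does it even run?" don't get this early warning.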