When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • vrighter@discuss.tchncs.de · 2 months ago

    Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn’t make it better.

    • linearchaos@lemmy.world · 2 months ago

      This is incorrect, or perhaps outdated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.
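
      Roughly the shape of that pipeline, as a toy sketch (every name below is made up and purely illustrative, not any real API):

          import random

          # Toy stand-ins: generate() plays model A producing new data,
          # tag() plays a different model B that labels A's output.
          SUBJECTS = ["the cat", "a reactor", "the market"]
          VERBS = ["melts", "rises", "sleeps"]

          def generate():
              return f"{random.choice(SUBJECTS)} {random.choice(VERBS)}"

          def tag(sentence):
              return "technical" if "reactor" in sentence else "general"

          # A labeled synthetic dataset that a third model could then train on.
          synthetic = [(s, tag(s)) for s in (generate() for _ in range(1000))]
          print(synthetic[:3])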

      • vrighter@discuss.tchncs.de · 2 months ago

        Yes it is, and it doesn’t work.

        edit: to expand: if you’re generating data, it’s an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).
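
        A toy illustration of that last point, with bootstrap resampling standing in for “fit a model, then sample from it” (numbers made up):

            import numpy as np

            rng = np.random.default_rng(0)
            data = rng.normal(0.0, 1.0, 1_000)  # generation 0: "real" data

            for gen in range(1, 31):
                # Each generation trains only on the previous generation's output;
                # a bootstrap resample stands in for "fit a model, sample from it".
                data = rng.choice(data, size=data.size, replace=True)
                if gen % 10 == 0:
                    print(f"gen {gen}: unique values = {np.unique(data).size}, "
                          f"max |x| = {np.abs(data).max():.2f}")

        The resampled set never regains a value it has lost, so the tails thin out generation after generation.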

        • Terrasque@infosec.pub · 2 months ago

          The Dolphin fine-tunes and Microsoft’s Phi models have used this successfully, and there’s some evidence that all newer models use big LLMs to produce synthetic data (like, when asked, answering that they’re ChatGPT or Claude, hinting that at least some of the dataset comes from those models).

          • vrighter@discuss.tchncs.de · 2 months ago

            from their own site:

            Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.

              • vrighter@discuss.tchncs.de · 2 months ago

                Yeah, what’s your point? I said hallucinations are not a solvable problem with LLMs. You mentioned that Alpaca used synthetic data successfully. By their own admission, all the problems are still there. Some are worse.

        • Rivalarrival@lemmy.today · 2 months ago

          It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.

          It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback.

          It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn’t immediately call it a liar.

          • vrighter@discuss.tchncs.de · 2 months ago

            Yeah, that implies that the other network(s) can tell right from wrong. Which they can’t, because if they could, the problem wouldn’t need solving.

            • Rivalarrival@lemmy.today · 2 months ago

              What other networks?

              It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.

              • LillyPip@lemmy.ca · 2 months ago

                Have you tried doing this? I have, for nearly a year* now, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation as it progresses, and all the models are worse now at maintaining context and not hallucinating than they were several months ago.

                LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.

                *edit: time. Wow, where did this year go?

              • vrighter@discuss.tchncs.de · 2 months ago

                Here’s that same conversation with a human:

                “Why is X?” “Because Y!” “You’re wrong.” “Then why the hell did you ask me if you already know the answer?”

                What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.

                • Rivalarrival@lemmy.today · 2 months ago

                  I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.

                  “Johnny, what’s 2+2?”

                  “5?”

                  “No, Johnny, try again.”

                  “Oh, it’s 4.”

                  Turning Johnny into an LLM: the next time someone asks, he might not remember 4, but he does remember that “5” consistently gets him a “that’s wrong” response. So does “3”.

                  But the only way he knows 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.

                  He becomes a better and better mimic, which gets him up to about a fifth-grade level of intelligence instead of a toddler’s.
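
                  A toy sketch of that tally (a made-up bandit-style loop, nothing like a real training setup):

                      import random
                      from collections import defaultdict

                      # "Johnny" keeps no model of arithmetic at all, only a
                      # running score of which answers to "2+2" earned a scolding.
                      score = defaultdict(float)
                      ANSWERS = [3, 4, 5]

                      def johnny():
                          best = max(score[a] for a in ANSWERS)
                          return random.choice([a for a in ANSWERS if score[a] == best])

                      for _ in range(20):
                          guess = johnny()
                          score[guess] += 1.0 if guess == 4 else -1.0  # teacher's reaction

                      print(johnny())  # settles on 4 without ever learning what numbers mean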

                  • vrighter@discuss.tchncs.de · 2 months ago

                    Turning Johnny into an LLM does not work, because that’s not how the kid learns. Kids don’t learn math by mimicking answers; they learn math by learning the concept of numbers. What you just taught the LLM is simply the answer to 2+2. Also, with LLMs there is no “next time”; it’s a completely static model.