I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier in particular is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
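
One concrete demo that sometimes helps: a toy next-word generator built from nothing but word-pair counts can produce locally fluent text while obviously having no idea what it is saying, and an LLM is (very roughly) the same trick at an enormously larger scale. A minimal sketch, with a made-up toy corpus just for illustration:

```python
import random
from collections import defaultdict

# Toy "language model": record which words follow which in a tiny corpus,
# then generate text by repeatedly sampling a plausible next word.
# There is no meaning or intent anywhere, only word-pair statistics.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

followers = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word].append(next_word)

def generate(start="the", length=12):
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())  # e.g. "the dog chased the cat sat on the rug . the cat sat"
```

Run it a few times and you get output that looks grammatical word by word, like “the dog chased the cat sat on the mat”, yet means nothing as a whole; nowhere in the code is there anything that could “intend” something.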

  • JackbyDev
    7 months ago

    Joel Haver has a sketch in which one person in a group laughs at an inside joke from a trip they didn’t go on. When pressed, I think they say something like they laughed because everyone else was. As someone who has been in this situation, it’s true. Even though I don’t understand the specific reference being made, it’s usually delivered in a funny manner such that the storytelling is enjoyable and humorous. Or I’m able to use context clues to guess what they might be joking about and find it funny, even if my understanding is off.