• Kogasa · 1 year ago

    It’s obviously not “just” luck. We know LLMs learn a variety of semantic models of varying degrees of correctness. It’s just that no individual (inner) model is really that great, and most of them are bad. LLMs aren’t reliable or predictable enough to be a trustworthy source of information for humans, but they’re not pure gibberish generators either.

    • Veraticus@lib.lgbt (OP) · 1 year ago

      No, you’re right, “luck” might be overstating it. There’s a good chance most of what it says is about as accurate as the corpus it was trained on. That doesn’t make me personally very confident, but ymmv.