• lordmauve · 7 months ago

    I don’t deny that this kind of thing is useful for understanding the capabilities and limitations of LLMs, but I don’t agree that “the best match of a next phrase given his question, and not because it can actually consider the situation” is an accurate description of an LLM’s capabilities.

    While they are dumb and unworldly, they can consider the situation: they evaluate a learned model of concepts in the world to decide whether the first word of the correct answer is more likely to be “yes” or “no”. They can solve unseen problems that require this kind of cognition.
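
    Concretely, “more likely to be yes or no” maps onto comparing next-token probabilities. Here’s a minimal sketch of that idea using the Hugging Face transformers library; the model name, the prompt, and the choice of comparing just two tokens are all stand-ins for illustration, not how any particular chatbot is actually built:

    ```python
    # Minimal sketch: compare P(" Yes") vs P(" No") as the next token.
    # "gpt2" is just a stand-in model; the question is a made-up example.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Q: Can you fry an egg in an upside-down frying pan? A:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Logits for the token that would come right after the prompt.
        next_token_logits = model(**inputs).logits[0, -1]

    probs = torch.softmax(next_token_logits, dim=-1)
    yes_id = tokenizer.encode(" Yes")[0]
    no_id = tokenizer.encode(" No")[0]
    print(f"P(' Yes') = {probs[yes_id]:.4f}, P(' No') = {probs[no_id]:.4f}")
    ```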

    But they are only book-learned, so they are kind of stupid about common-sense things like frying pans and ovens.

    • 0ops@lemm.ee · 7 months ago

      Huh, “book-learned”, that’s an interesting way to put it. I’ve been arguing for a while that the bottleneck for LLMs might not be their reasoning ability, but the one-dimensionality of their data set.

      I don’t like both-sides-ing, but I’m going to both-sides here: people on the internet have weird expectations for LLMs, which is strange to me because “language” is literally in the name. They “read” words, they “understand” words and their relationships to other words, and they “write” words in response. Yeah, they don’t know the feeling of being burned by a frying pan, but if you were numb from birth you wouldn’t either.

      Not that I think the OP is a good example of this; the concept of “heat” is pretty well documented.