• commandar@lemmy.world · 7 months ago

    “Smarter” is the wrong way to look at it. LLMs don’t reason. They have only a limited ability to contextualize, and they have no long-term memory (in the sense of forming conclusions based on prior events).

    They potentially have access to more data than any individual human and can respond to requests for that data more quickly.

    Which is a long way of saying that they can arguably be more knowledgeable about any given topic, but that’s a separate measure from “smart,” which encompasses much, much more.