Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.
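
As a point of reference for the "next word predictor" end of the question, below is a minimal toy sketch of what next-word prediction means mechanically: count which word follows which in a tiny corpus, then always emit the most frequent follower. The corpus and the bigram approach are illustrative assumptions only; real LLMs like GPT-4 condition on the whole context with a neural network, but the output step has the same shape, a probability distribution over the next token.

    # Toy illustration (not from the post): greedy next-word prediction
    # from bigram counts over a tiny made-up corpus.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent follower of `word`, or None if unseen."""
        followers = bigrams.get(word)
        return followers.most_common(1)[0][0] if followers else None

    # Generate greedily, starting from "the".
    word, generated = "the", ["the"]
    for _ in range(4):
        word = predict_next(word)
        if word is None:
            break
        generated.append(word)

    print(" ".join(generated))  # prints: the cat sat on the

Real models also sample from that distribution rather than always taking the top word, which is where most of the more interesting behavior comes from.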

    • @[email protected]
      2 • 12 hours ago

      It could be.

      I think intelligence is ill-defined and immeasurable, so I don’t think it can be quantified and fit into a graph.

    • @[email protected]
      3 • 18 hours ago

      I think you point out the main issue here. WTF is intelligence as defined by this axis? IQ? Which famously doesn’t actually measure intelligence so much as predict future academic performance?

    • Todd Bonzalez
      1 • 17 hours ago

      Human intelligence created language. We taught it to ourselves. That’s a higher order of intelligence than a next-word predictor.

      • @Sl00k
        2 • 12 hours ago

        I can’t seem to find it now, but there was a research paper floating around about two GPT models designing a language to use between themselves for token efficiency while still relaying all the information, which is pretty wild.

        Not sure if it was peer reviewed though.

      • @[email protected]
        2 • 17 hours ago

        That’s like looking at the “which came first, the chicken or the egg” question as a serious question.