• @[email protected]
      6 months ago

      I think we’re still a bit far off from that. No doubt the models will be quite good, but they won’t be anywhere near general intelligence.

      • acannan
        6 months ago

        Cross modality is what’s missing. We have models that can produce text, hear things, and see things really well. What we don’t have yet is communication between those individual models, and probably some executive model to coordinate them. We’re probably still a few years from beginning to touch general intelligence
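The "executive model" idea above can be sketched as a simple router that dispatches each input to a modality-specific model and merges the results. Everything here is hypothetical illustration; the class names and the string-returning stubs are made up, not any real API:

```python
# Hypothetical sketch: an "executive" coordinating modality-specific models.
# The specialist models are stubs standing in for real text/vision/audio models.

class TextModel:
    def run(self, payload):
        return f"text analysis of {payload!r}"

class VisionModel:
    def run(self, payload):
        return f"description of image {payload!r}"

class AudioModel:
    def run(self, payload):
        return f"transcript of audio {payload!r}"

class Executive:
    """Routes each (modality, payload) pair to the right specialist
    and merges the outputs into one combined answer."""

    def __init__(self):
        self.experts = {
            "text": TextModel(),
            "image": VisionModel(),
            "audio": AudioModel(),
        }

    def handle(self, inputs):
        results = [self.experts[modality].run(payload)
                   for modality, payload in inputs]
        # A real executive model would reason jointly over these results;
        # here we just concatenate them.
        return " | ".join(results)

exec_model = Executive()
print(exec_model.handle([("image", "cat.png"), ("text", "what animal is this?")]))
```

The hard part the comment points at is, of course, the `handle` step: replacing plain concatenation with a model that actually reasons over the specialists' outputs.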

    • @[email protected]
      6 months ago

      It probably won’t happen until we move to new hardware architectures.

      I do think LLMs are a great springboard for AGI, but I don’t think the current hardware allows for models to cross the hump to AGI.

      There’s not enough recursive self-interaction in the network to encode nonlinear representations. This past year we saw a number of impressive papers exhibiting linear representations of world models extrapolated from training data, but no nonlinear representations have been discovered yet, and I don’t think there will be.
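The "linear representations of world models" results referenced above are typically demonstrated with linear probes: fit a linear map from a model's hidden activations to some world-state variable and check how well it predicts. A minimal sketch on synthetic data (the activations and the encoded state are fabricated for illustration, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hidden activations": 200 samples, 16 dimensions.
H = rng.normal(size=(200, 16))

# Pretend some world-state variable is linearly encoded in the
# activations along a fixed direction, plus a little noise.
true_direction = rng.normal(size=16)
world_state = H @ true_direction + 0.01 * rng.normal(size=200)

# A linear probe is just least-squares regression from activations to state.
probe, *_ = np.linalg.lstsq(H, world_state, rcond=None)

# R^2 near 1.0 means the state is linearly decodable from the activations.
pred = H @ probe
r2 = 1 - np.sum((world_state - pred) ** 2) / np.sum((world_state - world_state.mean()) ** 2)
print(f"probe R^2: {r2:.3f}")
```

If the state were encoded only nonlinearly (the situation the comment argues current models can't reach), a linear probe like this would fail and you'd need a nonlinear probe to decode it.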

      But when we switch to either optoelectronics or to colocating processing with memory on a per-node basis, the next generation of algorithms built for that hardware may supply the final piece missing from modern LLMs: going beyond extrapolating from the training set to pulling nonlinear representations of world models out of the data. (Things like saying “I don’t know” should be far more prominent in that next generation of models.)

      From there, we’ll quickly get to AGI, but until then I’m skeptical that classical/traditional hardware will get us there.