• GetOffMyLan
    3 months ago

    Current LLMs are general-purpose, essentially demonstration models. This is basically generation 0.

    Once the training process can be optimized further, which is being heavily researched, you'll be able to create models for specific languages or even frameworks.

    Dedicated hardware will be another huge boost.

    Then they’ll start to be amazing.

    • TheOneCurly@lemm.ee
      3 months ago

      Only time will tell, I guess, but it seems to me like they've trained on everything they can train on: it stopped getting better a while ago, and the core reliability issues look unsolvable.

      • GetOffMyLan
        3 months ago

        I think the current issue is that if you train one model to write code and poetry and recipes etc., it's going to be less accurate at each individual task.

        But, as you say, we'll see.