I may be an old man yelling at clouds, but I still think programming skills aren't going anywhere. He seems to be betting his future on his 'predictions'.

  • Dr. Taco@lemmy.dbzer0.com · 1 day ago

    The only thing AI will replace is small, standalone scripts/programs.

    For now. Eventually, I’d expect LLMs to be better at ingesting the massive existing codebase and taking it into account and either planning an approach or spitting out the first iteration of code. Holding large amounts of a language in memory and adding to it is their whole thing.

    • ulterno · 24 hours ago

      Hopefully we can fix the energy usage first.

      I have a feeling that the Dyson sphere will end up powering only the AI of the world and nothing else.

    • Badabinski@kbin.earth · 24 hours ago

      They can ingest context and predict contextually relevant symbols, but will they ever do so in a way that is consistently meaningful and accurate? I personally don't think so, because an LLM is not a machine capable of logical reasoning. LLMs hallucinating is just them making bad predictions, and I don't think we're going to fix that regardless of the amount of context we give them. LLMs generating useful code in this context is, in my opinion, like asking for perfect meteorological reports with our current understanding of weather systems. It feels like any system capable of doing what you suggest would need to be able to actually generate its own model that matches the task it's performing.

      Or not. I dunno, I’m just some idiot software dev on the internet who only has a small amount of domain knowledge and who shouldn’t be commenting on shit like this without having had any coffee.

      • Dr. Taco@lemmy.dbzer0.com · 23 hours ago

        I don't think they necessarily need logical reasoning. Solid enough test cases, automated test plans, and the ability to use trial & error rapidly mean that they can throw a bunch of stuff at the wall and release whatever sticks.

        I've already seen some crazy stuff set up just with a customized model connected to a bunch of ADO pipelines that can shit out reasonably functional code, test it, and release it autonomously. It's front-ended by a chatbot, where the devs can provide a requested tweak in plain English and have their webapp updated in a few minutes. Right now, there's a manual review/approval process in place, but this is using commodity shit in 2025. Imagine describing that scenario to someone in 2015 and tell me we can accurately predict the limitations there will be in 2035, '45, etc.
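        To make the "throw stuff at the wall and release whatever sticks" loop concrete, here's a minimal Python sketch. Everything in it is hypothetical: `request_patch` stands in for the LLM call (stubbed with canned candidates here), and the real pipeline would queue passing candidates for the manual review step mentioned above rather than returning them directly.

        ```python
        # Hypothetical generate-and-test loop: keep asking the model for
        # candidate patches until one passes the test suite, or give up.

        # Canned candidates standing in for successive LLM outputs.
        CANDIDATES = [
            "def add(a, b): return a - b",   # buggy first attempt
            "def add(a, b): return a + b",   # correct retry
        ]

        def request_patch(prompt: str, attempt: int) -> str:
            """Stand-in for an LLM call: returns the next canned candidate."""
            return CANDIDATES[attempt % len(CANDIDATES)]

        def passes_tests(source: str) -> bool:
            """Run the candidate against a fixed test case; a crash counts as a fail."""
            namespace = {}
            try:
                exec(source, namespace)
                return namespace["add"](2, 3) == 5
            except Exception:
                return False

        def generate_until_green(prompt: str, max_attempts: int = 5):
            """Retry until a candidate passes, then hand it off (here: return it).

            In the pipeline described above, this is where the patch would go
            to manual review/approval before release.
            """
            for attempt in range(max_attempts):
                candidate = request_patch(prompt, attempt)
                if passes_tests(candidate):
                    return candidate
            return None

        print(generate_until_green("add two numbers"))
        ```

        The point of the sketch is that no reasoning is required of the model itself: the test harness does the judging, and the loop just samples until something sticks.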

        I don’t think the industry’s disappearing anytime soon, but I do think we’ll see AI eating up some of the offshore/junior/mid-level work before I get to retire.