• GenChadT · 2 months ago

    That’s my problem with “AI” in general. With LLMs, it seems impossible to “engineer” a complete piece of software in any capacity beyond editing a line or two inside individual functions. Too many times I’ve asked GPT/Gemini for a small change to a file and had to revert it because the model took it upon itself to re-engineer the architecture of my entire application.

    • hisao@ani.social · 2 months ago

      I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions that are part of a feature, or one refactoring. I make the manual edits quickly and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application down to the levels where it’s needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people, maybe it loves me. I’ve only ever used 4.1 and possibly 4o in free mode in Copilot.
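      For scale, the kind of prompt-sized unit described above is roughly one self-contained function. A minimal sketch in Python, assuming a made-up fixed-layout binary header (not any real format):

      ```python
      # A sketch of a "one prompt = one function" unit: parse a small
      # fixed-layout binary header. The format (magic, version, record
      # count) is invented here purely for illustration.
      import struct
      from dataclasses import dataclass

      @dataclass
      class Header:
          magic: bytes       # 4-byte file signature
          version: int       # u32, little-endian
          record_count: int  # u32, little-endian

      def parse_header(data: bytes) -> Header:
          if len(data) < 12:
              raise ValueError("header truncated")
          magic, version, record_count = struct.unpack_from("<4sII", data, 0)
          if magic != b"DEMO":
              raise ValueError(f"unexpected magic: {magic!r}")
          return Header(magic, version, record_count)
      ```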

      • GenChadT · 2 months ago

        It’s an issue of scope. People often give the AI too much to handle at once, myself (admittedly) included.

      • kescusay@lemmy.world · 2 months ago

        Are you using Copilot in agent mode? That’s where it breaks shit. If you’re using it in ask mode with the file you want to edit added to the chat context, then you’re probably going to be fine.

      • FauxLiving@lemmy.world · 2 months ago

        It’s a lot of people not understanding the kinds of things it can do vs the things it can’t do.

        It was like when people tried to search early Google with plain-language queries (“What is the best restaurant in town?”) and got bad results. The search engine had limited capabilities, and understanding natural language wasn’t one of them.

        If you ask an LLM to write a function to print the sum of two numbers, it can do that with a high success rate. If you ask it to create a new operating system, it will produce hilariously bad results.
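        For scale, the first kind of request fits in a handful of lines; a sketch of roughly what such a prompt produces (names chosen here for illustration):

        ```python
        def print_sum(a: float, b: float) -> None:
            """Print the sum of two numbers."""
            print(a + b)

        print_sum(2, 3)  # prints 5
        ```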