• perishthethought@lemm.ee · 2 months ago

    I don’t normally say this, but the AI tools I’ve used to help me write bash were pretty much spot on.

    • marduk@lemmy.sdf.org · 2 months ago

      Yes, with respect to the grey-bearded uncles and aunties: as someone who never “learned” bash, in 2025 I’m letting an LLM do the bashing for me.

      • SpaceNoodle@lemmy.world · 2 months ago

        Until the magic incantations you don’t bother to understand don’t actually do what you think they’re doing.

        • embed_me · 2 months ago

          Sounds like a problem for future me. That guy hates me lol

        • MBM@lemmings.world · 2 months ago

          I wonder if there’s a chance of getting rm -rf /* or zip bombs. Those are definitely in the training data at least.

          • furikuri · 2 months ago

            The classic rm -rf $ENV/home, where $ENV can be empty or contain spaces, is definitely going to hit someone one day.
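
            A rough sketch of that failure mode and one way to guard against it; $ENV and the paths are just the placeholders from the comment above, and the final command is a dry run via echo rather than a real delete:

              #!/usr/bin/env bash
              # Dangerous: unquoted, an empty ENV turns `rm -rf $ENV/home` into `rm -rf /home`,
              # and an ENV containing spaces splits into several rm targets.

              # Safer: treat unset variables as errors and quote every expansion.
              set -euo pipefail             # -u makes an unset ENV a hard error
              : "${ENV:?ENV must be set}"   # bail out early with a clear message
              echo rm -rf -- "${ENV}/home"  # dry run; quotes keep spaces together, -- ends option parsing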

        • arendjr · 2 months ago

          In fairness, this also happens to me when I write the bash script myself 😂

        • kameecoding@lemmy.world · 2 months ago

          Yes, I had never written a piece of code that didn’t do what I thought it would before LLMs, no sir.

    • SpaceNoodle@lemmy.world · 2 months ago

      Yeah, an LLM can quickly parrot some basic boilerplate that’s shown up in its training data a hundred times.

    • henfredemars@infosec.pub · 2 months ago

      For building a quick template that I can tweak to my needs, it works really well. I just don’t find bash to be an intuitive scripting language.
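
      For concreteness, a rough sketch of the kind of boilerplate template meant here; the option and argument names are invented for illustration, not taken from any particular script:

        #!/usr/bin/env bash
        # Generic starting point: strict mode plus minimal argument handling.
        set -euo pipefail

        usage() {
            echo "Usage: $(basename "$0") [-v] <input-file>" >&2
            exit 1
        }

        verbose=0
        while getopts ":v" opt; do
            case "$opt" in
                v) verbose=1 ;;
                *) usage ;;
            esac
        done
        shift $((OPTIND - 1))

        [[ $# -eq 1 ]] || usage
        input=$1

        if (( verbose )); then
            echo "Processing $input" >&2
        fi

        # ... the actual work gets filled in per script ...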

    • ewenak@jlai.lu · 2 months ago

      If (or rather, when) the script gets too complicated, AI could also convert it to Python.

      I’ve tried it at least once, and it did a pretty good job, although I had to tell it to use some dedicated libraries instead of calling programs with subprocess.