How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t even ship CRUD apps with that kind of consistency, and people think this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

  • Zikeji · 6 months ago

    Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can, but who has no real understanding of how to actually code and is just good at mimicry.

    So it’s helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it’s not going to do the job itself.

    • deweydecibel@lemmy.world · 6 months ago (edited)
      “So it’s helpful for saving time typing some stuff”

      Legitimately, this is the only use I’ve found for it. If I need something extremely simple and I’m feeling too lazy to type it all out, it’ll do the bulk of it, and then I just go through and edit out all the little mistakes.
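
      To give a made-up illustration of what I mean (the function, the names, and the bug here are invented, not actual Copilot output): I type a signature and docstring, the completion fills in the body, and then I fix one small but real mistake.

          # Hypothetical Python example. The suggested body used
          # range(0, len(items), size - 1), which silently overlaps chunks;
          # one small edit (size - 1 -> size) and it's correct:
          def chunk_list(items, size):
              """Split a list into consecutive chunks of at most `size` items."""
              return [items[i:i + size] for i in range(0, len(items), size)]

          print(chunk_list([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]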

      And what gets me is that any time I read all of the AI wank about how people are using these things, it feels like they’re leaving out the part where they have to edit the output too.

      At the end of the day, we’ve had this technology for a while; it’s just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer it in the right direction. Now it’s just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.