• @JDubbleu
      5 months ago

      That was a pretty interesting read. However, I think it conflates correlation with causation a little too strongly. The overall vibe of the article is that developers who use Copilot are writing worse code across the board. I don’t think that’s necessarily the case, for a few reasons.

      The first is that Copilot is just a tool, and like any tool it can easily be misused. It definitely makes programming accessible to people it wasn’t accessible to before. We have to keep in mind that it’s letting a lot of people who are very new to programming build large programs they otherwise couldn’t have built. It will also be relied on more heavily by newcomers, because it’s a more useful tool to them, but it will let them learn more quickly too.

      The second is that they use a graph with an unlabeled y-axis to show an increase in reverts, and never indicate whether it’s measuring raw lines of code or a percentage of lines of code. That matters, because Copilot lets people write a fuck ton more code. It legitimately makes me write at least 40% more. Any increase in revisions is partly just a function of writing more code. If anything, I feel like it leads to me reverting a smaller percentage of lines, because it forces me to reread the code the AI outputs multiple times to ensure its validity.
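      To make that point concrete, here is a toy calculation with made-up numbers (the revert rates are purely illustrative, not from the article): if total output grows by ~40%, the absolute number of reverted lines can go up even while the fraction of reverted lines goes down.

```python
# Hypothetical numbers (not from the article) showing how raw revert
# counts can rise even while the revert *rate* falls, once total
# output grows by ~40%. Integer arithmetic keeps the results exact.

baseline_lines = 10_000                     # lines written without Copilot
copilot_lines = baseline_lines * 14 // 10   # ~40% more code written

baseline_rate = 50   # reverted lines per 1000 written (5%)
copilot_rate = 40    # reverted lines per 1000 written (4%)

baseline_reverts = baseline_lines * baseline_rate // 1000  # 500
copilot_reverts = copilot_lines * copilot_rate // 1000     # 560

# More raw reverts despite a lower revert rate -- which is exactly why
# an unlabeled y-axis (raw count vs. percentage) matters.
print(baseline_reverts, copilot_reverts)  # 500 560
```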

      This ultimately comes down to the developer who’s using the AI. It shouldn’t be writing massive, complex functions. It’s just an advanced, context-aware autocomplete that happens to save a ton of typing. Sure, you can let it run off and write massive parts of your code base, but that’s akin to hitting the next-word suggestion on your phone keyboard a few dozen times and expecting something coherent.

      I don’t see it much differently than when high-level languages first became a thing. The introduction of Python allowed a lot of people who would never have written code in their lives to immediately jump in and be productive. Both provide accessibility to more people than the tools before them, and I don’t think that’s a bad thing even if there are some negative side effects. Besides, anything that really matters should have thorough code reviews and strict standards. If janky AI-generated code is getting into production, that is a process issue, not a tooling issue.

      • Anton Plankton
        4 months ago

        @JDubbleu You raise a couple of interesting, perhaps somewhat conflicting points.
        I’d like to comment on some:

        1. Accessibility:
          it’s a good thing - to a degree. It makes a big difference whether you’re just writing your own little project, and if you know your limits, then AI is a great help. Knowing those limits, however, may require skills that some AI users never develop because of the abstraction AI provides.
        • Anton Plankton
          4 months ago

          @JDubbleu
          2. The amount of code might be a big factor in the number of bugs. A compiler won’t care, but if you have to read and maintain code, it’s best to have as little as possible. Once we let an AI take care of maintenance, we’re probably done for 😄

          3. A tool is only as good as its user. I second that. But then there are better and worse tools for any given job.
            I think this is the biggest issue with any tool, but with AI especially: how to get people to use it responsibly.