• conciselyverbose@sh.itjust.works

    Yes, mentioning AI in a context where it adds literally nothing to the conversation is a bad thing. AI is exactly none of GitHub’s value.

    LLMs are dogshit tech that doesn’t scale.

    • smooth_tea@lemmy.world

      > AI is exactly none of GitHub’s value.

      Because nobody is using Co-pilot happily?

      > LLMs are dogshit tech that doesn’t scale.

      It’s irrelevant whether it scales or not. Your insinuation that this is all a pointless endeavor destined for failure is incredibly short-sighted.

      • conciselyverbose@sh.itjust.works

        Zero people are on GitHub because of Copilot.

        It’s not irrelevant whether it scales. It’s astronomical power use that lowers both software development quality and efficiency, with no path to getting better, because brute force doesn’t work.

        We’ll eventually have actual AI. It absolutely will not come from an LLM.

        • smooth_tea@lemmy.world

          > Zero people are on GitHub because of Copilot.

          Zero people were on the internet because of Google. Zero people were into photography because of digital sensors. And zero people took the trip to work because of cars.

          Ridiculous argument.

          > It’s not irrelevant whether it scales.

          It is irrelevant because it is a straw man. They didn’t say LLM, they said AI.

          Like I said, it’s really trendy to go “LLMs just predict the next letter!”, but please get over yourself: this is not an insightful argument, just chest-beating and pedantry over nothing.
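          For what it’s worth, the “just predict the next letter” caricature really does fit in a few lines. A purely illustrative toy sketch (a character-bigram counter; real LLMs predict tokens from learned distributions over billions of parameters, not raw counts):

```python
# Toy illustration only: a character-level "next letter" predictor.
# Real LLMs learn token distributions; this just counts successors.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which character tends to follow each character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Greedily pick the most frequent successor of ch, or '' if unseen."""
    if ch not in counts:
        return ""
    return counts[ch].most_common(1)[0][0]

model = train_bigram("the theme then there")
print(predict_next(model, "t"))  # prints "h": here 't' is always followed by 'h'
```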

          • conciselyverbose@sh.itjust.works

            LLMs are what you’re advocating for, because it’s what Copilot is. It doesn’t lead to better software, it doesn’t lead to more efficient development, and it doesn’t have a meaningful path to improvement because it’s already obscenely far beyond diminishing returns. All for obscene energy draws to zero benefit.

            • smooth_tea@lemmy.world

              > LLMs are what you’re advocating for, because it’s what Copilot is.

              I was afraid you’d say this, but I gave you the benefit of the doubt. It doesn’t matter what Copilot is: you tripped over the word “AI”, then reduced it to LLMs, and have now come full circle by saying Copilot is an LLM.

              I think my original response to you was that your argument was short-sighted, and this latest comment just underlines that: you have issues with what AI is now, not with what it is becoming.

              > It doesn’t lead to better software, it doesn’t lead to more efficient development

              Eventually it will be all we need to write software.

              > All for obscene energy draws to zero benefit.

              Oldmanyellsatcloud.jpg

              I’ve gotten plenty of benefit out of LLMs, and so have millions of people; maybe you’re doing it wrong? Why do you think this absurd amount of power usage can be justified? Don’t you think interest and actual usage are the reason?

              • conciselyverbose@sh.itjust.works

                That’s the “AI” GitHub uses, the one they’re referring to, and it’s a stronger reason not to use their platform than to use it.

                Every attempt to demonstrate that LLMs improve productivity in software development fails miserably and shows the opposite. They’re not capable of it.

                The entire point of code is to clearly and effectively communicate what you want. It is easier and more efficient than using natural language, once you learn. If you can’t communicate your desires through the language of code, you will do a worse job with natural language, because natural language is imprecise by definition.
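                For example (a hypothetical illustration, not from GitHub or Copilot): the English request “sort the files by size” leaves several decisions unstated, while the code version forces every one of them to be made explicitly:

```python
# Hypothetical example: "sort the files by size" in English doesn't say
# ascending or descending, or whether "size" means bytes or name length.
# The code leaves no such ambiguity.
files = {"notes.txt": 120, "demo.mp4": 700_000, "readme.md": 80}

# Explicit choices: sort by byte count, largest first.
by_size = sorted(files.items(), key=lambda kv: kv[1], reverse=True)
print([name for name, _ in by_size])  # prints ['demo.mp4', 'notes.txt', 'readme.md']
```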

                It can’t be justified. It’s a pure speculative bubble that every company in the space is burning ridiculous piles of cash on in hopes that they eventually land a moonshot that makes their product better than dogshit.

                • smooth_tea@lemmy.world

                  > That’s the “AI” GitHub uses, the one they’re referring to, and it’s a stronger reason not to use their platform than to use it.

                  You’re missing the point. You tripped over the word “AI”, then equated it to just an LLM, and on top of that you claim that nobody is getting any use out of it. Not only is your argument circular, it’s also based on a false premise.

                  > The entire point of code is to clearly and effectively communicate what you want.

                  No, the point of code is to arrive at software that does what you want. Currently, we have to describe what our software needs to do, then mangle code into doing what we want. AI, and even an LLM, has the ability to take over everything after we provide the description of what we want, and even to write the tests to make sure it does that. The billions spent on bug hunting, quality assurance, acceptance testing, and liability cases clearly show that code is not easier than natural language, something we start learning before we’re even born.

                  But Copilot and others are not just tools that spit out code; they are a replacement for search engines that can not only instantly provide a relevant answer, but also explain their reasoning and go back and forth about details that would otherwise take multiple Google searches and trawling through different websites and forums to maybe distill an answer. It goes without saying that this interface with what “the internet” knows is a major step forward in how we find and apply relevant information.

                  > natural language is imprecise by definition

                  But so is code, unless we write it to be precise. And it is far easier and more productive to define what that precision needs to be than it is to write and test the code ourselves. A project without unit tests costs half as much as a project with tests; that alone should tell you something about the idea that precise code is easy. Every bit of software starts by being defined in natural language anyway. It goes without saying that if code and test generation is automated after that initial step, productivity increases massively.
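                  Defining “what the precision needs to be” amounts to writing the tests as the specification. A minimal hypothetical sketch (function and names invented for illustration):

```python
import unittest

def normalize_username(raw):
    """Hypothetical function under test: trim, lowercase, collapse spaces."""
    return " ".join(raw.strip().lower().split())

class NormalizeUsernameSpec(unittest.TestCase):
    # Each assertion pins down one choice a prose spec might leave open.
    def test_trims_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_collapses_inner_whitespace(self):
        self.assertEqual(normalize_username("a   b"), "a b")

if __name__ == "__main__":
    unittest.main()
```

Whether a generator writes the implementation or a human does, the tests are the precise part of the contract.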

                  > Every attempt to demonstrate that LLMs improve productivity in software development fails miserably and shows the opposite. They’re not capable of it.


                  > The main finding was that programmers who did not use AI completed the task in 160.89 minutes (2.7 hours) on average, whereas the programmers who had AI assistance completed the job in 71.17 minutes (1.2 hours). The difference between the two groups was statistically significant at the level of p = 0.0017.

                  https://www.nngroup.com/articles/ai-programmers-productive/

                  Just one example by the way…

                  • conciselyverbose@sh.itjust.works

                    You’re missing the point. I’m responding specifically to them celebrating trash tech.

                    No, AI does not have that ability, and likely will not for an extremely long time.

                    No, code is not imprecise. It does exactly what you tell it to every time.

                    The study you’re linking completely ignores code quality. In the real world, you get outputs faster, then spend 100x longer than you saved getting to a worse-quality output that nobody, including the developer, understands.

                    If you don’t fully understand literally every single line you submit, it’s bad code. By definition.