According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS), and it’s happening faster than anyone predicted.

Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even without fully reviewing or understanding the code they produce. But there’s a hidden cost: vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge built up over decades.

Open-source projects rely on community support to survive. They’re collaborative efforts where the people who use them give back, whether in time, money, or knowledge, to help keep them maintained. Humans have to come in to fix bugs and maintain libraries.

Archive: http://archive.today/sgl5M

  • towerful · 16 hours ago

    Probably not relevant to the article, but I had to rant. I’m drunk, and suffering!

    I’m trying the old vibe coding, except with actual specs. I feel like I have to. I hate it.

    I think refining the spec/prompt with Claude makes sense. I found it helped me crystallise my spec and highlight gaps & pitfalls.
    At which point, I should’ve just coded it.
    I’d have known what it does, and it would be exactly what I needed.
    But I figured I’d see what Claude could do.

    So, my “dev -> staging -> prod” database migration system, with planning, apply, and rollback stages, was built by Claude (the project isn’t in a production state yet, so I figured it was a good thing to try AI on).
    There are system tables that should migrate fully (but allow for review if they are structurally different), and there are data tables that should only have their schema altered (without touching the data). It’s complex enough that it would take me a week or so to write, but maybe I could spend a day or two writing a spec and see what Claude could do.
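
    To make that concrete, here’s a minimal hypothetical sketch of the classification rule I had in mind (none of these names are from the actual generated code):

    ```python
    # Hypothetical sketch only -- not the generated code.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class TableKind(Enum):
        SYSTEM = auto()  # migrate fully: schema and rows
        DATA = auto()    # alter schema only; never touch rows

    @dataclass
    class PlannedAction:
        table: str
        kind: TableKind
        ddl: list[str] = field(default_factory=list)  # schema changes to apply
        copy_rows: bool = False                       # only ever True for system tables
        needs_review: bool = False                    # hold for manual review

    def plan_table(table: str, kind: TableKind, schema_differs: bool) -> PlannedAction:
        """The 'planning' stage decision for a single table."""
        action = PlannedAction(table=table, kind=kind)
        if kind is TableKind.SYSTEM:
            action.copy_rows = True
            # structurally different system tables get flagged for review
            action.needs_review = schema_differs
        elif schema_differs:
            # data tables: generate ALTERs only, leave the rows alone
            action.ddl.append(f'-- ALTER statements for "{table}" go here')
        return action

    print(plan_table("helloThere", TableKind.DATA, schema_differs=True))
    ```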

    It wanted to use Python, told me that migra is outdated, and tried to generate something that would do it all itself.
    I told it to use results (the migra replacement). After convincing it that results was the actual library name, that it can produce schema differences, and that it has a different API than migra (it kept trying to use it as if it were migra, and… so much wasted time!), I finally got working code. And all the logs, CLI output, etc. reported SUCCESS.
    Except that tables named like “helloThere” were ignored, because it hadn’t considered that table names might contain uppercase characters. So I got it to fix that. And now it’s working code.
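
    (For anyone who hasn’t hit this, and assuming Postgres here since migra is a Postgres tool: unquoted identifiers get folded to lowercase, so mixed-case tables have to be double-quoted, and name comparisons have to stay case-sensitive. A tiny illustration, not the generated code:)

    ```python
    def quote_ident(name: str) -> str:
        """Double-quote a Postgres identifier, escaping any embedded quotes."""
        return '"' + name.replace('"', '""') + '"'

    # Unquoted, Postgres folds the name:
    #   SELECT * FROM helloThere    -- actually looks up "hellothere"
    #   SELECT * FROM "helloThere"  -- hits the real table
    assert quote_ident('helloThere') == '"helloThere"'
    print(quote_ident('helloThere'))
    ```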

    It looks nicely complex with sensible file names.
    Looking at the code: there are no single responsibilities, no extensibility. It’s actually a fucking mess. Variables passed all over the place, things that should live in the current command context being randomly regenerated, config hard-coded, a function randomly imported from another file (and that’s literally the only place the function is used) because… I don’t know. Something like the sketch below.
    It’s just a bunch of functions that do stuff, named to be impressive, in files that are named impressively (ignoring their content). And maybe there are context-related functions in the same file, or maybe there are “just does something that sounds similar” functions.
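
    In miniature, the hard-coded-config / regenerated-context complaint looks like this (hypothetical names, not the real code):

    ```python
    from dataclasses import dataclass
    import uuid

    # What the generated code did, roughly:
    def apply_schema_bad() -> str:
        db_url = "postgres://localhost/dev"  # config hard-coded inside a helper
        run_id = str(uuid.uuid4())           # regenerated here instead of passed in
        return f"applying to {db_url} (run {run_id})"

    # What I wanted: one command context, created once and passed down.
    @dataclass
    class CommandContext:
        db_url: str
        run_id: str

    def apply_schema(ctx: CommandContext) -> str:
        """Only applies schema changes; config and run identity come from ctx."""
        return f"applying to {ctx.db_url} (run {ctx.run_id})"

    ctx = CommandContext(db_url="postgres://localhost/dev", run_id=str(uuid.uuid4()))
    print(apply_schema(ctx))
    ```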

    The logging?
    Swallows actual errors and prints a canned “expected error” message instead. I just want the actual errors!
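
    The difference, in miniature (hypothetical stand-in, not the generated code):

    ```python
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("migrate")

    def apply_migration() -> None:
        # stand-in for the real apply step
        raise RuntimeError('column "id" missing on "helloThere"')

    # Anti-pattern (what the generated code did): swallow the real error
    try:
        apply_migration()
    except Exception:
        logger.error("migration failed: expected schema mismatch")  # real cause lost

    # What I want: the actual exception and traceback, then fail loudly
    try:
        apply_migration()
    except Exception:
        logger.exception("migration failed")  # logs the real error with traceback
        raise
    ```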

    It’s hard to analyse the code. It’s not that it doesn’t make sense from a single entry point. It’s more that “what does this function do” doesn’t make sense in isolation.
    “Where else might this be a problem” has to go to Claude, because like fuck could I find it myself; it’s probably in a functionally similar function with a slightly different name and parameters, or some bullshit.

    If I didn’t know better and just looked at similar GitHub projects… yeah, it would seem appropriate.

    It is absolutely “manager-pleasing complexity”.
    But it does work, after telling it how to fix basic library issues.

    Now that it works, I’m getting Claude to refactor it into something more “make sure functions are relevant to the class they’re in”. I have low expectations.

    I don’t EVER want to have to maintain or extend Claude-generated code.
    I have felt that all the way through this experiment.
    It looks right. It might actually work. But it isn’t maintainable.
    I’m gonna try and get it to be maintainable. There has to be a way.
    Maybe my initial 4-page spec accidentally said “then randomise function location”.

    I’m gonna try Claude for other bits and pieces.
    Maybe I’ll draw some inspiration from this migration project that Claude wrote (if I can find all the bits) and refactor it into something maintainable (now that I have reference implementations that seem to work, no matter how convolutedly spread out they are).

    • thenextguy@lemmy.world · 15 hours ago

      I’m going to say something that might make you throw up in your mouth a little bit…

      I think you’re not supposed to care that you cannot maintain the code. That’s the AI’s job now.

      You are now a manager. You write the specs and complain when things don’t work right.

      It’s a different job entirely.

    • zoe@piefed.social · 16 hours ago

      We use AI to help code at work. It does a good job at creating boilerplate, but I definitely wouldn’t use it to create super intricate stuff. It will create code that definitely doesn’t work and act so confident about it.

      It’s really funny when you can literally see where it is calling incorrect functions and tell it no. It fixes it, and then 2 or 3 prompts later it goes back to calling those wrong functions again.

      • towerful · 16 hours ago

        I haven’t experienced the “2 or 3 prompts later” regression.
        I have found it helps to ask it to queue changes until I ask it to work through the queue.
        Maybe ask it to produce a single file for review, or tell it how to modify a file (and why; it likes an explanation).
        But always stack up changes, ask it to review its queue of changes, etc.
        Then ask it to do the whole thing in a one-er.
        Although, this is the first time Claude has said such a request will take a long time (instead of showing it’s working/thinking and doing it in 20 minutes).
        Maybe this is the point where it starts forgetting why it did things.

    • Chris@feddit.uk · 15 hours ago

      I don’t EVER want to have to maintain or extend Claude-generated code.

      I think this is the crux of it. I’ve experimented with getting AI to fix things and create code blocks. It’s really impressive what it can do.

      Except, yeah, the code is an utter mess. On the surface it looks good, but when you dig into it it’s totally unmaintainable.

      I got it to write some Grok patterns for logging software, mostly because there were so many variants of log lines from one piece of software that doing it all manually without missing something would have been a nightmare (and I’m lazy and wanted to see if AI could be used for this).

      It did it, and they work (after a few revisions). However, it created a separate pattern for every little variation. If I’d done it by hand I would have used more complex patterns, but fewer of them. As a result, any tiny problem requires changing about four different patterns.
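
      Since Grok patterns are essentially named regexes, here’s the same trade-off in plain Python regex (made-up log shapes, not the real ones):

      ```python
      import re

      # One near-duplicate pattern per variant (roughly what the AI produced):
      variant_a = re.compile(r"(?P<ts>\S+) ERROR (?P<msg>.+)")
      variant_b = re.compile(r"(?P<ts>\S+) \[(?P<thread>[\w-]+)\] ERROR (?P<msg>.+)")
      variant_c = re.compile(r"(?P<ts>\S+) ERROR \((?P<code>\d+)\) (?P<msg>.+)")

      # One pattern with optional groups covers all three, so a fix lands once:
      combined = re.compile(
          r"(?P<ts>\S+)"
          r"(?: \[(?P<thread>[\w-]+)\])?"  # optional thread name
          r" ERROR"
          r"(?: \((?P<code>\d+)\))?"       # optional error code
          r" (?P<msg>.+)"
      )

      for line in (
          "2024-01-01T00:00:00 ERROR disk full",
          "2024-01-01T00:00:01 [worker-1] ERROR disk full",
          "2024-01-01T00:00:02 ERROR (507) disk full",
      ):
          match = combined.match(line)
          assert match is not None
          print(match.groupdict())
      ```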