I started my IT career in 2011, and I have enjoyed it. I have gotten to do a lot of interesting stuff and meet interesting people, and I will treasure those memories forever.
But starting with crypto, general computing turned from being:
“Wow, this machine can run so many apps at the same time!” or “Holy shit, those graphics look epic!” or “Amazing, this computer has really sped up that annoying task!”
To being:
Yo! Look at how many numbers I can generate!
That brought my enthusiasm down severely, but hey, figuring out solutions to problems was still fun.
Then came AI/LLMs.
And with it, a mountain of slop.
Finding help with an issue has gone from googling and reading help articles written by someone with an actual brain to wading through rephrased manuals that only provide working answers to semi-standard questions.
Add to that a general push to use AI in anything and everything, no matter how little relevance it holds for the task at hand.
I also remember how AI was sold to us at first: we were promised it would do away with boring paperwork so we could get on with our actual jobs.
What did we get? An AI that takes the fun and creative parts, leaving the paperwork for the workers.
We got an AI that we have to assume is stealing our work and data at every turn, giving us shit work back, while we are told that we should applaud it and be grateful for it.
And the worst thing, the worst thing, is that people seem happy with it. I keep getting requests to buy another Copilot license or to add another AI service to our tenant. I am sick of it!
We got an AI that somehow has slithered onto the golden throne and can’t be questioned.
I am not able to leave the tech market at this time, but I will focus on more tangible hobbies going forward.
This year, I have given myself a project: I will try to build a model railway in a suitcase, a tiny Z-scale world.
I have never done anything remotely like it, but I feel like I need something physical to take my mind off tech.
Sorry for the rant, but I just came off a high from realizing and putting words to my feelings.


That’s on me, I meant the equivalent of a “trust me bro”, in this case an anecdotal “me and the people I know all say…”
Yes, in the context you provided it makes sense; as a response to my question, which asked specifically for examples of larger projects/workflows, it does not.
I’m not here to argue either. I asked a specific question and your answer didn’t really address any of it; I was just pointing that out.
I too find it frustrating but it seems for different reasons.
I really really dislike the way it’s being sold as a solution for things it’s in no way a solution for.
They do certain things fine, good even, but blanket statements like “their code is great” without appropriate qualifiers contribute to validating these bullshit sales-oriented claims of task competency.
1: agreed
2: then I think you are missing the fundamental limitations of the current approaches, but we can agree to disagree on this.
3: see 2
I agree with jobs being on the chopping block, though I think that’s in large part due to poor due diligence and planning by management. But that’s nothing new; the same thing has happened and is still happening with offshoring (throwing more people at a problem generally won’t solve design and governance issues).
I also think the current systems aren’t capable of being a viable replacement for anything above junior-level stuff, if that (not that that doesn’t present its own problems).
I think the difference in opinion comes from my belief that LLMs and the current tooling around them aren’t fundamentally capable of replacing existing resources, not that they just don’t have the power yet.
Putting increasingly large compute in a calculator won’t magically make it a spreadsheet application.
To your point then: what are your thoughts on this project? https://github.com/anthropics/claudes-c-compiler I’m not particularly interested in this use case right now but it seems more in line with what you’re interested in.
I think it shows a lot of limitations but also a lot of potential. I don’t personally think the AI needs to get the code perfect on the first go – it has to be compared to humans and we definitely don’t do that.
Yes, of course. I think it’s important to look past the blowhards and think about what it’s actually doing: that is the perspective I’m trying to talk about this from.
My initial thoughts are that my original ask was this:
and the example you provided was a toy project used as a publicity stunt.
On the technical side, I don’t know enough Rust to be able to weigh in on the technical accuracy of the project.
The ability of current LLMs to churn out something that looks relatively good at first glance isn’t my point of contention; most of us know they can do that.
I’m just looking for a single medium to large project that is successfully being used in production (close to production is also fine) that was created with significant LLM involvement.
There is so much talk around this that the fact I haven’t come across any mention of a successful deliverable (in the context I mentioned) raises all sorts of red flags for me, personally.
I’m not trying to catch you out. It’s just that I haven’t seen one, so I was wondering if you have. If you haven’t, that’s fine; it’s not a trap.
Iterative progress is generally the way of things, but most non-trivial agentic workflows already work with iterative code generation and testing, so expecting a correct solution at the end of that process is more reasonable than you might think.
The difference between people and LLMs is the types of interactions you have with them. You can ask an LLM to explain why it did something, but if you’ve ever tried that, I’m sure you can understand why it’s not the same as the kind of answer you’d get from a person.
As am I. I’m not against LLM usage; I’m against the pretense that it has capabilities it does not, in fact, have.
Selling something on the basis of it being able to do something it can’t do is where the term “snake oil salesman” comes from.