This article nails it. Just because LLMs don't deliver flawless code, that doesn't mean you shouldn't use their help. Dismissing them outright seems completely short-sighted to me. Just don't rely on a single one, and don't expect them to provide the exact solution to your prompt at the first attempt, exactly as often happens when collaborating with another human being.
"Just because code looks good and runs without errors doesn't mean it's actually doing the right thing. No amount of meticulous code review, or even comprehensive automated tests, will demonstrably prove that code actually does the right thing. You have to run it yourself!
Proving to yourself that the code works is your job. This is one of the many reasons I don't think LLMs are going to put software professionals out of work.
LLM code will usually look fantastic: good variable names, convincing comments, clear type annotations and a logical structure. This can lull you into a false sense of security, in the same way that a grammatically correct and confident answer from ChatGPT might tempt you to skip fact checking or applying a skeptical eye.
The way to avoid those problems is the same as how you avoid problems in code by other humans that you are reviewing, or code that you've written yourself: you need to actively exercise that code. You need to have great manual QA skills.
A general rule for programming is that you should never trust any piece of code until you've seen it work with your own eyes, or, even better, seen it fail and then fixed it."
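The "seen it fail and then fixed it" rule can be made concrete with a small, hypothetical sketch. The `median` function and its bug below are invented for illustration (they are not from the article): the buggy version has everything that makes LLM output look trustworthy, type hints, a docstring, tidy names, and only actually running it exposes the flaw.

```python
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    # Looks plausible, but this is wrong for even-length lists.
    return ordered[len(ordered) // 2]

# Exercising the code yourself is what exposes the bug:
print(median([1, 3, 2]))      # odd length: returns 2, looks fine
print(median([1, 2, 3, 4]))   # even length: returns 3, should be 2.5

# ...and after seeing it fail with your own eyes, you fix it:
def median_fixed(values: list[float]) -> float:
    """Return the median, averaging the two middle values for even lengths."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median_fixed([1, 2, 3, 4]))  # 2.5
```

A code review could easily wave the first version through; a two-line manual check cannot.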
https://lnkd.in/dVV7knTD
#AI #GenerativeAI #SoftwareDevelopment #Programming #PromptEngineering #LLMs #Chatbots
@[email protected] Oh fuck off. Why the fuck should we use these bullshit generators only to have to go through the garbage they produce with a fine-tooth comb? #FuckAI
@joachim: You have every right not to use LLMs. Personally, I find them a great help for improving my productivity. Every person has their own reasons for using or not using generative AI. Nevertheless, I'm afraid that this technology, like many other productivity-increasing technologies, will become a fact of our daily lives. The issue here is how best to adapt it to our own advantage. Open-source LLMs should be preferred, of course. But I don't think that mere stubbornness is a very good strategy for dealing with new technology.
"If we don't use AI, we might be replaced by someone who will. What company would prefer a tech writer who fixes 5 bugs by hand to one who fixes 25 bugs using AI in the same timeframe, with a 'good enough' quality level? We've already seen how DeepSeek AI, considered on par with ChatGPT's quality, almost displaced more expensive models overnight due to the dramatically reduced cost. What company wouldn't jump at this chance if the cost per doc bug could be reduced from $20 to $1 through AI? Doing tasks more manually might be a matter of intellectual pride, but we'll be extinct unless we evolve."
https://idratherbewriting.com/blog/recursive-self-improvement-complex-tasks
@[email protected] Climate change, you fuckhead.
@[email protected] Just because Silicon Valley companies over-engineer their models, that doesn't mean it must necessarily be so… Look at DeepSeek: https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md
Thanks for this thoughtful and nuanced opinion.