I didn’t say never copy and paste. I’m saying that when you push a commit, you should understand what all the LOC in that commit do (not counting vendored dependencies). If you don’t understand how something works, like crypto (not sure what Hamilton or Euler refers to in this context), ideally you would use a library. If you can’t, you should still understand the code well enough to explain how it implements the underlying algorithm. For example, if you’re writing a CRC function, you should be able to explain how your function implements the CRC operations, even if you don’t have a clue why those operations work.
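To make that concrete, here’s a minimal sketch in Go of the standard bitwise CRC-32 (in real code you’d reach for the stdlib’s hash/crc32). Every line is mechanically explainable as a shift or a conditional XOR, even if you can’t explain the polynomial-division theory behind why those steps detect corruption:

```go
package main

import "fmt"

// crc32ieee computes CRC-32 (reflected IEEE polynomial) bit by bit.
// Each step is explainable mechanically: fold in a byte, then for each
// bit either shift, or shift and XOR in the polynomial.
func crc32ieee(data []byte) uint32 {
	const poly = 0xEDB88320 // reversed IEEE polynomial
	crc := ^uint32(0)       // start with all bits set
	for _, b := range data {
		crc ^= uint32(b) // fold the next byte into the low bits
		for i := 0; i < 8; i++ {
			if crc&1 != 0 {
				crc = (crc >> 1) ^ poly // "divide out" the polynomial
			} else {
				crc >>= 1
			}
		}
	}
	return ^crc // final inversion
}

func main() {
	// "123456789" is the conventional CRC check string; this prints CBF43926.
	fmt.Printf("%08X\n", crc32ieee([]byte("123456789")))
}
```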
I said you need to understand what the code you wrote does (as in, the LOC that git blame will blame on you). Not that you need to fully understand what the code it calls does. It should be pretty obvious from context that I’m referring to copy-pasting code from stack overflow or an LLM or whatever without knowing what it does.
If you are submitting work, you should understand how the code you’re submitting works. Sure, you don’t have to know exactly how the code it calls works, but if your submission contains a block of code and you have no clue how that block works, that’s a problem.
There’s a huge difference between copy-pasting code you don’t understand and using a library with the assumption that the library does what it says on the tin. At the very least there’s a clear boundary between your code and not-your-code.
Are you seriously trying to equate “I don’t know which instructions this code is using” to “I copied code I don’t understand”? Are you seriously trying to say that someone who doesn’t know how to write
x = a + b
in assembly doesn’t understand that code?
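A hedged illustration of the point (the instruction names in the comment are only roughly what a compiler might emit; real output depends on compiler, flags, and target):

```go
package main

import "fmt"

func main() {
	a, b := 2, 3
	// You fully understand this line as "x gets the sum of a and b"
	// without knowing that a compiler might lower it to something like
	// MOVQ/ADDQ on registers (illustrative; actual codegen varies).
	x := a + b
	fmt.Println(x)
}
```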
If you’re adding code you don’t understand to a production system, you should be fired.
Edit: I assumed it was obvious from context that I’m referring to copy-pasting code from stack overflow or an LLM or whatever without knowing what it does, but apparently that needs to be said explicitly.
I guess I just don’t see enough memes to have picked up on that
Marketing. People expect to see different things on a website vs Twitter/X so the same content won’t perform the same on each. So for a business it makes sense to post different things on your website vs Twitter/X.
I’m not sure what to tell you. I just don’t see what you do. And I never bother to look at a meme close enough to notice the kind of details the other user pointed out.
> `nasm` is an assembler though, not a ‘language’

That’s like saying “clang is a compiler though, not a language”. It’s correct but completely beside the point. Unless you’re writing a compiler, “cross-platform assembler” is kind of an insane thing to ask for. If you want to learn low-level programming, pick a platform. If you are trying to write a cross-platform program in assembly, WHY!? Unless you’re writing a compiler. But even then, in this day and age using a cross-platform assembler is still kind of an insane way to approach that problem; take a lesson from decades of progress and do what LLVM did: use an intermediate representation.
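To sketch what that buys you (toy types and names, nothing to do with LLVM’s actual API): the front end emits one architecture-neutral instruction, and small per-target back ends lower it to each ISA’s syntax.

```go
package main

import "fmt"

// addIR is an architecture-neutral "dst = a + b" over virtual registers.
type addIR struct{ dst, a, b string }

// lowerX86 lowers to a two-operand, Intel-syntax style: copy, then accumulate.
func lowerX86(i addIR) string {
	return fmt.Sprintf("mov %s, %s\nadd %s, %s", i.dst, i.a, i.dst, i.b)
}

// lowerARM64 lowers to AArch64's three-operand style.
func lowerARM64(i addIR) string {
	return fmt.Sprintf("add %s, %s, %s", i.dst, i.a, i.b)
}

func main() {
	ir := addIR{dst: "rX", a: "rA", b: "rB"}
	fmt.Println("x86-64-ish:")
	fmt.Println(lowerX86(ir))
	fmt.Println("aarch64-ish:")
	fmt.Println(lowerARM64(ir))
}
```

One IR, N back ends; the cross-platform surface is the IR, not the assembler.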
> I’ve genuinely never had a problem with it. If something is wrong, it was always going to be wrong.
Have you worked on a production code base with more than a few thousand lines of code? A bug is always going to be a bug, but 99% of the time it’s far harder to answer “how is this bug triggered” than it is to actually fix the bug. How the bug is triggered is extremely important.
> Why is it preferable to have to write a bunch of boilerplate rather than just deal with the stack trace when you do encounter a type error?
If you don’t validate types, you can easily end up in a situation where you write a value of the wrong type to a variable, and then some later event retrieves that value, tries to act on it, and throws an exception. Now you have a stack trace for the event handler, but the actual bug is in the code that set the variable, so it’s not in your stack trace. Maybe the stack trace is enough to figure out which variable caused the problem, and maybe it’s obvious where that variable was set, but that can become very difficult very fast in a moderately complex application. Obviously you should write tests, but tests will never catch every weird thing a program might do, especially when a human is involved. When you’re working on a moderately large and complex project that needs any degree of reliability, catching errors as early as possible is always better.
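A minimal Go sketch of that failure mode (all names invented for illustration): the bug lives in setConfig, but the panic and its stack trace show up later, in handleEvent.

```go
package main

// state mimics a dynamically typed bag of values.
var state = map[string]any{}

func setConfig() {
	state["retries"] = "3" // BUG: stores a string where an int is expected
}

func handleEvent() {
	n := state["retries"].(int) // panics here: interface conversion error
	_ = n
}

func main() {
	setConfig()
	// ...arbitrarily later, possibly from an unrelated code path...
	handleEvent() // the stack trace points here, not at setConfig
}
```

The panic’s trace shows handleEvent and main; setConfig, where the actual mistake lives, appears nowhere in it.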
And relying on runtime validation is a horrific way to write production code.
Assembly languages are always architecture-specific. That’s kind of their defining feature. Assembly is readable machine code.
“Assume it’s a map and treat it like a map, and then catch the type error if it’s not.” Paraphrased from actual advice by Guido on how you should write Python. Python isn’t a bad language, but the philosophy that comes along with it is so fucked.
What I mean is, from the perspective of performance they are very different. In a language like C, where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (on Linux; not sure about Windows). In comparison, creating a new ‘user thread’ in Go is exceedingly cheap. Creating tens of thousands of goroutines is feasible. Creating tens of thousands of kernel threads is a problem.
Also, it still uses kernel threads, just not one for every single goroutine.
This touches on the other major difference. There is no direct connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel’s scheduler, which adds a lot of complexity and non-determinism. But a Go program uses the same number of kernel threads (assuming the same hardware and you don’t mess with GOMAXPROCS) regardless of the number of goroutines it creates, and the goroutines are scheduled by the Go runtime rather than directly by the kernel.
Key point: they’re not threads, at least not in the traditional sense. That makes a huge difference under the hood.
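A quick sketch of the difference in cost, using nothing but the standard library: 50,000 goroutines is routine, while 50,000 kernel threads would be a real problem on most systems.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Kernel threads running Go code are capped by GOMAXPROCS...
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// ...while goroutines are cheap enough to spawn by the tens of thousands.
	const n = 50000
	results := make(chan int, n) // buffered so no send ever blocks
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int) { // each goroutine starts with a small (KB-scale) stack
			defer wg.Done()
			results <- v * v
		}(i)
	}
	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Printf("spawned %d goroutines; sum of squares = %d\n", n, sum)
}
```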
Really? Huh, TIL. I guess I’ve just never run into a situation where that was the bottleneck.
Definitely not a guarantee; bad devs will still write bad code (and junior devs might want to let their seniors handle concurrency).
I’ve had success with Claude, but there’s always a layer of separation: I ask it to do something, read what it produced, and decide whether it’s garbage or not, then rewrite or discard as necessary. Though counting by LOC, I’ve mainly used it for writing tests.