0 Posts · 93 Comments · Joined 1 year ago · Cake day: June 25th, 2023
  • For one thing: don’t bother with fancy log destinations. Just log to stderr and let your daemon manager take care of directing it where it needs to go (systemd made life a lot easier in the Linux world).

    Structured logging is overrated since it means you can’t just do the above.

    Per-module (filterable) logging is quite useful, but it must be automatic (use __FILE__ or __name__ or whatever your language supports) or you will never actually do it. All semi-reasonable languages support some form of either macros-which-capture-the-current-module-and-location or peek-at-the-caller-module-name-and-location.


    One subtle part of logging: never conditionally defer a computation that can fail. Many logging APIs ultimately support something like:

    if (log_level >= INFO) // or <= depending on how levels are numbered
        do_log(INFO, message, arguments...)
    

    This is potentially dangerous - if logging at that level is disabled, the code is never tested, and trying to enable logging later might introduce an error when evaluating the arguments or formatting them into the message. Also, if logging at that level is disabled, side effects might not happen.

    To avoid this, do one of:

    • never use the if-style deferring, internally or externally. Instead, squelch the I/O only. This can have a significant performance cost (especially at the DEBUG level), which is why the if-style API exists in the first place.
    • ensure that your type system can statically verify that runtime errors are impossible in the conditional block. This requires that you are using a sane language and logging library.
    • run your testsuite at every log level, ensure 100% coverage of log code, and hope that the inevitable logic bug doesn’t have an unexpected dynamic failure.

  • o11c to Linux Memes · Without fail.. · 1 year ago

    To be fair, Secure Boot is actively hostile toward dual-booting in the first place. Worst of all, it might seem to work for a while, then suddenly start causing errors sometime later.


  • o11c to Linux · is Cinnamon sort of a shitshow? · 1 year ago

    From my experience, Cinnamon is definitely highly immature compared to KDE. Very poor support for virtual desktops is the thing that jumped out at me most. There were also some problems regarding shortcuts and/or keyboard layout, I think, and probably others, but I only played with it for a couple of weeks while limited to a LiveCD.



  • As a rule, an Integrated Development Environment is less than the sum of its parts. Whether it is worth using depends on:

    • how many of the parts you actually need to use
    • to what extent you would benefit from using features of a part that are not exposed by the IDE
    • whether you find it easier to switch between different windows (involving tools with different UIs) for different tasks, or use the same window for different tasks.
    • whether the language you’re using is statically or dynamically typed, and whether it’s syntactically “simple” or complicated (this affects the difficulty of “manual” refactoring)


  • git is, however, bad with files that don’t have meaningful small binary diffs. And the page size for SQLite database files is small enough that this is in fact a problem (though it is not nearly as bad as with already-compressed files).

    Disabling VACUUM can give a rough idea of what git actually has to deal with. But you really shouldn’t.


  • Is it even meaningful to speak of an AST interpreter at that point? What does the actually-executed code look like?

    One thing to test would be to see if you get the massive slowdown when you exceed each cache size.

    And of course, outside of meta-compilation contexts, the (rough!) rule of thumb remains: AST walker is 10x slower than bytecode, which is 10x slower than compiling to native code.





  • o11c to Programming · How do you feel about TypeScript? · 1 year ago

    Obviously the actual programs are trivial. The question is, how are the tools supposed to be used?

    So you say to use deno? Out of all the tutorials I found telling me which tools to use, that wasn’t one of them (I really thought this “typescript” package would be the thing I was supposed to use; I just checked again on a hot cache and it was 1.7 seconds real time, 4.5 seconds CPU time, and only 2.9 seconds if I pin everything to a single core). And I swear that just this week I saw people saying “seriously, don’t use deno”. It also doesn’t seem to address the browser use case at all, though.

    In other languages I know, I know how to write 4 files (the fib library and 3 frontends), and compile and/or execute them separately. I know how to shove all of them into a single blob with multiple entry points selected dynamically. I know how to shove just one frontend with the library into a single executable. I know how to separately compile the library and each frontend, producing 4 separate artifacts, with the library being dynamically replaceable. I even know how to leave them as loose files and execute them directly (barring things like C). I can choose between these things all in a single codebase, since there are no hard-coded project filenames.

    I learned these things because I knew I wanted the ability from previous languages I’d learned, and very quickly found how the new language’s tools supported that.

    I don’t have that for TS (JS itself seems to be fine, since I have yet to actually need all the polyfill spam). And every time I try to find an answer, I get something that contradicts everything I read before.

    That is why I say that TS is a hopelessly immature ecosystem.



  • It’s because unicode was really broken in Python 2, and a lot of the obvious breakage showed up when people mixed the two types (str and unicode). So they did fix some of the obvious breakage, but they left a lot of the subtle breakage (in addition to breaking a lot of existing correct code and introducing a completely nonsensical bytes class).


  • o11c to Git · Why Git is hard · 1 year ago

    I’ve only ever seen two parts of git that could arguably be called unintuitive, and they both got fixes:

    • git reset seems to do two unrelated things for some people. Nowadays git restore exists.
    • the inconsistent difference between a..b and a...b commit ranges in various commands (illustrated below). This is admittedly obscure enough that I would have to look up the manual half the time anyway.
    • I suppose we could also call the fact that man git foo didn’t use to work unintuitive.

    The tooling to integrate git submodule into normal tree operations could be improved though. But nowadays there’s git subtree for all the people who want to do it wrong but easily.


    The only reason people complain so much about git is that it’s the only VCS that’s actually widely used anymore. All the others have worse problems, but there’s nobody left to complain about them.


  • o11c to Python · Python developers won’t let go of Python 2 · 1 year ago

    Python 2 had one mostly-working str class, and a mostly-broken unicode class.

    Python 3, for some reason, got rid of the one that mostly worked, leaving no replacement. The closest you can get is to spam surrogateescape everywhere, which is both incorrect and has a significant performance cost - and that still leaves several APIs unavailable.

    Simply removing str indexing would’ve fixed the common user mistake if that was really desirable. It’s not like unicode indexing is meaningful either, and now large amounts of historical data can no longer be accessed from Python.



  • o11c to Programmer Humor · What’s your most obscure binding? · edited · 1 year ago

    Unfortunately both of those are used in common English or computer words. The only letter pairs not used are: bq, bx, cf, cj, dx, fq, fx, fz, hx, jb, jc, jf, jg, jq, jv, jx, jz, kq, kz, mx, px, qc, qd, qg, qh, qj, qk, ql, qm, qn, qp, qq, qr, qt, qv, qx, qy, qz, sx, tx, vb, vc, vf, vj, vm, vq, vw, vx, wq, wx, xj, zx.

    Personally I have mappings based on <CR>, and press it twice to get a real newline.


  • The problem is that there’s a severe hole in the ABCs (abstract base classes): there is no distinction between “container whose elements are mutable” and “container whose elements and size are mutable”.

    (relatedly, there’s no distinction for supporting slice operations or not, e.g. deque)


    1. Sure, there are libraries and tools for some parts. But the question is: do they actually add value, or do they subtract it due to the cost of integrating them? There are only a couple of parts where one is likely worth it (and even there, you’ll need to understand what it’s doing): the UDP reliability layer, the encryption layer, and possibly the event loop (I say that mostly because io_uring is weird; previously I wouldn’t have included it, and on the client side it isn’t needed, but that depends on how much code you share between server and client). Packet framing and serialization are really easy to do yourself (see the sketch after this list), and most existing tools (which usually generate code anyway) have weird limitations or overhead.
    2. “should do more” is a general rule to prevent easy DoS attacks. It can mean packet sizes, it can mean computation, it can mean hard-coded timer delays, it can mean all sorts of things. Don’t make it easy for a malicious client to waste shared resources. This is mostly relevant for early in the connection … but keep in mind that it’s possible for a malicious client to spy on a legitimate client and try to take over - that is, connections aren’t actually a real thing (this applies even for TCP).