Computers can create and destroy entire worlds in one second. One second is multiple billions – billions! – of executed instructions. One second is an eternity for a computer.

Yet I sometimes wonder whether one second is the smallest unit of time most programmers think in. Do they know that you can run entire test suites in 1s and not just a single test? Do they know that one second is slow?

Seeing how slow modern software can be, on modern hardware, just makes me sad sometimes. I really feel this person’s pain, including the slow creeping insanity of “how is nobody else noticing/bothered by this”. 😓

  • Tvkan@feddit.de

    complains about losing one second

    literally has a “sign up for my newsletter!” overlay that appears in front of the article, while you’re reading the article

    • Vlyn@lemmy.world

      And he talks about a time before the internet while looking, what, 30-40 in that image?

      Yes, things are bloated and slow, it’s annoying. But the article didn’t add much or go into the reasons why.

      • Walnut356

        People in different socioeconomic situations/locations experience new technology at different points in time. Just because the internet existed doesn’t mean they (or anyone in their immediate vicinity) had internet, state-of-the-art computers, etc.

    • qwertyasdef

      It’s a Substack thing, not added by the author

      • abhibeckert@lemmy.world

        Sure, but if they’d chosen a better publishing platform, that time-wasting overlay wouldn’t be there.

        Maybe if the author chose better tools, they wouldn’t have to wait around so much? I don’t have to wait one second for a unit test to run, for example – and I don’t have particularly fast hardware…

          • snowe

            They only waited half a second before signing up

      • Paradox@lemdro.id

        The author chose to host on a platform that does that, so it is their fault.

    • Gamma

      I was going to say “At least I can click ‘Continue reading’ and it actually goes away immediately” but actually, no. This is still enshittification, I’ve just gotten used to shittier versions of it.

    • zlatko

      Yeah, there was a bit of discussion about that on Lobsters :)

  • the_sisko@startrek.website

    It’s a cathartic, but not particularly productive, vent.

    Yes, there are stupid time.sleep(1) lines written into some tests and codebases. But there are also test setUp() methods that do expensive work per test, so the suite’s runtime grows too quickly as tests are added. There are places where a smarter algorithm existed and the original author said “fuck it” and wrote the N^2 one. There are container-oriented workflows that take a long time to spin up just to run the same tests. There are stupid DNS resolution timeouts because you didn’t realize the third-party library you pulled in would try to reach an API that isn’t available in your test environment… And the list goes on…
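
    To make the setUp() point concrete, a minimal sketch – load_fixture() here is a hypothetical stand-in for any expensive per-test work:

        import unittest

        def load_fixture():
            ...  # imagine ~500 ms of I/O or parsing here

        class SlowTests(unittest.TestCase):
            def setUp(self):
                # Runs before EVERY test method, so the suite slows
                # down by the full fixture cost for each test added.
                self.data = load_fixture()

        class FasterTests(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # Runs ONCE for the whole class – only safe when the
                # tests don't mutate the shared fixture.
                cls.data = load_fixture()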

    I feel like it’s the “easy way out” to create some boogeyman, the stupid engineer who writes slow, shitty code. I think it’s far more likely that these issues come about because a capable person wrote software under one set of assumptions, and then the assumptions changed, and now the code is slow because the assumptions were violated. There’s no bad guy here, just people doing their best.

    • zlatko

      I think it is a bit more than that.

      You point out two things:

      • the “fuck it” algorithm
      • the hidden DNS request

      So, obviously: if you wrote the “fuck it” algorithm, you fix it. If you found the DNS library problem, you find a better lib or something.

      But if you take the stance “fuck it, there’s always something”, you don’t even have a chance of finding out. If your test suite ran in 10 seconds and suddenly takes 10 more, you’d notice. If it already ran for 10 minutes, you would not.

      If you had a webapp or something that always opened “fast” and it suddenly gets twice as slow, you’ll notice. But if it already started out slow, you won’t notice (or care, or both) when it gets even worse.

      I think that’s the point of the article. If we all dug in and fixed a little bit, eventually we’d have fast apps, fast tests, fast whatever. If you accept that things suck, you’ll make it triply worse. It is a conscious effort to be fast.
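
      One way to make that conscious effort stick is to encode the budget in the tests themselves. A minimal sketch – time_budget() is a hypothetical helper, and render_home_page() stands in for whatever you’re guarding:

          import time
          from contextlib import contextmanager

          @contextmanager
          def time_budget(seconds):
              # Fail loudly when a code path exceeds its budget, so
              # "it got slower" becomes a red test, not a vague feeling.
              start = time.perf_counter()
              yield
              elapsed = time.perf_counter() - start
              assert elapsed < seconds, f"took {elapsed:.3f}s, budget {seconds}s"

          # Usage in a test (render_home_page is hypothetical):
          # with time_budget(0.5):
          #     render_home_page()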