• atheken · 8 upvotes · 1 year ago (edited)

    The problem with the article is that it conflates hard real-time and low-latency requirements. Most UIs do not require hard real-time; even soft real-time is a nice-to-have, and users will tolerate some latency.

    I also think the author handwaves “too many blocking calls end up on the main thread.”

    Hardly. This is like rule zero for building gui apps. Put any non-trivial or blocking work on a background thread. It was harder to do before mainstream languages got good green thread/async support, but it’s almost trivial now.
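
    For instance, a minimal sketch of that pattern with Kotlin coroutines (the `loadReportBlocking` and `showReport` names are hypothetical, and the `uiScope` is assumed to be tied to a main/UI dispatcher the way Android or kotlinx-coroutines-swing provide one):

    ```kotlin
    import kotlinx.coroutines.CoroutineScope
    import kotlinx.coroutines.Dispatchers
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.withContext

    // Hypothetical blocking call (file, network, database) standing in for any
    // non-trivial work that must not run on the UI thread.
    fun loadReportBlocking(): String {
        Thread.sleep(2_000) // simulate slow I/O
        return "report contents"
    }

    // Click handler: stays on the main thread only long enough to launch the work.
    fun onGenerateReportClicked(uiScope: CoroutineScope, showReport: (String) -> Unit) {
        uiScope.launch {                                // runs on the scope's dispatcher
            val report = withContext(Dispatchers.IO) {  // hop to a background thread pool
                loadReportBlocking()
            }
            showReport(report)                          // back on the original dispatcher
        }
    }
    ```

    The handler returns immediately; only the `showReport` callback touches the UI once the background work is done.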

    I agree that there are still calls that could have variable response times (such as virtual memory being paged in or out), but even low-end machines are RAM-rich and SSDs are damn fast. The kernel is likely also doing some optimization to page stuff in from disk for the foreground app.

    It’s nice to think through the issue, but I don’t think it’s quite as dire as the author claims.

    • lysdexicOP · 1 upvote, 2 downvotes · 1 year ago (edited)

      > The problem with the article is that it conflates hard real-time and low-latency requirements. Most UIs do not require hard real-time; even soft real-time is a nice-to-have, and users will tolerate some latency.

      I don’t think that’s a valid takeaway from the article.

      The whole point of the article is that if a handler in a GUI application runs for too long, the application noticeably blocks and the user experience degrades.

      The real-time mindset is critical for staying aware of this failure mode: each handler should have a time budget (compute, waiting for I/O, etc.), beyond which the user experience degrades.

      The point is that GUI applications, just like real-time applications, must be designed with these execution budgets in mind, and once the budgets are not met, the application needs to be redesigned to avoid these issues.

      • atheken · 4 upvotes · 1 year ago

        Which is what putting most of this stuff in the background accomplishes. It necessitates designing the UX with appropriate feedback. Sometimes you can’t make things go faster than they go: a web request, or pulling data from the ancient disk a user happens to have. You as an author don’t have control over these; the OS doesn’t even have control over them.
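
        As a rough sketch of what that feedback can look like with Kotlin coroutines (the `ResultsView` interface and `fetchFromRemoteBlocking` are hypothetical stand-ins for whatever your toolkit and data source actually are):

        ```kotlin
        import kotlinx.coroutines.CoroutineScope
        import kotlinx.coroutines.Dispatchers
        import kotlinx.coroutines.launch
        import kotlinx.coroutines.withContext

        // Hypothetical UI hooks -- whatever your toolkit uses for progress and results.
        interface ResultsView {
            fun showSpinner()
            fun hideSpinner()
            fun showData(data: String)
            fun showError(message: String)
        }

        // Hypothetical slow external dependency: a web request, an ancient disk, etc.
        fun fetchFromRemoteBlocking(): String {
            Thread.sleep(3_000) // latency neither the app nor the OS controls
            return "remote data"
        }

        fun onRefreshClicked(uiScope: CoroutineScope, view: ResultsView) {
            uiScope.launch {
                view.showSpinner()   // immediate feedback on the main thread
                try {
                    val data = withContext(Dispatchers.IO) { fetchFromRemoteBlocking() }
                    view.showData(data)
                } catch (e: Exception) {
                    view.showError("Could not load data: ${e.message}")
                } finally {
                    view.hideSpinner()
                }
            }
        }
        ```

        The spinner shows up immediately, the slow call runs off the main thread, and the user ends up with either data or an error message instead of a frozen window.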

        Should software that depends on external resources refuse to run?

        The author is talking about switching to some RTOS because of this, which is extreme. OS vendors have spent decades sorting out the “Beachball of Death” issue, which is exceedingly rare on modern systems thanks to better multitasking support and dramatically faster hardware.

        Most GUI apps are not hard real-time, and trying to make them so would be incredibly costly and would severely limit other things users regularly prefer (like keeping 100 apps and browser tabs open).

        • lysdexicOP · 1 upvote · 1 year ago (edited)

          > Which is what putting most of this stuff in the background accomplishes.

          The part you’re missing entirely is the complexity that’s hidden behind the weasel word “most”.

          The majority of event handlers in a GUI app do not do anything complex, computationally expensive, or blocking. They do things like setting flags, triggering changes in UI state (i.e., showing/hiding/updating widgets), bumping counters, etc.

          No one in their right mind would ever consider going through the trouble of doing this stuff in separate threads/processes. “Most” handlers run perfectly fine on the main thread.

          Nevertheless, software changes, and today’s onClick handler that merely sets a flag to true/false is tomorrow required to emit a metrics event, switch between treatments depending on the state of a feature flag or A/B test, write a setting to disk, or something like that.

          How do you draw the line in the sand that tells you whether this handler should run on the main thread, trigger a fire-and-forget background task, or be covered by a dedicated user flow with a complete storyboard?

          That’s the stuff that’s hand-waved away with weasel words like “most”.

          This blog post delivers a crisp mental model for telling which approach is suitable: follow the real-time computing rulebook, acknowledge that each and every handler has a time budget, and if a handler overspends its budget then it needs to be refactored.
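
          As a sketch of what that rulebook can look like in code (the `runWithBudget` helper and its 8 ms default are my assumptions, roughly half of a 16 ms frame at 60 Hz, not something from the post):

          ```kotlin
          import kotlin.system.measureNanoTime

          // Hypothetical helper: wrap a main-thread handler in an explicit time budget.
          // The 8 ms default is an assumption (about half of a 16 ms frame at 60 Hz).
          fun runWithBudget(name: String, budgetMillis: Long = 8, handler: () -> Unit) {
              val elapsedMillis = measureNanoTime(handler) / 1_000_000
              if (elapsedMillis > budgetMillis) {
                  // Overspent: this handler is a candidate for a background task
                  // or for a redesign with explicit progress feedback.
                  System.err.println("handler '$name' took $elapsedMillis ms (budget $budgetMillis ms)")
              }
          }

          // Today's trivial handler stays well under budget; the day it grows a disk
          // write or a network call, the overspend report flags it for refactoring.
          fun onToggleClicked(state: MutableMap<String, Boolean>) = runWithBudget("onToggleClicked") {
              state["darkMode"] = !(state["darkMode"] ?: false)
          }
          ```

          Once a handler starts tripping its budget, that is the signal to move the work to a fire-and-forget task or a dedicated user flow, instead of guessing.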