• @[email protected]
    15 points · 8 months ago

    Does a file lookup really take that long? I’d say the trick was to have just plain old HTML with no bloat and you’re golden.

    • agilob
      29 points · 8 months ago

      Blog content was stored in memory and served with zero-copy to the socket, so yeah, it’s way faster. That was before the days of php-fpm and OPcache that we’re using now. Back then things were deployed and communicated over TCP sockets (TCP to Rails, Django, or PHP) or read from disk, when the best HDDs were 5600 rpm, and even those were rare on shared hosting.
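
      To make the “content lives in RAM” part concrete, here’s a minimal Go sketch (not the original setup; file names and the port are invented). Note this isn’t true zero-copy: writing a byte slice still goes through userspace, whereas the real thing would normally use something like sendfile(2) or splice(2).

      ```go
      // Minimal sketch, not the original blog engine: every page is read from
      // disk once at startup and afterwards served straight from process memory.
      // File names and the port are invented for illustration.
      package main

      import (
          "log"
          "net/http"
          "os"
      )

      var pages = map[string][]byte{}

      func main() {
          // Preload everything up front; a real setup would walk a content directory.
          for _, name := range []string{"index.html", "about.html"} {
              data, err := os.ReadFile("content/" + name)
              if err != nil {
                  log.Fatalf("preload %s: %v", name, err)
              }
              pages["/"+name] = data
          }
          pages["/"] = pages["/index.html"]

          http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
              body, ok := pages[r.URL.Path]
              if !ok {
                  http.NotFound(w, r)
                  return
              }
              w.Header().Set("Content-Type", "text/html; charset=utf-8")
              w.Write(body) // served from RAM; the kernel still copies this to the socket
          })
          log.Fatal(http.ListenAndServe(":8080", nil))
      }
      ```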

      • @[email protected]
        6 points · 8 months ago

        Couldn’t the HTML be loaded into memory at the beginning of the program and then served whenever? I understand that reading from disk will be slow, but that only happens once, at the beginning.

        • MeanEYE
          3 points · 8 months ago

          There are plenty of sins people still commit when it comes to web development, but reading from disk is not the bottleneck. If a site is slow, it’s most likely not disk read times, database access, or anything similar, but silly code that generates the page. It’s almost always the code generating the page that’s at fault.

    • MeanEYE
      8 points · 8 months ago

      The answer is no. The more a file is used, the longer it sits in the kernel’s filesystem cache. Getting a file from that cache versus having it in process memory is only a few function calls away, all of which take a few microseconds, which is negligible compared to network latency and other small issues that might be present in the code.

      On a few of our services we deliberately store client configuration in JSON files instead of some sort of database. Accessing the config is insanely fast, and the kernel makes sure the file stays cached, so reading it always gives you the latest version quickly. That service currently handles around 100k requests a day, but we’ve had peaks in the past of almost a million requests a day.
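
      A rough sketch of that pattern in Go (the file path, struct, and endpoint are invented, not our actual service): the JSON is simply re-read on every request, and after the first read the kernel’s page cache keeps it hot, so each read costs microseconds while always reflecting the latest write.

      ```go
      // Rough sketch, assuming one JSON config file per client; names and fields are invented.
      package main

      import (
          "encoding/json"
          "fmt"
          "log"
          "net/http"
          "os"
      )

      type ClientConfig struct {
          Plan      string `json:"plan"`
          RateLimit int    `json:"rate_limit"`
      }

      // loadConfig re-reads the file on every call; repeated reads are served
      // from the kernel page cache, so this stays in the microsecond range
      // while always returning whatever was last written to disk.
      func loadConfig(path string) (ClientConfig, error) {
          var cfg ClientConfig
          data, err := os.ReadFile(path)
          if err != nil {
              return cfg, err
          }
          err = json.Unmarshal(data, &cfg)
          return cfg, err
      }

      func main() {
          http.HandleFunc("/plan", func(w http.ResponseWriter, r *http.Request) {
              cfg, err := loadConfig("clients/acme.json") // hypothetical path
              if err != nil {
                  http.Error(w, "config unavailable", http.StatusInternalServerError)
                  return
              }
              fmt.Fprintf(w, "plan=%s rate_limit=%d\n", cfg.Plan, cfg.RateLimit)
          })
          log.Fatal(http.ListenAndServe(":8080", nil))
      }
      ```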

      Besides, when it comes to human interaction and websites, you only need to get first contentful paint within one second. Anything above 1.5 s will feel sluggish, but below 1 s it feels instant. That leaves you, on average, around 800 ms to send data back. Plenty of time, unless you have a dependency nightmare and parse everything on every request.