Representing seconds requires at least 6 bits to cover the range 0 to 59. Five bits are just not enough: 2^5 = 32, which can only represent 0 up to 31 seconds.

According to K.N. King:

You may be wondering how it's possible to store the seconds - a number between 0 and 59 - in a field with only 5 bits. Well, DOS cheats: it divides the number of seconds by 2, so the seconds member is actually between 0 and 29.

How does that make any sense?
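
For reference, here is a minimal sketch of the kind of bit-field layout King is describing, with the seconds stored pre-divided by 2. The struct and field names are my own, not necessarily King's exact example:

```c
#include <stdio.h>

/* Sketch of the DOS/FAT time layout King describes: 5 bits for
   seconds/2, 6 bits for minutes, 5 bits for hours -- 16 bits total.
   Struct and field names are illustrative. */
struct dos_time {
    unsigned int seconds : 5;  /* 0..29, i.e. actual seconds / 2 */
    unsigned int minutes : 6;  /* 0..59 */
    unsigned int hours   : 5;  /* 0..23 */
};

int main(void) {
    struct dos_time t;
    t.hours   = 23;
    t.minutes = 59;
    t.seconds = 59 / 2;  /* 59 becomes 29; the odd second is lost */

    printf("stored %02u:%02u:%02u (seconds field holds %u)\n",
           (unsigned)t.hours, (unsigned)t.minutes,
           (unsigned)t.seconds * 2, (unsigned)t.seconds);
    return 0;
}
```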

  • FigMcLargeHuge@sh.itjust.works

    If you have ~~32~~ 5 bits to work with, you keep one bit to determine which 30-second half of the minute you are in. If that bit is 0, you are counting 1-30, and if that bit is 1, then you are counting 31-60.

    • LalSalaamComrade@lemmy.mlOP

      But at the time MS-DOS was released, there were only 16-bit microprocessors, right? 32-bit x86 processors came way later, around 1985, I think?

      • FigMcLargeHuge@sh.itjust.works

        I don’t know the specifics, but why would 5 bits be a problem on a 16-bit machine? Shit, my mistake, I should have said ‘If you have 5 bits to work with’. I will correct it.

  • MachineFab812@discuss.tchncs.de

    Sounds like DOS doesn’t keep time in increments any smaller than 2-second intervals. Double your 0-to-29 value whenever asked to provide time with seconds. Done.

    Note: this is off the top of my head, with no in-depth knowledge of actual DOS time-keeping beyond that provided in the OP. I’m interested to see how many versions DOS went through with this time-keeping method, and what value any of this provides beyond querying the system’s Real Time Clock.
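
    In other words, something like this sketch (the function names are mine, not any real DOS interface):

    ```c
    #include <stdio.h>

    /* Sketch: a 2-second-resolution seconds field. Names are illustrative. */
    unsigned seconds_to_field(unsigned seconds) { return seconds / 2; } /* 0..59 -> 0..29 */
    unsigned field_to_seconds(unsigned field)   { return field * 2; }   /* 0..29 -> 0..58 */

    int main(void) {
        unsigned field = seconds_to_field(45);
        printf("45 s is stored as %u and reported as %u s\n",
               field, field_to_seconds(field));
        return 0;
    }
    ```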

    • LalSalaamComrade@lemmy.mlOP

      This seems to be the likely answer. I’m assuming it has something to do with the technological limitation of 16 bits. 1981 saw the first 32-bit non-x86 microprocessor (the Intel iAPX 432), and MS-DOS was designed with 16-bit processors like the 8086 in mind. Perhaps the maximum integer size was limited to 16 bits as well, so they had to fit the time into a non-padded 16-bit struct, in which hours were allotted 5 bits (2^5 = 32 ≥ 24 hours) and minutes were allotted 6 bits (2^6 = 64 ≥ 60 minutes). The remaining 5 bits were assigned to seconds, and the best precision that left was a 2-second interval. Is that fair reasoning?
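
      For what it’s worth, the FAT/DOS time word (if I remember the layout correctly) packs exactly those three fields into 16 bits: hours in the top 5 bits, minutes in the middle 6, seconds/2 in the low 5. A sketch with shifts and masks, no bit-fields required:

      ```c
      #include <stdio.h>
      #include <stdint.h>

      /* Pack hh:mm:ss into one 16-bit FAT-style time word:
         bits 15-11 = hours, bits 10-5 = minutes, bits 4-0 = seconds / 2. */
      static uint16_t pack_dos_time(unsigned h, unsigned m, unsigned s) {
          return (uint16_t)((h << 11) | (m << 5) | (s / 2));
      }

      int main(void) {
          uint16_t t = pack_dos_time(13, 37, 42);
          printf("hours=%u minutes=%u seconds=%u\n",
                 (unsigned)((t >> 11) & 0x1F),   /* 5 bits */
                 (unsigned)((t >> 5) & 0x3F),    /* 6 bits */
                 (unsigned)(t & 0x1F) * 2);      /* 5 bits, doubled back */
          return 0;
      }
      ```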

      • Thorry84@feddit.nl

        Actually, versions of MS-DOS were released for the MSX platform, which had an 8-bit Zilog Z80 CPU.

        The number of bits mentioned when referring to processors usually refers to the size of the internal registers. That size only determines how many bits can be processed at the same time, so it doesn’t really matter how big the registers are: processing a wider value just takes more steps, but it isn’t impossible.

      • MachineFab812@discuss.tchncs.de

        Could do only-odd numbers if you wanted to be squirrely about it, but I think most people would get more inquisitive upon never seeing a zero.

  • jdnewmil@lemmy.ca

    There is an implied 6th bit that is zero. Timestamps have a two-second minimum resolution.

    • davel [he/him]@lemmy.ml

      That makes sense. Presumably the missing bit is the least significant one, and DOS rounds the seconds down and stores only even values.
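
      A tiny illustration of that rounding behaviour (the divide by 2 truncates, so odd second counts come back as the even number just below):

      ```c
      #include <stdio.h>

      int main(void) {
          /* The low bit is dropped on store, so odd values read back
             as the even value just below them. */
          for (unsigned s = 56; s <= 59; s++)
              printf("actual %u -> stored %u -> recovered %u\n",
                     s, s / 2, (s / 2) * 2);
          return 0;
      }
      ```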