• @Dazawassa
    English
    15 • 1 year ago

    One of my lecturers mentioned that a way they would get around this was to store all values as ints and then, when displaying them, insert a “.” character two places before the final digit.
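    A minimal sketch of that trick (the function name is just illustrative): keep the amount as an integer and only introduce the decimal point at the formatting step.

```python
def format_cents(cents: int) -> str:
    """Render an integer number of cents as a decimal string,
    placing the '.' two digits before the end."""
    sign = "-" if cents < 0 else ""
    cents = abs(cents)
    return f"{sign}{cents // 100}.{cents % 100:02d}"

print(format_cents(19999))  # → 199.99
print(format_cents(5))      # → 0.05
```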

    • @[email protected]
      English
      15 • 1 year ago

      Yeah, this works especially well for currencies (effectively doing all calculations in cents/pennies), as you do need perfect precision throughout the calculations, but the final result gets rounded to two-digit precision anyway.
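      To illustrate (the prices are made up): every intermediate value is an exact integer, and only the final display step produces a decimal.

```python
prices_cents = [1999, 499, 12550]   # $19.99, $4.99, $125.50
total_cents = sum(prices_cents)     # exact integer arithmetic, no drift
display = f"{total_cents // 100}.{total_cents % 100:02d}"
print(display)  # → 150.48
```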

      • @[email protected]
        English
        -1 • 1 year ago

        Quite a horrible hack. Most modern languages have a decimal type that handles the rounding. And if not, you should just use rounding functions to round to two digits when working with currency.

        • @em7
          English
          21 • edit-2 • 1 year ago

          Not sure what financial applications you develop, but what you suggest wouldn’t pass code review in any finance-related project I’ve seen.

          Using integers for currency-related calculations and formatting the output is not a dirty hack; it’s an industry standard, because floating-point arithmetic on contemporary hardware is never exact for decimal fractions (it can’t be, see https://en.wikipedia.org/wiki/IEEE_754 ), whereas integer arithmetic (or integers used to represent fixed-point values) always has the same level of precision across the whole range it can represent. You typically don’t want to round the numbers you work with; you need to round the result ;-) .
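          A quick demonstration of the imprecision being described: an IEEE 754 double cannot represent 0.1 exactly, so a naive running total drifts, while the same sum kept in integer cents stays exact.

```python
# Ten additions of 0.1 in binary floating point do not reach 1.0:
float_total = sum(0.1 for _ in range(10))
print(float_total == 1.0)   # False

# The same sum in integer cents (10 cents, ten times) is exact:
cent_total = sum(10 for _ in range(10))
print(cent_total == 100)    # True
```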

          • @[email protected]
            English
            7 • 1 year ago

            Phew. Sometimes I read things and think I’m going crazy. I work in ERP/accounting software and was sure the monetary data type I’ve been using was backed by integers, but the post you’re replying to had me second guessing myself…

        • @[email protected]
          English
          2 • 1 year ago

          Had to think about it, but yeah, I guess you can’t do division or non-integer multiplication directly with integer cents, as standard integer division just truncates the remainder, which forces you to decide how to round after every step.
          You could convert to a float for the division/multiplication, and you do get more efficient addition/subtraction as well as simpler de-/serialization, but in most situations it’s probably less trouble to use decimals.

          • @nous
            English
            4 • 1 year ago

            You do not want to use floats for any part of calculating money. The larger the value, the larger the error in them — not a trait you want when dealing with money. Fixed-point numbers/decimals/big ints are much better for this. If you want greater-than-cent precision, treat the values as fractions of a cent (i.e. move the decimal point over one more place, or however many places your application needs). The maths is the same no matter where you place the decimal point.
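            Two small illustrations of the points above. First, float error grows with magnitude — past 2**53 a double can’t even represent every integer, so adding one unit to a large enough balance is simply lost. Second, moving the decimal point for sub-cent precision: here values are tenths of a cent (“mills”, an illustrative choice), and the arithmetic is identical to the cents version, just scaled differently.

```python
# Above 2**53, consecutive integers are no longer representable:
big = float(2**53)
print(big + 1.0 == big)                 # → True: the added unit disappears

# Sub-cent precision by shifting the decimal point one more place:
price_mills = 19_999                    # $19.999, stored as integer mills
tax_mills = price_mills * 825 // 10_000 # 8.25% tax, still integer arithmetic
print(tax_mills)                        # → 1649 mills
```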

    • @[email protected]
      English
      -5 • 1 year ago

      Fixed-point notation. Before floats were invented, that was the standard way of doing it. You needed to keep your calculations within certain bounds.
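      A minimal sketch of fixed-point arithmetic, assuming a scale of 10**4: multiplying two scaled values doubles the scale, so the product must be rescaled back down — the step where you have to stay “within certain boundaries”, since the intermediate `a * b` can overflow smaller integer types.

```python
SCALE = 10_000            # four fractional digits

a = 15_000                # represents 1.5000
b = 25_000                # represents 2.5000
product = a * b // SCALE  # rescale: 37_500 represents 3.7500
print(product)            # → 37500
```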