Ever since people mastered counting, most of civilization has used a base-10 numbering system, thanks to the number of fingers most people have.
The problem
It's all good until you have to represent fractions. Our computers operate in binary, so every number is stored as a sum of powers of 2, and unfortunately only fractions whose denominators are powers of 2 can be represented exactly that way: 0, 0.25, 0.5, 0.75, 1, and so on.
All other numbers carry a small error in their representation. That usually isn't a problem, since those tiny amounts just get rounded away, but if you perform a lot of operations the small errors add up.
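You can see the stored error directly. Here is a minimal sketch in Python (any language using IEEE 754 doubles behaves the same way); the standard decimal module can print the exact value a float actually holds:

```python
from decimal import Decimal

# Decimal(float) exposes the exact binary value behind the float,
# before any decimal rounding is applied for display.
print(Decimal(0.5))  # 0.5 -- denominator is a power of 2, so it is exact
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```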
Let’s look at just 2 small examples:
Let’s say you are selling an article for $1.10 and a customer wants to buy 12 of them.
A human would calculate this as 12 * $1.10 = $13.20 and be done with it, while a computer would give you 13.200000000000001.
Now that's not nice! It isn't terrible either, because you can fix it with a simple rounding step.
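Reproducing this takes only a couple of lines; here is a quick Python sketch:

```python
price = 1.10
quantity = 12

total = quantity * price
print(total)            # 13.200000000000001

# For a single operation, rounding to two decimal places hides the error.
print(round(total, 2))  # 13.2
```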
For the second example, let's say you have upgraded your manufacturing process and can now sell the article for $0.83. Let's calculate how much you have saved your customers per article: $1.10 - $0.83 = $0.27. Nice! The computer, however, thinks that your savings are actually $0.27000000000000013.
This, again, is something we could live without.
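The same kind of sketch for the subtraction:

```python
old_price = 1.10
new_price = 0.83

# Neither 1.10 nor 0.83 is exactly representable in binary,
# so the tiny input errors survive the subtraction.
print(old_price - new_price)  # 0.27000000000000013
```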
Simple solution
Represent all of your money values as cents, or whatever the minor unit of your currency is.
Selling 12 items at $1.10 each becomes 12 * 110 = 1320 cents, and a simple conversion brings this back to $13.20. Since you are dealing with integers, the entire float problem goes away and you are safe from these potential miscalculations.
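Here is the whole flow as a short Python sketch; the helper format_dollars is just a name I made up for this example, and it assumes non-negative amounts:

```python
PRICE_CENTS = 110      # $1.10 stored as an integer number of cents
NEW_PRICE_CENTS = 83   # $0.83

def format_dollars(cents: int) -> str:
    """Render an integer cent amount as a dollar string, no floats involved."""
    dollars, remainder = divmod(cents, 100)
    return f"${dollars}.{remainder:02d}"

print(format_dollars(12 * PRICE_CENTS))               # $13.20
print(format_dollars(PRICE_CENTS - NEW_PRICE_CENTS))  # $0.27
```

Every intermediate value is an integer, so the arithmetic is exact; the only conversion happens when formatting the result for display.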
Another benefit of using integers over floats is that they are generally faster for the computer to work with, so if your application does a lot of calculations, you might see an improvement in that department as well.
Until next time, happy coding!