Coder Perfect

Double vs. decimal! – Which one should I use and when should I use it? [duplicate]

Problem

In C#, I keep seeing people use doubles. I have read elsewhere that doubles can lose precision. My question is: when should I use a double and when should I use a decimal? And which type is best suited for money calculations (i.e., amounts in excess of $100 million)?

Asked by Soni Ali

Solution #1

Always use decimal when dealing with money. That is what the type was made for in the first place.

Use decimal when numbers must add up exactly or balance. This includes any financial storage, calculations, scores, or other values that people might check by hand.

Use double for speed when the exact value isn’t required. This includes graphics, physics, and other physical-science computations where the data already has a limited number of significant digits.
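A minimal sketch of the difference (the class name and the 10-cent example are illustrative, not part of the original answer):

using System;

class MoneyTotals
{
    static void Main()
    {
        // Add one thousand 10-cent entries with each type.
        double doubleTotal = 0.0;
        decimal decimalTotal = 0.0m;

        for (int i = 0; i < 1000; i++)
        {
            doubleTotal += 0.1;    // 0.1 has no exact binary representation
            decimalTotal += 0.1m;  // 0.1m is stored exactly in base 10
        }

        Console.WriteLine(doubleTotal);   // prints something like 99.9999999999986
        Console.WriteLine(decimalTotal);  // prints 100.0
    }
}

The decimal total balances to the cent; the double total has already drifted after a thousand additions.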

Answered by David

Solution #2

Use decimal when you work with values in the range of 10^(±28) and you have expectations about the behaviour based on base-10 representations; basically, money.

Use double when you need relative accuracy over drastically different magnitudes (i.e. losing precision in the trailing digits of huge numbers isn’t a concern); double covers more than 10^(±300). Scientific computations are the best illustration here.

decimal, decimal, decimal

Accept no substitutes.

The most crucial problem is that double is implemented as a binary fraction, so it cannot exactly represent many decimal fractions (such as 0.1), and it has fewer significant digits overall (it is a 64-bit type, versus 128 bits for decimal). Finally, financial applications frequently have to adhere to specified rounding modes (sometimes mandated by law). decimal supports these; double does not.
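A small sketch of both points (the class name and the 2.345 figure are illustrative; MidpointRounding.ToEven and MidpointRounding.AwayFromZero are the standard .NET rounding options):

using System;

class RoundingDemo
{
    static void Main()
    {
        // 0.1 cannot be represented exactly as a binary fraction.
        Console.WriteLine(0.1 + 0.2 == 0.3);     // False with double
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True with decimal

        // Financial rules often dictate how midpoint values are rounded.
        decimal amount = 2.345m;
        Console.WriteLine(Math.Round(amount, 2, MidpointRounding.ToEven));       // 2.34 (banker's rounding)
        Console.WriteLine(Math.Round(amount, 2, MidpointRounding.AwayFromZero)); // 2.35
    }
}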

Answered by Michael Borgwardt

Solution #3

System.Single / float: 7 significant digits
System.Double / double: 15-16 significant digits
System.Decimal / decimal: 28-29 significant digits

I’ve been stung by using the wrong type for large amounts before (a good few years ago):

With a float, you run out of precision at around $1 million.
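A quick sketch of that limit (the figure is illustrative):

using System;

class FloatLimit
{
    static void Main()
    {
        float f = 1234567.89f;  // 9 significant digits, more than float's ~7
        Console.WriteLine(f);   // the cents are already inexact; the nearest float is 1234567.875
    }
}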

A monetary value of 15 digits, i.e. dollars-and-cents amounts up to roughly 9 trillion, is about as far as a double will take you. However, division and comparisons become more difficult (I’m no specialist in floating point and irrational numbers, as Marc points out), and mixing decimals and doubles causes problems:
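For instance, a minimal sketch of the mixing issue (the class name and figures are illustrative):

using System;

class MixingTypes
{
    static void Main()
    {
        decimal price = 99.99m;
        double rate = 1.1029;

        // decimal total = price * rate;       // does not compile: there is no implicit
        //                                     // conversion between decimal and double
        decimal total = price * (decimal)rate; // an explicit cast is required
        Console.WriteLine(total);              // 110.278971
    }
}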

The question “When should I use double instead of decimal?” has some similar and more detailed answers.

Using double instead of decimal for monetary purposes is a micro-optimization, at least that’s how I see it.

Answered by Chris S

Solution #4

The decimal type is used to represent exact values. A double represents approximate values.

USD: $12,345.67 (decimal)
CAD: $13,617.27 (decimal)
Exchange Rate: 1.102932 (double)
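As a minimal sketch of that split (the class name and the final rounding to whole cents are added assumptions, not part of the original answer):

using System;

class CurrencyConversion
{
    static void Main()
    {
        decimal usd = 12345.67m;         // exact amount of money: decimal
        double exchangeRate = 1.102932;  // approximate, measured rate: double

        // Bring the rate into the decimal domain only at the point of use,
        // then round the result back to whole cents for storage.
        decimal cad = Math.Round(usd * (decimal)exchangeRate, 2);

        Console.WriteLine(cad);          // the CAD amount, rounded to cents
    }
}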

Answered by Ian Boyd

Solution #5

Use decimal for money. It takes up a little more memory, but it doesn’t have the rounding issues that double does.

Answered by Clement Herreman

Post is based on https://stackoverflow.com/questions/1165761/decimal-vs-double-which-one-should-i-use-and-when