
Re: bcd arithmetic
Posted:
Jun 13, 1996 6:36 PM


In article <4pmtun$uto@news.microsoft.com> acnadc@microsoft.com (Dann Corbit) writes:
In article <4pllmk$h3@goanna.cs.rmit.EDU.AU>, rav@goanna.cs.rmit.EDU.AU says...
>Definitely not. Before floating-point hardware, there was
>only integer. Computations can be done in integer.
True enough, any floating-point calculation can be expressed in integer terms, but a new set of equally difficult problems arises. For instance, 1/3 gives TLOSS (total loss of significance) from a numerical standpoint in integer math. So calculations have to be analyzed very carefully to ensure that truncation error does not become dominant. And for a bank that compounds interest continuously at 7.3%, what is my interest at 12:32PM on March 14th? True enough, it is possible to do any sort of calculation using only integers, but it is not necessarily easy to get there.
If you're using "floating point" as a generic term having _nothing_ to do with the way languages like C and Fortran use the term, and the way basically all computers (certainly all used by MS software) use it, then you're basically right. _Decimal_ floating-point is a basic necessity for some of the kinds of problems you are talking about -- but _decimal_ floating-point is, usually, best implemented using either binary _integer_ or binary-coded _decimal_ arithmetic, NOT _binary_ floating-point arithmetic (which is the kind you actually mean if you're talking C, Fortran, computer hardware, and so on).
(Maybe you're advocating representing decimal floating-point via binary-coded decimal instead of binary -- that can make things like I/O faster, while slowing down intermediate computation in most cases, but it still doesn't in any way solve the general kinds of problems you outline above. And anything done in BCD can be done in integer, normally using less memory, sometimes less CPU time; otherwise the implementations offer basically equivalent behavior to the high-level code using them.)
If you think that using binary floating-point solves _any_ of the problems you show, you're _dangerously_ wrong. (I hope Microsoft is not employing you as a programmer -- I get the impression you don't know what you're talking about.) _No_ floating-point format in existence can handle irrationals with perfect precision -- even decimal floating-point buys you only the ability to precisely represent _decimal_ fractions (e.g. 7.3, but not 1/3).
Floating-point on modern computers is basically modeled by the following expression:
the value of a FP number, which has components (sign, exponent, fraction), is:
sign * (fraction * 2**exponent)
where sign is either +1 or -1, fraction is a nonnegative _integer_, and exponent is an _integer_.
It's the "2**exponent" part that uniquely identifies the context as _binary_ floating-point.
(We often model this a different way for _specific_ FP formats, denoting fraction as a value ranging from, e.g., 0 through .5 or 0 through 1.0, but, _fundamentally_, it is simply an _integral_ value that is shifted around one way or another, depending on implementation details. And those ranges are half-open -- they should be written [0,.5) and [0,1.0) or something like that.)
Your "point" about 7.3% interest is particularly troublesome. Which do you think gives you more accuracy, C code like:
float interest = 7.3; /* or even `double' */
or like:
int interest_times_100 = 730; /* or gcc's `long long int' */
The answer is, the latter. You _cannot_ represent 7.3 in any binary computer's floating-point notation -- unless you bias it by, e.g., a multiple of a power of 10, in which case you might as well use integer and get the increased range (while still maintaining perfect unit precision).
Floating-point on all computers is merely an _approximation_ of a number, regardless of whether that number is an integer or not.
These approximations are perfectly accurate for a limited range of integers (and for a limited set of fractions, such as .5, .25, and so on), but they are quite _inaccurate_ (correct within only a degree of precision) for many, many numbers, especially ones like 7.3 (as in your interest figure) or 2147483647 (a popular number: it is 2**31 - 1, and it cannot be precisely represented using a 32-bit floating-point format, though it can with a 64-bit one).
So, there is NO WAY to represent 1/3 as a floating-point number. 1/2 happens to work in most FP representations, but some IBM mainframes can't handle either 1/2 or 1/4 (I forget which), because they're based on base-16 exponentiation (from what I've read).
Any financial software implemented using binary floatingpoint arithmetic is almost certainly badly designed, and should be considered broken.
Note: I realize you are just a temp at Microsoft. So I don't exactly hold them responsible for your views, and you might not be at all involved with financial software, but it worries me to think that you might be -- not because I use MS software [basically, I don't], but because some of the people who manage my money probably do.
In any case, though you are just a temp, it still looks real bad for Microsoft that you post stuff without researching it, especially on topics highly pertinent to Microsoft's reputation. Despite my ability to avoid using Microsoft software, I do have two relatives working there, one full-time, the other as a consultant, so I'd rather you not hurt MS too much. ;)
Apologies if you really _do_ understand software engineering vis-a-vis financial software, were referring to something else entirely, and I just missed the whole point by coming into the discussion late.
--
"Practice random senselessness and act kind of beautiful."
James Craig Burley, Software Craftsperson    burley@gnu.ai.mit.edu

