Topic: bcd arithmetic
Replies: 42   Last Post: Jul 10, 1996 11:16 AM

 Craig Burley Posts: 5 Registered: 12/12/04
Re: bcd arithmetic
Posted: Jun 13, 1996 6:36 PM

In article <4pmtun\$uto@news.microsoft.com> a-cnadc@microsoft.com (Dann Corbit) writes:

In article <4pllmk\$h3@goanna.cs.rmit.EDU.AU>, rav@goanna.cs.rmit.EDU.AU says...
>---Definitely not. Before floating-point hardware, there was
>only integer. Computations can be done in integer.

True enough, any floating point calculation can be expressed in integer terms,
but a new set of equally difficult problems arises. For instance, 1/3 gives
TLOSS (total loss of significance) from a numerical standpoint in integer
math. So calculations have to be analyzed very carefully to ensure that
truncation error does not become dominant. And for a bank that compounds
interest continuously at 7.3%, what is my interest at 12:32PM on March 14th?
True enough, it is possible to do any sort of calculation using only
integers, but it is not necessarily easy to get there.

If you're using "floating point" as a generic term having _nothing_ to
do with the way languages like C and Fortran use them, and the way
basically all computers (certainly all used by MS software) use them,
then you're basically right. _Decimal_ floating-point is a basic necessity
for some of the kinds of problems you are talking about -- but _decimal_
floating-point is, usually, best implemented using either binary _integer_
or binary-coded _decimal_ arithmetic, NOT _binary_ floating-point
arithmetic (which is the kind you actually mean if you're talking C,
Fortran, computer hardware, and so on).

(Maybe you're advocating representing decimal floating-point via
binary-coded decimal instead of binary -- that can make things like
I/O faster while slowing down intermediate computation in most cases, but
it still doesn't in any way solve the general kinds of problems you
outline above. And, anything done in BCD can be done in integer, normally
using less memory and sometimes less CPU time; otherwise the implementations
offer basically equivalent behavior to the high-level code using them.)
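As an aside, the classic digit-correction trick behind packed-BCD addition can be sketched in C. The helper below is purely illustrative (nothing in the post defines it); it adds two packed-BCD bytes, two decimal digits each, and drops any carry out of the top digit:

```c
#include <assert.h>

/* Add two valid packed-BCD bytes (two decimal digits each).
   Classic correction: after a binary add, any digit that went
   past 9 or carried gets 6 added to push it back into range. */
unsigned bcd_add(unsigned a, unsigned b)
{
    unsigned sum = a + b;

    /* Low digit overflowed past 9, or carried (wrapped below a's)? */
    if ((sum & 0x0F) > 9 || (sum & 0x0F) < (a & 0x0F))
        sum += 0x06;

    /* Same correction for the high digit. */
    if (((sum >> 4) & 0x0F) > 9 || sum > 0xFF)
        sum += 0x60;

    return sum & 0xFF;   /* carry out of the top digit is dropped */
}
```

For example, `bcd_add(0x19, 0x23)` yields `0x42` (19 + 23 = 42), and `bcd_add(0x99, 0x99)` yields `0x98` (198 modulo 100).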

If you think that using binary floating-point solves _any_ of the problems
you show, you're _dangerously_ wrong. (I hope Microsoft is not employing
you as a programmer -- I get the impression you don't know what you're
talking about.) _No_ floating-point format in existence can handle
irrationals with perfect precision -- even decimal floating-point buys
you only the ability to precisely represent _decimal_ fractions (e.g.
7.3, but not 1/3).

Floating-point on modern computers is basically modeled by the following
expression:

the value of a FP number, which has components (sign, exponent, fraction), is:

sign * (fraction * 2**exponent)

where sign is either 1 or -1, fraction is a nonnegative _integer_,
and exponent is an _integer_.

It's the "2**exponent" part that uniquely identifies the context
as _binary_ floating-point.

(We often model this a different way for _specific_ FP formats, denoting
fraction as a value ranging from, e.g., 0 through .5 or 0 through 1.0,
but, _fundamentally_, it is simply an _integral_ value that is shifted
around one way or another, depending on implementation details. And
those ranges are half-open; they should be written [0,.5) and [0,1.0) or
something like that.)

Ask yourself which of the following you think gives you more accuracy: C code like:

float interest = 7.3; /* or even `double' */

or like:

int interest_times_100 = 730; /* or gcc's `long long int' */

The answer is, the latter. You _cannot_ represent 7.3 in any binary
computers' floating-point notation -- unless you bias it by, e.g., a
multiple of a power of 10, in which case you might as well use integer
and get the increased range (while still maintaining perfect unit
precision).
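A hypothetical sketch of the scaled-integer approach (the function name and the choice of basis points, 1/100 of a percent, so 7.3% is 730, are mine, not from the post):

```c
#include <assert.h>

/* Simple interest in integer cents, rate in basis points.
   Rounding to the nearest cent is done explicitly in integer
   math; assumes the intermediate product fits in a long. */
long interest_cents(long principal_cents, long rate_bp)
{
    /* principal * rate / 10000, rounded half up. */
    return (principal_cents * rate_bp + 5000) / 10000;
}
```

For example, 7.3% of $1000.00 (100000 cents) comes out as exactly 7300 cents, i.e. $73.00, with no representation error anywhere.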

Floating-point on all computers is merely an _approximation_ of a
number, regardless of whether that number is an integer or not.

These approximations are perfectly accurate for a limited range of
integers (and for a limited set of fractions, such as .5, .25,
and so on), but they are quite _inaccurate_ (correct within only a
degree of precision) for many, many numbers, especially ones like
7.3 (as in your interest figure) or 2147483647 (a popular
number -- it is 2**31 - 1; it cannot be precisely represented using a
32-bit floating-point format, though it can be with a 64-bit one).

So, there is NO WAY to represent 1/3 as a floating-point number.
1/2 happens to work on most FP representations, but some IBM
mainframes can't handle either 1/2 or 1/4 (I forget which), because
they're based on base-16 exponentiation (from what I've read).

Any financial software implemented using binary floating-point arithmetic
is almost certainly badly designed, and should be considered broken.

Note: I realize you are just a temp at Microsoft. So I don't exactly
hold them responsible for your views, and you might not be at all
involved with financial software, but it worries me to think that
you might be -- not because I use MS software [basically, I don't],
but because some of the people who manage my money probably do.

In any case, though you are just a temp, it still looks real bad
for Microsoft that you post stuff without researching it, especially
on topics highly pertinent to Microsoft's reputation. Despite my
ability to avoid using Microsoft software, I do have two relatives
working there, one full-time, the other as a consultant, so I'd
rather you not hurt MS too much. ;-)

Apologies if you really _do_ understand software engineering vis-a-vis
financial software and were referring to something else entirely,
and I just missed the whole point by coming into the discussion
late.
--

"Practice random senselessness and act kind of beautiful."
James Craig Burley, Software Craftsperson burley@gnu.ai.mit.edu
