"Bruno Luong" <email@example.com> wrote in message <firstname.lastname@example.org>... > "SK " <email@example.com> wrote in message <firstname.lastname@example.org>... > > > > All right, suppose I say all integers between -2^52 and 2^52 can be represented exactly - I did say "within some range" without specifying what that range was. > > That's better. > > All integers are sum of powers of 2 (or powers of x, x is whatever you like), so there is no point in involving the basis when discussing integer in floating-poimt coding. The range 2^52 come from the fact that 52 bit is used by the mantissa of double precision IEEE. > > Very often miss-believing occur when user manipulate fractional number such as 0.1. The number 0.1 = 1/10 = 1/(2*5) cannot be represented exactly in binary basis, and human notation for 0.1 happens to be finite because we count in 10 basis. On the other hand 0.1 cannot be codded finitely in binary basis (computer), just like 1/3 cannot be written with finite decimal digit numbers. Therefore non-accuracy occurs in arithmetic operations for decimal fractional numbers. > > Bruno
Thank you for your help.
By the way the following is from the Wikipedia article on IEEE 754 - 1985 (http://en.wikipedia.org/wiki/IEEE_754-1985) "In 1976 Intel began planning to produce a floating point coprocessor. Dr John Palmer, the manager of the effort, persuaded them that they should try to develop a standard for all their floating point operations. William Kahan was hired as a consultant; he had helped improve the accuracy of Hewlett Packard's calculators. Kahan initially recommended that the floating point base be decimal but the hardware design of the coprocessor was too far advanced to make that change."
Perhaps it would have been more "human" for the floating point base to be in decimal. Not sure of any other consequences though.
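For what it's worth, decimal floating point does exist in software. A small sketch using Python's standard `decimal` module shows what a decimal base buys you, and what it does not:

```python
from decimal import Decimal

# In a decimal floating-point system, 0.1 is stored exactly,
# so the classic binary rounding surprise disappears:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# ...but fractions like 1/3 are still non-terminating in base 10,
# so rounding error merely moves to different numbers:
print(Decimal(1) / Decimal(3))
```

So a decimal base would make results match pencil-and-paper decimal arithmetic more often, but it cannot eliminate rounding error in general.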