


Re: Logarithm Algorithm Questions
Posted:
Apr 18, 2005 6:18 PM


On Mon, 18 Apr 2005 18:28:30 +0000 (UTC), nospam@nospam.com (Paul Ciszek) wrote:
>In article <6jd061d5mhp3r39svab5kah472vmmpq3rl@4ax.com>,
>Stan Liou <stanliou@yahoo.com> wrote:
>>
>> ... and raising of
>>a number (e, precomputed) to an integer power can be
>>done very efficiently on a binary computer, via repeated
>>squaring or some other method.
>
>Yes, that occurred to me. But is that a *good* way to do it?
>i.e., does multiplying e by itself 700 times, even with
>repeated squaring to reduce the number of operations, introduce
>too much error?
Good question, and the short answer is yes, but not fatally so. While the absolute error grows, appropriately enough, exponentially, what we are really interested in is the relative error, since any answer we obtain would be normalized to some predetermined precision anyway (24 or 53 bits of precision in standard IEEE 754 floating-point).
Let's say the actual value is e, and the precomputed one is p = e+t, with t>0 being some error term (this can be repeated for t<0, with some differences). The relative error is E = [p^n - e^n]/e^n = (1+t/e)^n - 1.
If the floating-point mantissa is N bits, then this error must be less than 2^{-N}, which, after some algebraic manipulation, can be rearranged to: (1+(t/e)) < [1+2^{-N}]^(1/n).
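(To put rough numbers on that, as my own back-of-the-envelope rather than anything from the thread: with N = 53 and n as large as 710, the bound is (1+t/e) < [1+2^{-53}]^(1/710), i.e. t/e < about 2^{-62.5}. So, ignoring the roundoff of the multiplications themselves, carrying the precomputed e with ten or so bits beyond double precision already keeps the final relative error under 2^{-53}.)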
Thus, instead of the O(a^n) exponential growth of the absolute error, the relative error only grows as O(a^(1/n)), which is much more tractable. If one knows beforehand how much precision one needs for the output and the range of possible arguments (in IEEE double precision, the argument for the exponential cannot be greater than about 710, since the result would be outside the range of representable numbers if it were), it is possible to calculate how many additional bits of precision one needs in order to make the output conform to the given specification, and simply do the calculation with intermediate results in higher precision than the final one. Even for functions implemented in software instead of hardware instructions, there is a certain amount of leeway, since the floating-point numbers are stored in 80 bits on the processor, even though memory exchanges are only done with 64 (or 32, in single-precision) bits.
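Here is a minimal sketch of that idea in C (my own toy code and names, not a production exp): the integer power of a precomputed e by repeated squaring, with intermediates kept in long double so that the 80-bit format does the guarding, and a single rounding to double at the end.

    #include <math.h>
    #include <stdio.h>

    /* Toy sketch: e^n for integer n by repeated squaring, intermediates kept
       in long double (the 80-bit format mentioned above), rounded once.     */
    double exp_int(unsigned n)
    {
        long double base = 2.718281828459045235360287471352662498L; /* precomputed e */
        long double result = 1.0L;

        while (n > 0) {
            if (n & 1u)          /* fold in the current square when this bit of n is set */
                result *= base;
            base *= base;        /* square for the next binary digit of n */
            n >>= 1;
        }
        return (double)result;
    }

    int main(void)
    {
        /* compare against the library exp() near the top of the usable range */
        printf("%.17g\n%.17g\n", exp_int(700), exp(700.0));
        return 0;
    }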
>Which is also how I have planned on making the natural log
>work. My interest in the base 2 log was only if in some
>way it is much easier than natural log. The two functions
>differ from each other by only a multiplicative constant.
>
>My original scheme for ln(x) went as follows:
>
>Store the exponent of x in integer E and the mantissa in
>floating point M.
>If M > (4/3), then increment E and let M = M/2.
>Q = 1-M
>Use Taylor series for ln(1-Q); since Q is between -1/3 and 1/3,
>it should converge very quickly.
>Take the result of the Taylor series and add E*ln(2) [precalculated].
>Return this number as ln(x).
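Just to make the quoted scheme concrete, here is a rough sketch of my reading of it (again my own toy code, not yours). C's frexp() hands back a mantissa in [0.5, 1), so the adjustment runs in the opposite direction from your description, but it lands in the same [2/3, 4/3] window.

    #include <math.h>
    #include <stdio.h>

    /* Sketch of the quoted scheme: reduce x to M*2^E with M in [2/3, 4/3),
       sum the Maclaurin series of ln(1-Q) for Q = 1-M, then add E*ln(2).  */
    double ln_taylor(double x)
    {
        static const double LN2 = 0.69314718055994530942;  /* precalculated ln(2) */
        int E;
        double M = frexp(x, &E);        /* x = M * 2^E, with M in [0.5, 1) */

        if (M < 2.0/3.0) {              /* push M into [2/3, 4/3) */
            M *= 2.0;
            E -= 1;
        }

        double Q = 1.0 - M;             /* |Q| <= 1/3 */
        double sum = 0.0, power = Q;    /* sum builds Q + Q^2/2 + Q^3/3 + ... */
        for (int k = 1; k <= 40; k++) {
            sum += power / k;
            power *= Q;
        }
        return E * LN2 - sum;           /* ln(1-Q) = -sum */
    }

    int main(void)
    {
        printf("%.17g\n%.17g\n", ln_taylor(12345.678), log(12345.678));
        return 0;
    }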
I tried improving it by computing the Euler transform of the Maclaurin series of log(1+x), and got log(1+x) = Sum{k>0}[ (1-(1-x)^k)/(k*2^k) ], or, with a translation of the argument, log(2-x) = Sum{k>0}[ (1-x^k)/(k*2^k) ], but then I realized that your algorithm is better still, since it is only concerned with the range Q in [-1/3, 1/3], effectively "introducing" a factor of 3^k (or better, if |Q| << 1/3) in the denominator.
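(For anyone who wants to check the transformed series numerically, here is a throwaway sketch; the function name and term count are my own choices.)

    #include <math.h>
    #include <stdio.h>

    /* Throwaway check of log(2-x) = Sum_{k>0} (1 - x^k)/(k * 2^k). */
    double log_2_minus(double x, int terms)
    {
        double sum = 0.0, xk = 1.0, pk = 1.0;
        for (int k = 1; k <= terms; k++) {
            xk *= x;                        /* x^k */
            pk *= 2.0;                      /* 2^k */
            sum += (1.0 - xk) / (k * pk);
        }
        return sum;
    }

    int main(void)
    {
        /* x = 0 recovers the familiar ln 2 = Sum 1/(k*2^k) */
        printf("%.15f  %.15f\n", log_2_minus(0.25, 60), log(2.0 - 0.25));
        return 0;
    }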
I suppose one could do another Euler transform, and precompute the resulting truncated series, but that seems more trouble than it's worth. There is another approach that is superior, however, if one is willing to precompute coefficients. For Q in the same range and the error within 1e-15, the Maclaurin series gives log(Q) = A - Q*P27(Q), where P27(Q) is a 27th-degree polynomial in Q, and A is a constant. This is not all that good. Taking the continued fraction

                 x    1^2 x   1^2 x         n^2 x     n^2 x
    log(1+x) =  ----  -----   -----   ...   ------   --------   ...
                 1+     2+      3+           (2n)+   (2n+1)+

and truncating it (with the constant term in the denominator), one can obtain log(Q) = A - Q*P7(Q)/R7(Q), where P7 and R7 are seventh-degree polynomials, still with the same error bound (1e-15). [Adding more terms does not significantly improve the difference between it and the hardware-computed value; the roundoff error catches up right about here.] There is a somewhat nicer continued fraction for log of (1+x)/(1-x).
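I won't write out P7 and R7 here, but the shape of the thing is easy to see by evaluating the continued fraction bottom-up at a fixed depth; a truncation at depth 2n+1, once the fractions are cleared, collapses to x times a ratio of two degree-n polynomials. The sketch below (my own code and names) does the numerical evaluation only.

    #include <math.h>
    #include <stdio.h>

    /* Evaluate the continued fraction for log(1+x) bottom-up at fixed depth.
       Partial numerators: a_1 = x, a_k = floor(k/2)^2 * x for k >= 2;
       partial denominators: b_k = k.                                       */
    double log1p_cf(double x, int depth)
    {
        double v = depth;                    /* start from b_depth */
        for (int k = depth - 1; k >= 1; k--) {
            int m = (k + 1) / 2;             /* floor((k+1)/2) */
            v = k + (double)m * m * x / v;   /* b_k + a_{k+1}/v */
        }
        return x / v;                        /* a_1 / (b_1 + ...) */
    }

    int main(void)
    {
        double q = 1.0/3.0;                  /* edge of the reduced range */
        printf("%.17g\n%.17g\n", log1p_cf(q, 15), log1p(q));
        return 0;
    }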
The Maclaurin series should be better in the long run (for stricter and stricter precisions), simply because its error term is exponential with respect to the number of terms in the truncation [O(1/3^n)], but the continued fraction seems to be a strong contender for the kind of precision actually used in computing. I wonder if there is anything better still.
>In the absence of a magic algorithm, I suspect that log2(x)
>is calculated by finding the natural log of M by the method
>I described, multiplying that by (1/ln(2)) [precalculated], and
>adding E.
If there is something substantially different and superior for log2, I neither know of it nor can conjure one, so I suspect your suspicions are correct.
>And here I thought it was easier to calculate by taking
>the value for the previous term, multiplying by x, and
>dividing by N. However, I suppose that does introduce
>too many divisions.
It depends on the computer. Historically, floating-point divisions took much more processor time than multiplications, so it made quite a lot of sense to precompute those coefficients. Modern processors do not have such long division times, but if one really wants to squeeze clock cycles, I suppose it would still be advantageous to some degree. Of course, the newer processors do not need a software implementation of such an ordinary function in the first place, because it is already implemented in hardware.
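In case the "precompute the coefficients" remark is too terse, here is a small sketch (my own arrangement and names) of the log series with the 1/k reciprocals filled in once, so the per-term work in the loop is multiplication and addition only.

    #include <math.h>
    #include <stdio.h>

    #define NTERMS 28      /* 28 terms: Q times a 27th-degree polynomial, as above */

    static double inv_k[NTERMS + 1];
    static int table_ready = 0;

    /* Returns -ln(1-Q) for |Q| <= 1/3, with the divisions done once, up front. */
    double neg_log1m(double Q)
    {
        if (!table_ready) {
            for (int k = 1; k <= NTERMS; k++)
                inv_k[k] = 1.0 / k;
            table_ready = 1;
        }

        double sum = 0.0, power = Q;
        for (int k = 1; k <= NTERMS; k++) {
            sum += power * inv_k[k];        /* Q^k * (1/k), no divide in the loop */
            power *= Q;
        }
        return sum;
    }

    int main(void)
    {
        printf("%.17g\n%.17g\n", neg_log1m(0.25), -log(1.0 - 0.25));
        return 0;
    }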
-Stan Liou



