Newsgroup: alt.math.recreational.independent
Subject: Re: Logarithm Algorithm Questions
From: Stan Liou
Date: Apr 18, 2005 6:18 PM

On Mon, 18 Apr 2005 18:28:30 +0000 (UTC), nospam@nospam.com
(Paul Ciszek) wrote:

>In article <6jd061d5mhp3r39svab5kah472vmmpq3rl@4ax.com>,
>Stan Liou <stanliou@yahoo.com> wrote:

>>
>> ... and raising of
>>a number (e, precomputed) to an integer power can be
>>done very efficiently on a binary computer, via repeated
>>squaring or some other method.

>
>Yes, that occurred to me. But is that a *good* way to do it?
>i.e., does multiplying e by itself 700 times, even with
>repeated squaring to reduce the number of operations, introduce
>too much error?


Good question, and the short answer is yes, but not fatally
so. While the absolute error grows, appropriately enough,
exponentially, what we are really interested in is the
relative error, since any answer we obtain would be
normalized to some predetermined precision anyway (24 or 53
bits of precision in standard IEEE-754 floating-point).

Let's say the actual value is e, and the precomputed one
is p, with p = e+t, t>0 being some error term (this can be
repeated for t<0, with some differences). The relative
error of the result is then
E = (p^n - e^n)/e^n = (1+t/e)^n - 1.

If the floating-point mantissa is N bits, then this error
must be less than 2^{-N}, which, after some algebraic
manipulation, can be rearranged to:
(1+(t/e)) < [1+2^{-N}]^(1/n).
Since [1+2^{-N}]^(1/n) is roughly 1 + 2^{-N}/n, this amounts
to requiring only about log2(n) extra bits of accuracy in
the precomputed constant.

Thus, instead of the O(a^n) exponential growth of the
absolute error, the relative error only grows as O(a^(1/n)),
which is much more tractable. If one knows beforehand how
much precision one needs for the output and the range of
possible arguments (in IEEE double-precision, the argument
for the exponential cannot be greater than about 710, since
the result would otherwise fall outside the range of
representable numbers), it is possible to calculate
how many additional bits of precision one needs in order to
make the output conform to the given specification, and
simply do the calculation with intermediate results in
higher precision than the final one. Even for functions
implemented by software instead of hardware instructions,
there is a certain amount of leeway, since the floating-
point numbers are stored in 80 bits on the processor, even
though memory exchanges are only done with 64 (or 32,
in single-precision) bits.
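
(To make that concrete, here is a minimal Python sketch of
repeated squaring with the stored double-precision constant;
it is only an illustration of the error behaviour, and the
test exponents are an arbitrary choice of mine.)

    import math

    def powi(base, n):
        # base**n for integer n >= 0, via repeated squaring
        # (O(log n) multiplications)
        result = 1.0
        while n > 0:
            if n & 1:           # low bit of n set: fold current square in
                result *= base
            n >>= 1
            if n:
                base *= base    # square for the next binary digit of n
        return result

    e = math.e  # precomputed constant, correct to double precision

    for n in (10, 100, 700):
        rel_err = abs(powi(e, n) - math.exp(n)) / math.exp(n)
        # the relative error stays small, growing only modestly with n
        # (roughly like n times the rounding of the stored constant)
        print(n, rel_err)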

>Which is also how I have planned on making the natural log
>work. My interest in the base 2 log was only if in some
>way it is much easier than natural log. The two functions
>differ from each other by only a multiplicative constant.
>
>My original scheme for ln(x) went as follows:
>
>Store the exponent of x in integer E and the mantissa in
>floating point M.
>If M>(4/3), then increment E and let M=M/2.
>Q = 1-M
>Use Taylor series for ln(1-Q); since Q is between -1/3 and 1/3,
>it should converge very quickly.
>Take the result of the Taylor series and add E*ln(2)[precalculated].
>Return this number as ln(x).
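
(Here is a minimal Python sketch of that scheme, just to pin
down the details; math.frexp does the exponent/mantissa split,
and the fixed term count is an arbitrary choice of mine, not
part of your description.)

    import math

    LN2 = 0.6931471805599453   # precomputed ln(2)

    def ln(x, terms=40):
        # assumes x > 0 and finite
        # split x = M * 2**E with M in [1, 2), via frexp's m in [0.5, 1)
        m, e = math.frexp(x)
        M, E = 2.0 * m, e - 1
        if M > 4.0 / 3.0:      # keep M in (2/3, 4/3] so that |Q| <= 1/3
            M *= 0.5
            E += 1
        Q = 1.0 - M
        # ln(1-Q) = -(Q + Q^2/2 + Q^3/3 + ...), truncated after `terms` terms
        s, qk = 0.0, Q
        for k in range(1, terms + 1):
            s += qk / k
            qk *= Q
        return -s + E * LN2

    print(ln(10.0), math.log(10.0))   # should agree to about machine precision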


I tried improving it by computing the Euler transform
of the Maclaurin series of log(1+x), and got
log(1+x) = Sum{k>0}[ (1-(1-x)^k)/(k*2^k) ],
or, with a translation of the argument,
log(2-x) = Sum{k>0}[ (1-x^k)/(k*2^k) ],
but then I realized that your algorithm is better
still, since it is only concerned with the range
Q in [-1/3,1/3], effectively "introducing" a factor
of 3^k (or better, if |Q|<<1/3) in the denominator.
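
(A quick numerical check of that transformed series, in
Python; the truncation depth is arbitrary.)

    import math

    def log1p_euler(x, terms=60):
        # log(1+x) = Sum_{k>0} (1 - (1-x)^k) / (k * 2^k)
        return sum((1.0 - (1.0 - x) ** k) / (k * 2.0 ** k)
                   for k in range(1, terms + 1))

    print(log1p_euler(1.0), math.log(2.0))   # both ~0.6931471805599453
    print(log1p_euler(0.5), math.log(1.5))   # both ~0.4054651081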

I suppose one could do another Euler transform,
and precompute the resulting truncated series, but
that seems more trouble than it's worth. There is
another approach that is superior, however, if one
is willing to precompute coefficients. For Q in the
same range and the error within 1e-15, the Maclaurin
series gives log(1-Q) = A*Q*P27(Q), where P27(Q) is
a 27th-degree polynomial in Q and A is a constant.
This is not all that good. Taking the continued fraction
              x     1^2 x    1^2 x          n^2 x       n^2 x
log(1+x) = ------  -------  -------  ...  ---------  -----------  ...
             1 +     2 +      3 +           (2n) +     (2n+1) +
and truncating it (with the constant term in the
denominator), one can obtain log(1-Q) = A*Q*P7(Q)/R7(Q),
where P7 and R7 are seventh-degree polynomials, still
with the same error bound (1e-15). [Adding more
terms does not significantly improve the difference
between it and the hardware-computed value; the
roundoff error catches up right about here.] There
is a somewhat nicer continued fraction for log of
(1+x)/(1-x).
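
(And, to see how fast the continued fraction converges, a
small Python sketch that evaluates it bottom-up at a fixed
depth; the depth of 16 levels is my own choice, roughly
comparable to the P7/R7 truncation above.)

    import math

    def log1p_cf(x, depth=16):
        # log(1+x) = x/(1 + 1^2 x/(2 + 1^2 x/(3 + 2^2 x/(4 + 2^2 x/(5 + ...)))))
        # partial denominators are 1, 2, 3, ...; the partial numerator at
        # level m >= 2 is floor(m/2)^2 * x
        tail = 0.0
        for m in range(depth, 1, -1):
            tail = (m // 2) ** 2 * x / (m + tail)
        return x / (1.0 + tail)

    for q in (1.0 / 3.0, -1.0 / 3.0, 0.1):
        print(q, log1p_cf(q) - math.log(1.0 + q))   # errors around 1e-15 or below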

The Maclaurin series should be better in the long
run (for stricter and stricter precisions), simply
because its error term is exponential with respect
to the number of terms in the truncation [O(1/3^n)],
but the continued fraction seems to be a strong
contender for the kind of precision actually used
in computing. I wonder if there is anything better
still.
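
(A quick back-of-the-envelope check of that error term in
Python: counting how many Maclaurin terms the worst case
|Q| = 1/3 needs as the target precision tightens; 24, 53 and
113 bits correspond to single, double and quad mantissas.)

    def terms_needed(eps):
        # smallest n with (1/3)^(n+1)/(n+1) below eps, a rough bound on the
        # truncation error of ln(1-Q) after n terms at |Q| = 1/3
        n = 1
        while (1.0 / 3.0) ** (n + 1) / (n + 1) >= eps:
            n += 1
        return n

    for bits in (24, 53, 113):
        print(bits, terms_needed(2.0 ** -bits))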

>In the absence of a magic algorithm, I suspect that log2(x)
>is calculated by finding the natural log of M by the method
>I described, multiplying that by (1/ln(2))[precalculated], and
>adding E.


If there is something substantially different and superior
for log2, I neither know of one nor am able to conjure one,
so I suspect your suspicions are correct.

>And here I thought it was easier to calculate by taking
>the value for the previous term, multiplying by x, and
>dividing by N. However, I suppose that does introduce
>too many divisions.


It depends on the computer. Historically, floating-point
divisions were much more intensive in terms of processor
time than multiplications, so it made quite a lot of sense
to pre-compute those coefficients. Modern processors do
not have such long division times, but if one really wants
to squeeze clock cycles, I suppose it would still be
advantageous to some degree. Of course, the newer
processors do not need a software implementation of such
an ordinary function in the first place, because it is
already implemented in hardware.
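
(In Python the two ways of generating the terms look like
this; exp(x) with its x^k/k! terms is just a convenient
stand-in for whatever series one is actually summing.)

    import math

    def exp_divide(x, terms=20):
        # each term from the previous one by "multiply by x, divide by k"
        total, term = 1.0, 1.0
        for k in range(1, terms + 1):
            term = term * x / k
            total += term
        return total

    INV = [1.0 / k for k in range(1, 21)]   # precomputed reciprocals 1/k

    def exp_premul(x, terms=20):
        # same sum, but the per-term division is now a multiplication
        total, term = 1.0, 1.0
        for k in range(1, terms + 1):
            term = term * x * INV[k - 1]
            total += term
        return total

    print(exp_divide(1.0), exp_premul(1.0), math.exp(1.0))   # all ~2.718281828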


---
Stan Liou


