>> Just my 2 cents.
>>
>> Let X be the number you want exp(X) for.
>> The idea is to reduce the interval for X to [0, ln(2)].
>>
>> 1) Rewrite exp(X) as 2^p * exp(x).
>> Do this by multiplying X by log_2(e); the integer part of the result
>> is p. Calculate x by x = X - p*ln(2).
>>
>> So the problem is reduced to an approximation for x in [0, ln(2)], and
>> once we have this, multiply by 2^p, which can be done quite quickly
>> and without error.
>
> Yes, I figured that out already. The Altivec exp estimate can
> calculate the 2^p accurately for me.
>
>> 2) Approximate exp(x) in [0, ln(2)] by a minimax approximation.
>>
>> The following is a 6th-degree polynomial minimax approximation with a
>> relative error < 10e-8 on the interval [0, ln(2)]. Coefficients run
>> from a_0 to a_6, where a_i belongs to x^i:
>>
>> 1.000000002644271861135635165803
>> 0.999999630676466160313515392503
>> 0.500008415450545865703228178366
>> 0.166594937239279108432157593360
>> 0.041956255273539448094181805064
>> 0.007740059180886275586488149490
>> 0.001973684356346945984956068724
>
> That was brilliant! I plugged the coefficients into the polynomial,
> and the results were within 5 ulps of the library exp. (I had weird
> rounding errors when I used squaring reduction, Taylor polynomials,
> and straightforward Horner-style factoring, so you've saved me from
> tearing the last bits of my hair out.) All within 9 multiplies.
Some advice: try to break the link in your head between "approximation" and "Taylor expansion", and replace it with "approximation" and "orthogonal polynomials" :-) Taylor is good in a small neighbourhood around the point of interest. Orthogonal polynomials are good over quite a large interval, to which you can almost always reduce your initial interval.
> Now to slice off one more multiply by using the Knuthean polynomial
> transform suggested by a post in a different part of this thread...
>
> Qn: what tool did you use to generate the minimax polynomial?

Maple 9.51 — the numapprox package, minimax function. The classical algorithm associated with minimax approximations is the Remez algorithm (the second version, from 1934; I don't have the exact reference at hand).
Be careful with these "addition for multiplication" cost reductions. Don't just implement the formula Knuth describes — also read his error analysis (in fact, try reading the whole book: The Art of Computer Programming, Volume 2: Seminumerical Algorithms — a good read, and very rich in detailed analysis). I would stick to Horner; saving one multiply isn't worth the risk of "unexpected" rounding errors.