
Ask Dr. Math - Questions and Answers from our Archives
_____________________________________________

Linear Interpolation Methods to Approximate Roots


Date: 12/25/2000 at 07:06:34
From: Threepwood
Subject: Approximation of roots

While combing the Web for resources on linear interpolation methods 
for approximating roots of non-linear equations, I noticed several 
different criteria for ending the iteration.

For an interval [xa, xb] with new estimate xc:

(1) end when xa ~= xb (approximately equal) to the required number of 
    decimal places or significant figures

(2) end when xc shows no change up to a certain tolerance or precision

(3) end when f(xc)~=0
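
In code, I picture the three tests as something like this sketch 
(Python; the names and the tolerance are just illustrative):

     def stopping_tests(xa, xb, xc, xc_prev, f, tol=1e-8):
         """The three candidate criteria as boolean tests."""
         c1 = abs(xb - xa) < tol        # (1) endpoints agree
         c2 = abs(xc - xc_prev) < tol   # (2) xc has stopped changing
         c3 = abs(f(xc)) < tol          # (3) f(xc) ~= 0
         return c1, c2, c3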

Unless I'm mistaken, in the linear interpolation formula

     xc = xb - yb*((xb - xa)/(yb - ya))

where ya = f(xa) and yb = f(xb), for many functions one endpoint will 
remain stationary while the other endpoint shifts towards the root. Thus, 
criterion (1) will never be reached. Am I correct in assuming that 
criterion (1) is more or less unacceptable?
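
A minimal sketch of the iteration (Python; the example function and 
the bracketing assumption are mine) shows this stationary-endpoint 
behavior:

     def interp_root(f, xa, xb, tol=1e-10, max_iter=100):
         """Root by repeated linear interpolation, assuming
         f(xa) and f(xb) have opposite signs."""
         ya, yb = f(xa), f(xb)
         for _ in range(max_iter):
             xc = xb - yb*(xb - xa)/(yb - ya)
             yc = f(xc)
             if abs(yc) < tol:          # criterion (3)
                 return xc
             if ya*yc < 0:              # root in [xa, xc]: xb moves
                 xb, yb = xc, yc
             else:                      # root in [xc, xb]: xa moves
                 xa, ya = xc, yc
         return xc

     # For a convex function like f(x) = x^2 - 2 on [1, 2], only xa
     # ever moves; xb stays at 2, so |xb - xa| never becomes small.
     print(interp_root(lambda x: x*x - 2, 1.0, 2.0))   # ~1.41421356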

Criteria (2) and (3) seem equivalent to me; both will be satisfied at 
roughly the same point in the process. Of these three, which is the 
best criterion for deciding when an approximation of the required 
precision has been found?

For the Newton-Raphson method, I've found two general derivations/ 
explanations. One uses geometry: the tangent at the first guess is 
shown to cross the x-axis closer to the root, the tangent at that 
point cuts closer still, and so on. The other begins with a Taylor 
expansion of f(x):

     f(x) ~= f(x0) + f'(x0)(x - x0)

where x0 is the initial guess. Setting f(x) = 0 and solving for x, we get:

     x1 = x0 - f(x0)/f'(x0)

Is one explanation more accurate than the other? Or are there better 
ways to derive the method?

Thanks.


Date: 12/26/2000 at 20:57:39
From: Doctor Fenton
Subject: Re: Approximation of roots

Dear Threepwood,

You've asked some very good questions.  I'm not an expert on numerical 
analysis, but I think I might shed some light on the issues you raise.

First of all, when you say "linear interpolation" for finding roots, 
there are actually two methods that use linear interpolation: "False 
Position" (or, in the Latin wording, "Regula Falsi"), and the Secant 
Method. False Position always brackets the root. That is, at each 
stage, the method produces two numbers a_n and b_n, so that the root r 
is between them: a_n < r < b_n. At the next stage, it replaces one of 
the numbers a_n and b_n with a new value and leaves the other value 
unchanged, and the new values a_(n+1) and b_(n+1) still bracket the 
root. The Secant Method produces a sequence of numbers that do not 
necessarily bracket the root.
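
To make the contrast concrete, here is a minimal Secant Method sketch 
(in Python; the tolerance and iteration cap are illustrative, not 
part of any standard):

     def secant(f, x0, x1, tol=1e-12, max_iter=50):
         """Secant Method: keep the two most recent iterates,
         whether or not they bracket the root."""
         for _ in range(max_iter):
             f0, f1 = f(x0), f(x1)
             if f1 == f0:               # secant line is flat: give up
                 break
             x2 = x1 - f1*(x1 - x0)/(f1 - f0)
             if abs(x2 - x1) < tol:     # your criterion (2)
                 return x2
             x0, x1 = x1, x2            # always drop the older point
         return x1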

Your first criterion might be more useful for the False Position 
method, but you are correct that for certain functions only one 
endpoint tends to move, so by that criterion the method will appear 
to converge very slowly, if at all.

The second and third criteria would be more appropriate for the 
Secant Method, but again, there are functions for which the second 
criterion will produce very slow convergence.

The third criterion can be useful, but it depends upon whether you are 
primarily interested in a value that makes the function small, or 
whether you really need the correct value of the (exact) zero. If the 
root is a multiple root, so that the function behaves roughly like a 
quadratic or cubic in (x-r), where r is the exact root, then the 
function will be very flat near the root. For example, at a triple 
root, x might be .01 units from r while f(x) ~ (.01)^3 = 10^(-6), so 
relatively poor approximations of the root r will produce very small 
function values.
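
A two-line check (a Python sketch with an illustrative triple root) 
shows the effect:

     f = lambda x: (x - 1.0)**3    # triple root at r = 1
     print(f(1.01))                # ~1e-06, though x is .01 from r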

So I would say that there is no "universal" criterion. The best one to 
use depends upon what you are trying to find: a good approximation to 
the root, or just a value x which produces a small f(x).

Also, unless you are studying the question just to explore the 
question of how well linear interpolation methods find roots, it 
should be clear that these aren't particularly great methods. If you 
really have a practical problem, and not just an exercise in numerical 
analysis, you would be better off using a more sophisticated 
root-finding method, such as the 'Van Wijngaarden-Dekker-Brent 
Method', which is explained (along with source code) in "Numerical 
Recipes" by Press, et al.

The two methods of deriving Newton-Raphson you describe are the same: 
the Taylor series truncated after the first-order term is just the 
tangent line, which is what your "geometric" derivation uses. This is 
a very powerful 
method, provided one starts sufficiently near the root, so that the 
graph of f is monotone, and preferably of the same concavity near the 
root (which isn't possible if the root is also a point of inflection). 
As with the linear methods, Newton-Raphson can fail (often 
spectacularly) for some functions, if one doesn't start near enough to 
the root.
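
Here is a minimal Newton-Raphson sketch (Python; the tolerance and 
the failure example are illustrative):

     def newton(f, fprime, x0, tol=1e-12, max_iter=50):
         """Newton-Raphson: follow the tangent line from each guess."""
         x = x0
         for _ in range(max_iter):
             fx = f(x)
             if abs(fx) < tol:
                 return x
             x = x - fx/fprime(x)   # where the tangent crosses y = 0
         return x

     print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.5))  # ~1.41421356
     # Classic failure: for f(x) = x^(1/3) the step is x - 3x = -2x,
     # so every iteration doubles the distance from the root at 0.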

I hope this is of some help. I'd recommend reading the chapter on 
root-finding in _Numerical Recipes_, which is a good book, especially 
for outlining the main ideas and presenting generally very good 
methods.

If you have further questions, please write us again.

- Doctor Fenton, The Math Forum
  http://mathforum.org/dr.math/   
    
Associated Topics:
College Analysis
College Calculus

Ask Dr. Math™
© 1994-2013 The Math Forum
http://mathforum.org/dr.math/