Date: 08/20/2003 at 03:09:19
From: Ricky Wu
Subject: Lagrange interpolation

Dear Dr. Math,

I am trying to learn Lagrange interpolation.

1. Could you please give me a detailed explanation of Lagrange
   interpolation?
2. Could you tell me whether there is any theory that underlies
   the method?
Date: 08/20/2003 at 19:44:02
From: Doctor Fenton
Subject: Re: Lagrange interpolation

Hi Ricky,

Thanks for writing to Dr. Math. The idea of interpolation is to find
a function of a specified form which passes through a given list of
points. Lagrange interpolation uses polynomials.

You should already know that given two points (x1,y1) and (x2,y2)
with different x-coordinates, there is exactly one line through the
two points. It is also true that given three points (x1,y1), (x2,y2),
and (x3,y3), with different x-coordinates, either the three points lie
on a line, or else there is exactly one parabola (second degree
polynomial y = ax^2 + bx + c) through the three points. If the three
points lie on a line, then there is a line y = bx + c passing through
the points. In either case, there is a polynomial of degree at most 2
passing through the three points.

In general, given a list of n points

   (x(1),y(1)), (x(2),y(2)), ..., (x(n),y(n)) ,

with all the x-coordinates different, there will be a polynomial of
degree at most (n-1) passing through the given points. If you have
four points, there will be a polynomial of degree at most 3 through
them; if you have 10 points, there will be a polynomial of degree at
most 9 through them. You can see that this is the case by using the
following kind of argument: in fact, you can write an explicit formula
for the polynomial.

First, look at the line through two points (x1,y1) and (x2,y2). You
probably know the two-point formula for a line, but let us take
another approach. The polynomial P(x) = x - x2 is a first degree
polynomial which is 0 at x2: P(x2) = 0. Since P(x1) = x1 - x2, the
polynomial

            x - x2
   P1(x) = -------
           x1 - x2

is a first degree polynomial which has the value 1 at x1 and 0 at x2.
Similarly,

            x - x1
   P2(x) = -------
           x2 - x1

has P2(x1) = 0 and P2(x2) = 1. Then

   P(x) = y1*P1(x) + y2*P2(x)

is again a polynomial of degree at most 1 (the y's are just numbers),
and

   P(x1) = y1*P1(x1) + y2*P2(x1) = y1*1 + y2*0 = y1 ,

while

   P(x2) = y1*P1(x2) + y2*P2(x2) = y2 .
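[The two-point construction above can be sketched in a few lines of
Python. This is an editorial illustration, not part of the original
exchange; the function name lagrange_line is my own, and P1, P2
follow the letter of the formulas above.]

```python
def lagrange_line(x1, y1, x2, y2):
    """Return the Lagrange-form line through (x1, y1) and (x2, y2).

    P1 is 1 at x1 and 0 at x2; P2 is 1 at x2 and 0 at x1, so the
    weighted sum y1*P1 + y2*P2 passes through both points.
    """
    def P1(x):
        return (x - x2) / (x1 - x2)

    def P2(x):
        return (x - x1) / (x2 - x1)

    def P(x):
        return y1 * P1(x) + y2 * P2(x)

    return P

# The line through (0, 1) and (2, 5) is y = 2x + 1.
line = lagrange_line(0, 1, 2, 5)
print(line(0), line(2), line(1))  # 1.0 5.0 3.0
```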
Given three points (x1,y1), (x2,y2), and (x3,y3), we can find a
quadratic polynomial P1(x) which is equal to 1 at x1 and equal to 0 at
x2 and x3. The Factor Theorem says that a polynomial P(x) is divisible
by a linear factor (x-r) if and only if P(r) = 0, that is, if r is a
root of P(x) = 0. Since P1(x2) = 0 and P1(x3) = 0, this means that
P1(x) must be of the form

   P1(x) = A(x-x2)(x-x3) .

Since P1(x1) = 1, then 1 = A(x1-x2)(x1-x3), which tells us that

              1
   A = ---------------- ,
       (x1-x2)(x1-x3)

so

            (x-x2)(x-x3)
   P1(x) = ---------------- .
           (x1-x2)(x1-x3)

Similarly, we can find quadratic polynomials P2(x) and P3(x) such that

   P2(x1) = 0 , P2(x2) = 1 , P2(x3) = 0 ,

and

   P3(x1) = 0 , P3(x2) = 0 , P3(x3) = 1 .

In fact,

            (x-x1)(x-x3)                    (x-x1)(x-x2)
   P2(x) = ----------------  and  P3(x) = ---------------- .
           (x2-x1)(x2-x3)                 (x3-x1)(x3-x2)

Then

   P(x) = y1*P1(x) + y2*P2(x) + y3*P3(x)

is a polynomial of degree at most 2 which passes through (x1,y1),
(x2,y2), and (x3,y3).

The same idea works for any list of points

   (x(1),y(1)), (x(2),y(2)), ..., (x(n),y(n)) .

This gives you a formula for the polynomial, called the Lagrange Form
of the Interpolating Polynomial. There are other possible forms, but
the Lagrange form is probably the simplest.

See also:

   Lagrange Polynomial Interpolation
   http://mathforum.org/library/drmath/view/53057.html

   Quadratic Interpolation
   http://mathforum.org/library/drmath/view/62552.html

If you have any questions, please write back and I will try to
explain further.

- Doctor Fenton, The Math Forum
  http://mathforum.org/dr.math/
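[The general construction described above can be sketched in Python.
This is an editorial illustration, not part of the original exchange;
the function name lagrange_poly is my own invention.]

```python
def lagrange_poly(xs, ys):
    """Return the Lagrange-form interpolating polynomial through
    the points (xs[0], ys[0]), ..., (xs[n-1], ys[n-1]).

    The k-th basis polynomial is 1 at xs[k] and 0 at every other
    xs[j], so the weighted sum passes through all given points.
    """
    n = len(xs)

    def P(x):
        total = 0.0
        for k in range(n):
            # Basis polynomial: product of (x - xj)/(xk - xj), j != k.
            basis = 1.0
            for j in range(n):
                if j != k:
                    basis *= (x - xs[j]) / (xs[k] - xs[j])
            total += ys[k] * basis
        return total

    return P

# Quadratic through (0, 1), (1, 3), (2, 7): y = x^2 + x + 1.
p = lagrange_poly([0, 1, 2], [1, 3, 7])
print(p(0), p(1), p(2))  # 1.0 3.0 7.0
```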
Date: 08/22/2003 at 04:46:06
From: Ricky Wu
Subject: Lagrange interpolation

To Dr. Math,

1) I have seen two things in my textbook called the Lagrange remainder
   and Lagrange multipliers. Do they relate to Lagrange interpolation?
2) Could you please show me how to use Excel to work Lagrange
   interpolation? (If it is difficult to show the steps here, could
   you please recommend a book to me?)
3) Is there any theory which underlies the method?

Regards,
Ricky Wu
Date: 08/22/2003 at 08:23:23
From: Doctor Fenton
Subject: Re: Lagrange interpolation

Hi Ricky,

I'm not sure what you mean by "Lagrange remainder." There is a term
"Lagrange form of the remainder" used with Taylor polynomials, and if
that is what you mean, then the answer is no in both cases. Taylor
polynomials do compute a polynomial that in many cases approximates
the function, but they use the value of the function and its
derivatives at one point, not the value of the function at many
points. Lagrange multipliers are used in "constrained optimization":
you want to find the largest value of a function of several variables,
e.g. f(x,y), on a curve g(x,y) = 0. This does not involve
interpolation. Lagrange was a very great and prolific mathematician,
and he did a lot of work, so you see his name in many different areas
of mathematics.

For actual computation, I would use one of the other forms I mentioned
in my reply, the Newton form. You have to compute what are called
"divided differences." Given a list of points x(0), x(1), ..., x(n)
and function values f(x(i)) at each of those points, the divided
differences f[x(0),x(1),...,x(k)] are defined recursively. The zeroth
divided differences are just the function values:

   f[x(i)] = f(x(i))

(note the square bracket on the left side; that quantity is being
DEFINED). First divided differences are defined by

                   f[x(1)] - f[x(0)]   f(x(1)) - f(x(0))
   f[x(0),x(1)] = ----------------- = ----------------- , etc.
                      x(1) - x(0)        x(1) - x(0)

Second divided differences are defined by

                        f[x(1),x(2)] - f[x(0),x(1)]
   f[x(0),x(1),x(2)] = ---------------------------- , etc.
                               x(2) - x(0)

That is, second divided differences are divided differences of first
divided differences. You make a Divided Difference Table:

   x(0)  f[x(0)]
                   f[x(0),x(1)]
   x(1)  f[x(1)]                f[x(0),x(1),x(2)]
                   f[x(1),x(2)]                   f[x(0),x(1),x(2),x(3)]
   x(2)  f[x(2)]                f[x(1),x(2),x(3)]
                   f[x(2),x(3)]
   x(3)  f[x(3)]

This table would work for cubic interpolation through 4 points.
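[The recursive definition above translates directly into code. This
is an editorial sketch, not part of the original exchange; the
function name divided_differences is my own. It uses the same three
points (0,1), (1,3), (2,7) worked through in the quadratic example.]

```python
def divided_differences(xs, fs):
    """Return the divided-difference table as a list of columns.

    table[0] holds the zeroth differences f[x(i)] = f(x(i));
    table[k] holds the order-k divided differences
    f[x(i),...,x(i+k)], each computed from the previous column as
    (prev[i+1] - prev[i]) / (x(i+k) - x(i)).
    """
    n = len(xs)
    table = [list(fs)]
    for k in range(1, n):
        prev = table[-1]
        col = [(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
               for i in range(n - k)]
        table.append(col)
    return table

# Table for f-values 1, 3, 7 at x = 0, 1, 2.
table = divided_differences([0, 1, 2], [1, 3, 7])
print(table)  # [[1, 3, 7], [2.0, 4.0], [1.0]]
```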
As an example, I'll do one for a quadratic through 3 points. You first
compute the first column of divided differences (which will be the
third column in the table):

   0  1                           0  1
         (3-1)/(1-0)                    2
   1  3                 or        1  3
         (7-3)/(2-1)                    4
   2  7                           2  7

and then compute the second column of divided differences:

   0  1                           0  1
         2                              2
   1  3     (4-2)/(2-0)   or      1  3     1
         4                              4
   2  7                           2  7  .

Starting in the second column, we read the coefficients down the
diagonal: 1, 2, 1.

   0 (1)
         (2)
   1  3      (1)
         4
   2  7  .

Then the interpolating polynomial is

   p(x) = f[x(0)] + f[x(0),x(1)](x-x(0))
                  + f[x(0),x(1),x(2)](x-x(0))(x-x(1))

or

   p(x) = 1 + 2(x-0) + 1(x-0)(x-1)
        = 1 + 2x + x(x-1)
        = 1 + 2x + x^2 - x
        = 1 + x + x^2 .

You can check that this polynomial does indeed produce the values you
were interpolating: p(0) = 1, p(1) = 3, and p(2) = 7. The same basic
idea works for any degree.

A good description of all this can be found in any elementary
numerical analysis book. My favorite is Conte and de Boor, _Elementary
Numerical Analysis_.

It is probably not a good idea to use very high degree polynomials, so
if you have 20 points, don't use a 19th degree interpolating
polynomial. Instead, use some form of "piecewise polynomial"
interpolation: for example, to interpolate a value at x, pick the four
points in the given list closest to x, say x(i-1), x(i), x(i+1),
x(i+2), and use a cubic through these four points.

I'm not sure what kind of "theory" you are looking for. I have
outlined the proof that given a list of (n+1) distinct x values, there
is a unique polynomial of degree at most n through those points: I
essentially wrote the formula for it. The remaining theory deals
mostly with the question of how well the interpolating polynomial fits
the function between the interpolation points. Conte and de Boor give
some attention to that, but there is more information available in
advanced texts. There are whole books devoted to the subject of
interpolation, and Lagrange interpolation is only one method.

- Doctor Fenton, The Math Forum
  http://mathforum.org/dr.math/
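[The Newton-form evaluation in the worked example above can be
sketched in Python. This is an editorial illustration, not code from
the original exchange; the function name newton_poly is my own, and
the coefficients 1, 2, 1 are the table diagonal from the example.]

```python
def newton_poly(xs, coeffs):
    """Return the Newton-form polynomial

        p(x) = c0 + c1*(x - xs[0]) + c2*(x - xs[0])*(x - xs[1]) + ...

    where coeffs = [c0, c1, c2, ...] are the divided differences read
    down the diagonal of the table.
    """
    def p(x):
        total = 0.0
        product = 1.0
        for i, c in enumerate(coeffs):
            total += c * product      # add c_i * (x-x0)...(x-x(i-1))
            product *= (x - xs[i])    # extend the running product
        return total
    return p

# Diagonal coefficients 1, 2, 1 from the worked table give
# p(x) = 1 + 2(x - 0) + 1(x - 0)(x - 1) = 1 + x + x^2.
p = newton_poly([0, 1, 2], [1, 2, 1])
print(p(0), p(1), p(2))  # 1.0 3.0 7.0
```

Evaluating this way (a Horner-like accumulation) avoids rebuilding
each product term from scratch.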
© 1994-2013 The Math Forum