
Taylor Series


{{Image Description Ready |ImageName=Taylor Series |Image=Taylor Main.gif |ImageIntro=
A Taylor series is a power series representation of an infinitely differentiable function. In other words, certain functions, like the trigonometric functions, can be written as the sum of an infinite series. Taylor series, then, provide an alternative method of evaluating those functions.

An nth-degree Taylor polynomial P_n(x) for a function approximates the value of the function around a certain point by evaluating only up to the nth-degree term of the Taylor series. By doing so, we obtain a finite series, which can be summed but will not exactly match the infinite Taylor series. In the animation on the right, successive Taylor polynomials are compared to the actual function y = sin(x) using the following polynomial expansion:


\sin(x) \approx P_n(x) = x - {x^3 \over 3!} + {x^5 \over 5!} - {x^7 \over 7!} + \cdots + (-1)^{{n-1}\over2} {x^n \over n!}


In this example, n varies from 0 to 36. As we can see, as n becomes larger and there are more terms in the Taylor polynomial, the Taylor polynomial comes to "look" more like the original function; it becomes a progressively better approximation of the function sin(x). Since it is impossible to evaluate every term in an infinite series, we settle for using a Taylor polynomial with finite n as an approximation.
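To make this concrete, here is a minimal Python sketch (our own illustration, not part of the original animation; the helper name taylor_sin and the test point x = 2.0 are ours) that sums the odd-degree terms of P_n(x) and compares the result to math.sin:

```python
import math

def taylor_sin(x, n):
    """Evaluate the Taylor polynomial P_n(x) = x - x^3/3! + x^5/5! - ...
    for sin(x), keeping terms up to degree n (odd powers only)."""
    total = 0.0
    for k in range(1, n + 1, 2):                      # odd powers 1, 3, 5, ..., n
        total += (-1) ** ((k - 1) // 2) * x ** k / math.factorial(k)
    return total

x = 2.0
for n in (1, 3, 5, 7, 9):
    print(n, taylor_sin(x, n))                        # approaches sin(2.0)
print("exact:", math.sin(x))                          # about 0.9093
```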


On this page, we will focus on how such approximations might be obtained, as well as how the error of such approximations might be bounded.


For the math behind this, please go to the More Mathematical Explanation section.

|ImageDescElem=
Taylor series are important because they allow us to compute functions that cannot otherwise be computed by conventional means. While the above Taylor polynomial for the sine function looks complicated and is tedious to evaluate by hand, it is just a sum of terms built from powers and factorials, so it reduces to the basic operations of addition, subtraction, multiplication, and division. We can obtain an approximation by truncating the Taylor series to a finite-degree Taylor polynomial, which we can evaluate reliably.

Readers may, without knowing it, already be familiar with a particular type of Taylor series. Consider, for instance, an infinite geometric series with common ratio x:

{1 \over {1-x}} = 1 + x + x^2 + x^3 + \cdots for -1 < x < 1


The left side of the equation is the formula for the sum of a convergent geometric series. The right side is an infinite power series, so we have the Taylor series for f (x) = {1 \over {1-x}}. The More Mathematical Explanation will provide examples of some other Taylor series, as well as the process for deriving them from the original functions.

Using Taylor series, we can approximate infinitely differentiable functions. For example, imagine that we want to approximate the sum of an infinite geometric series with common ratio x = {1 \over 4}. By our knowledge of infinite geometric series, we know that the sum is  {1 \over {1 - {1 \over 4}}} = {4 \over 3} = 1.333 \cdots . Let's see how the Taylor approximation does:

 P_2 \left({1 \over 4}\right) = 1 + {1 \over 4} + \left({1 \over 4}\right)^2 = 1.3125


This second-order Taylor polynomial brings us somewhat close to the value of  4 \over 3 we obtained before. Let's observe how adding another term can improve our estimate:

 P_3 \left({1 \over 4}\right) = 1 + {1 \over 4} + \left({1 \over 4}\right)^2 + \left({1 \over 4}\right)^3 = 1.328125


As we would expect, this approximation is closer still to the expected value, but not exact. Adding more terms would improve this accuracy.
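These partial sums are easy to check by machine; the short Python sketch below is our own verification (the helper geometric_partial_sum is hypothetical), showing how successive partial sums approach 4/3:

```python
def geometric_partial_sum(x, n):
    """P_n(x) = 1 + x + x^2 + ... + x^n, the truncated geometric series."""
    return sum(x ** k for k in range(n + 1))

x = 0.25
print(geometric_partial_sum(x, 2))    # 1.3125
print(geometric_partial_sum(x, 3))    # 1.328125
print(geometric_partial_sum(x, 10))   # already very close to 4/3
print(1 / (1 - x))                    # exact sum: 1.3333...
```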

At this point, you may be wondering what the use of a Taylor series approximation is if, as in the previous case, a more accurate estimate can actually be made by evaluating the left-hand side. It is important to note that this is not always the case. For instance, a more complicated Taylor series is that of cos(x):

 \cos (x) = 1 - {x^2 \over 2!} + {x^4 \over 4!} - {x^6 \over 6!} + \cdots where x is in radians.


In this case, it is easy to select x so that we cannot directly evaluate the left-hand side of the equation. For such functions, making an approximation can be more valuable. For instance, consider:

Figure 1-b: 3-term approximation of the function y = cos(x)
Figure 1-c: The above approximation zoomed in 2,000 times


\cos 30^\circ



First we must convert degrees to radians in order to use the Taylor series:

\cos 30^\circ = \cos {\pi \over 6} \approx \cos 0.523599 \cdots



Then, substitute into the Taylor series of cosine above:

\cos (0.523599 \text{ rad}) = 1 - {0.523599^2 \over 2!} + {0.523599^4 \over 4!} - \cdots



Here we use only 3 terms, since this is already enough to give a useful approximation. Notice that the right side of the equation above involves only the four basic arithmetic operations, so we can easily calculate its value:

\cos (0.523599 \text{ rad}) \approx 0.866053 \cdots



On the other hand, trigonometry gives us the exact numerical value of this particular cosine:

\cos 30^\circ = {\sqrt 3 \over 2} \approx 0.866025 \cdots



So our approximating value agrees with the actual value to the fourth decimal, which is good accuracy for a basic approximation. Better accuracy can be achieved by using more terms in the Taylor series.
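The whole calculation fits in a few lines of Python; the sketch below simply restates the three-term approximation above (the variable names are ours) and prints the error:

```python
import math

x = math.radians(30)                          # 30 degrees = pi/6 ~ 0.523599 rad

# Three-term Taylor approximation of cosine: 1 - x^2/2! + x^4/4!
approx = 1 - x ** 2 / math.factorial(2) + x ** 4 / math.factorial(4)
exact = math.sqrt(3) / 2                      # cos 30 degrees = sqrt(3)/2

print(approx)                                 # ~0.866053
print(exact)                                  # ~0.866025
print(abs(approx - exact))                    # error on the order of 3e-5
```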

We reach the same conclusion if we graph the original cosine function and its approximation together, as shown in Figure 1-b. The original function and the approximating Taylor polynomial are almost identical when x is small. In particular, the line x = π/6 crosses the two graphs at nearly the same point, so there is little difference between the exact value and the approximating value. However, this does not mean that the two functions are exactly the same. For example, as x grows larger, they start to deviate significantly from each other. Moreover, if we zoom in on the graph at the intersection point, as shown in Figure 1-c, we can see that there is indeed a difference between the two functions, one too small to see at normal scale.

|ImageDesc=

The general form of a Taylor series


In this subsection, we will derive a general formula for a function's Taylor series using the derivatives of a general function f(x). Taylor polynomials are defined as follows:

The Taylor polynomial of degree n for f at a, written as P _n (x), is the polynomial that has the same 0th- to nth-order derivatives as function f(x) at point a. In other words, the nth-degree Taylor polynomial must satisfy:


P _n (a) = f (a) (the 0th-order derivative of a function is itself)


P _n ' (a) = f ' (a)


P _n '' (a) = f '' (a)
\vdots
P _n ^{(n)} (a) = f^{(n)} (a)


in which P _n ^{(k)} (a) is the kth-order derivative of both P _n (x) and f (x) at a.


The Taylor series  T (x)  is the series whose derivatives of every order at a are equal to those of  f (x) . Note again that our selection of  f (x)  is limited to infinitely differentiable functions; this is why  T (x)  must be an infinite series.
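To see the defining conditions in action, the following sympy sketch (our own check, assuming sympy is available; it uses the 2nd-degree polynomial 1 - x^2/2 for cos(x) taken from the series quoted earlier) verifies that P_2 and cos(x) have the same 0th-, 1st-, and 2nd-order derivatives at a = 0:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)
P2 = 1 - x**2 / 2            # 2nd-degree Taylor polynomial of cos(x) at a = 0

# P_2 must match f in its 0th-, 1st-, and 2nd-order derivatives at a = 0
for k in range(3):
    print(k, sp.diff(P2, x, k).subs(x, 0), sp.diff(f, x, k).subs(x, 0))
    # both columns read 1, 0, -1
```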


The following figures show some examples of Taylor polynomials, from 0th- to 2nd-degree:

Figure 2-a: 0th-degree Taylor polynomial
Figure 2-b: 1st-degree Taylor polynomial
Figure 2-c: 2nd-degree Taylor polynomial




In order to construct a general formula for a Taylor series, we must start with what we know. Using the definition of a power series, we can write the Taylor series of a function f around a as

Eq. 1         T(x) = a_0 + a_1 (x-a)+ a_2 (x-a)^2 + a_3 (x-a)^3 + \cdots


in which a_0, a_1, a_2, \cdots are unknown coefficients. Our goal is to find these coefficients. From the definition of Taylor polynomials, we know that the function f and the Taylor series T(x) must have the same derivatives of every order at a:

T(a) = f(a) , T'(a) = f'(a) , T''(a) = f''(a) , T ^{(3)} (a) = f ^{(3)} (a) \cdots


How might we use this fact? Let's attempt to evaluate the first few terms by taking the derivative of our general T(x):

T(a) = f(a) = a_0 + a_1(a-a) + a_2(a-a)^2 + a_3(a-a)^3 + \cdots = a_0


T'(a) = f'(a) = a_1 + 2a_2(a-a) + 3a_3(a-a)^2 + 4a_4(a-a)^3 + \cdots = a_1


T''(a) = f''(a) = 2a_2 + 3 \cdot 2 a_3 (a-a) + 4 \cdot 3 a_4 (a-a)^2 + 5 \cdot 4 a_5 (a-a)^3 + \cdots = 2a_2


T^{(3)}(a) = f^{(3)}(a) = 3 \cdot 2 a_3 + 4 \cdot 3 \cdot 2 a_4 (a-a) + 5 \cdot 4 \cdot 3 a_5 (a-a)^2 + \cdots = 3 \cdot 2 a_3


The pattern may now be recognizable. Because each derivative is evaluated at a, all terms but the constant term go to 0. Note then what happens after k derivatives. We get:

T ^{(k)} (a) = k! \cdot a_k = f ^{(k)}(a)


This step is important in understanding the Taylor series both practically and theoretically. The Taylor series approximates an infinitely differentiable function by exploiting the differentiability of polynomials: the coefficients can be chosen so that every order of derivative at a is the same for T(x) as for f(x). The k! is simply the result of repeatedly differentiating the power terms. From this equation, we easily obtain:

a_k = {f ^{(k)}(a) \over k!}


Since 0! = 1, this formula holds for all non-negative integers k. So, using derivatives, we have obtained an expression for all unknown coefficients of the given function f. Substituting them back into Eq. 1 gives an explicit expression for the Taylor series:

Eq. 2         T(x) = f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots


or, in summation notation,

 T(x)=\sum_{k=0} ^ {\infty} \frac {f^{(k)}(a)}{k!} \, (x-a)^{k}


This is the standard formula for the Taylor series that we will use throughout the rest of this page. In many cases, it is convenient to let a = 0 to get a neater expression:

Eq. 3         T(x) = f(0)+\frac {f'(0)}{1!} x + \frac{f''(0)}{2!} x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \cdots


Eq. 3 is called the Maclaurin series after Scottish mathematician Colin Maclaurin, who made extensive use of these series in the 18th century.
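Eq. 2 translates almost directly into code. The sympy sketch below (a minimal illustration, assuming sympy is available; the helper name taylor_coefficients is ours) computes the coefficients f^{(k)}(a)/k! for cos(x) about a = 0 and recovers the pattern behind the cosine series quoted earlier:

```python
import sympy as sp

x = sp.symbols('x')

def taylor_coefficients(f, a, n):
    """Return the Eq. 2 coefficients f^(k)(a) / k! for k = 0, ..., n."""
    return [sp.diff(f, x, k).subs(x, a) / sp.factorial(k) for k in range(n + 1)]

print(taylor_coefficients(sp.cos(x), 0, 6))
# [1, 0, -1/2, 0, 1/24, 0, -1/720], i.e. 1 - x^2/2! + x^4/4! - x^6/6! + ...
```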


Finding the Taylor series for a specific function

Many Taylor series can be derived using Eq. 2 by substituting in f and a. Here we will demonstrate this process in detail for the natural logarithm function. Other elementary functions, such as sin(x), cos(x), and e^x, can be treated similarly. Their Taylor series will also be provided.

The natural log function is:

f (x) = \log (x)


Its derivatives are:

 f'(x)=1/x ,  f''(x)=-1/x^2 ,  f ^{(3)}(x)=2/x^3, \cdots  f ^{(k)}(x) = {{(-1)^{k-1} \cdot (k-1)!} \over x^k}


Since this function and its derivatives are undefined at x = 0, we cannot construct a Maclaurin series (a Taylor series centered at x = 0) for it. Note that, when choosing a, one should select a value not only for which all derivatives f^{(k)}(a) exist but at which they can be evaluated. For instance, centering our Taylor series at a = 2 would not be helpful because f^{(0)}(2) = log(2) is unknown and, in fact, cannot even be approximated until we have obtained our Taylor series. For the natural log, it makes sense to let a = 1 and compute the derivatives at this point:

 f(1) = \log 1 = 0,  f'(1) = {1 \over 1} = 1,  f''(1) = -{ 1 \over 1^2} = -1,  f ^{(3)} (1) = {2 \over 1^3} = 2, \cdots  f ^{(k)} (1) = {(-1)^{k-1} \cdot (k-1)!}
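The closed-form pattern f^{(k)}(1) = (-1)^{k-1} (k-1)! can be spot-checked symbolically; the few sympy lines below are our own verification, not part of the original derivation:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log(x)

# f^(k)(1) should equal (-1)^(k-1) * (k-1)! for k = 1, 2, 3, ...
for k in range(1, 6):
    derivative_at_1 = sp.diff(f, x, k).subs(x, 1)
    closed_form = (-1) ** (k - 1) * sp.factorial(k - 1)
    print(k, derivative_at_1, closed_form)    # both columns: 1, -1, 2, -6, 24
```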

|Field=Algebra |InProgress=Yes }}
