Taylor Series

From Math Images

Revision as of 15:30, 24 May 2013

Field: Algebra
Image Created By: Peng Zhao
Website: Math Images Project

A Taylor series is a power series representation of an infinitely differentiable function. In other words, certain functions, like the trigonometric functions, can be written as the sum of an infinite series. Taylor series, then, provide an alternative method of evaluating those functions.

An nth-degree Taylor polynomial P_n(x) for a function approximates the value of the function around a certain point by evaluating only up to the nth-degree term of the Taylor series. By doing so, we obtain a finite series, which can be summed but will not exactly match the infinite Taylor series. In the animation on the right, successive Taylor polynomials are compared to the actual function y = sin(x) using the following polynomial expansion:

\sin(x) \approx P_n(x) = x - {x^3 \over 3!} + {x^5 \over 5!} - {x^7 \over 7!} + \cdots + (-1)^{{n-1}\over2} {x^n \over n!}

In this example, n varies from 0 to 36. As we can see, as n becomes larger and there are more terms in the Taylor polynomial, the Taylor polynomial comes to "look" more like the original function; it becomes a progressively better approximation of the function sin(x). Since it is impossible to evaluate every term in an infinite series, we settle for using a Taylor polynomial with finite n as an approximation.
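The successive approximations described above can be reproduced numerically. Below is a minimal Python sketch (the name sin_taylor is ours, purely illustrative) that evaluates the degree-n Taylor polynomial of sine:

```python
from math import factorial, sin, pi

def sin_taylor(x, n):
    """Degree-n Taylor polynomial of sin around 0:
    x - x^3/3! + x^5/5! - ... (only odd powers appear)."""
    total = 0.0
    for k in range(1, n + 1, 2):       # odd degrees 1, 3, 5, ...
        sign = (-1) ** ((k - 1) // 2)  # signs alternate +, -, +, -
        total += sign * x ** k / factorial(k)
    return total

# Higher-degree polynomials track sin(x) more and more closely:
x = pi / 3
for n in (1, 5, 9, 13):
    print(n, sin_taylor(x, n), sin(x))
```

As n grows, the printed values approach sin(x), mirroring the animation.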

On this page, we will focus on how such approximations might be obtained, as well as how the error of such approximations might be bounded.

For the math behind this, please go to the More Mathematical Explanation section.


Basic Description

Taylor series are important because they allow us to compute functions that cannot otherwise be computed by conventional means. While the above Taylor polynomial for the sine function seems complicated and is annoying to evaluate, it is just the sum of terms composed of exponents and factorials, so the Taylor polynomial can be reduced to the basic operations of addition, subtraction, multiplication, and division. We can obtain an approximation by truncating the Taylor series into a finite-degree Taylor polynomial, which we can evaluate reliably.

Readers may, without knowing it, already be familiar with a particular type of Taylor series. Consider, for instance, an infinite geometric series with common ratio x:

{1 \over {1-x}} = 1 + x + x^2 + x^3 + \cdots for -1 < x < 1

The left side of the equation is the formula for the sum of a convergent geometric series. The right side is an infinite power series, so we have the Taylor series for f (x) = {1 \over {1-x}}. The More Mathematical Explanation will provide examples of some other Taylor series, as well as the process for deriving them from the original functions.

Using Taylor series, we can approximate infinitely differentiable functions. For example, imagine that we want to approximate the sum of an infinite geometric series with common ratio x = {1 \over 4}. By our knowledge of infinite geometric series, we know that the sum is  {1 \over {1 - {1 \over 4}}} = {4 \over 3} = 1.333 \cdots . Let's see how the Taylor approximation does:

 {P_2 ({1 \over 4}) =} 1 + {1 \over 4} + \left({1 \over 4}\right)^2 = 1.3125

This second order Taylor polynomial brings us somewhat close to the value of  4 \over 3 we obtained before. Let's observe how adding on another term can improve our estimate:

 {P_3 ({1 \over 4}) =} 1 + {1 \over 4} + \left({1 \over 4}\right)^2 + \left({1 \over 4}\right)^3 = 1.328125

As we would expect, this approximation is closer still to the expected value, but not exact. Adding more terms would improve this accuracy.
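These partial sums are simple to compute programmatically. A minimal Python sketch (the helper name geometric_partial_sum is ours):

```python
def geometric_partial_sum(x, n):
    """Degree-n Taylor polynomial of 1/(1-x): the sum 1 + x + ... + x^n."""
    return sum(x ** k for k in range(n + 1))

exact = 1 / (1 - 0.25)                 # the geometric sum formula: 4/3
print(geometric_partial_sum(0.25, 2))  # 1.3125, matching P_2 above
print(geometric_partial_sum(0.25, 3))  # 1.328125, matching P_3 above
print(exact)                           # 1.3333...
```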

At this point, you may be wondering what the use of a Taylor series approximation is if, as in the previous case, a more accurate estimate can actually be made by evaluating the left-hand side. It is important to note that this is not always the case. For instance, a more complicated Taylor series is that of cos(x):

 cos (x) = 1 - {x^2 \over 2!} + {x^4 \over 4!} - {x^6 \over 6!} + \cdots where x is in radians.

In this case, it is easy to select x so that we cannot directly evaluate the left-hand side of the equation. For such functions, making an approximation can be more valuable. For instance, consider:

Figure 1-b: 3-term approximation of the function y = cos(x)

Figure 1-c: the above approximation zoomed in 2,000 times

\cos 30^\circ

First we must convert degrees to radians in order to use the Taylor series:

\cos 30^\circ = \cos {\pi \over 6} \approx \cos 0.523599 \cdots

Then, substitute into the Taylor series of cosine above:

\cos (0.523599 rad) = 1 - {0.523599^2 \over 2!} + {0.523599^4 \over 4!} - \cdots

Here we used only 3 terms, since this should be enough to tell us something. Notice that the right side of the equation above involves only the four simple operations, so we can easily calculate its value:

\cos (0.523599 rad) \approx 0.866053 \cdots

On the other hand, trigonometry gives us the exact numerical value of this particular cosine:

\cos 30^\circ = {\sqrt 3 \over 2} \approx 0.866025 \cdots

So our approximating value agrees with the actual value to the fourth decimal, which is good accuracy for a basic approximation. Better accuracy can be achieved by using more terms in the Taylor series.
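The three-term computation above can be checked in a few lines of Python (cos_taylor is our own helper name):

```python
from math import cos, factorial, radians, sqrt

def cos_taylor(x, terms):
    """Partial sum of 1 - x^2/2! + x^4/4! - ... with the given number of terms."""
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(terms))

x = radians(30)            # convert degrees to radians first
approx = cos_taylor(x, 3)  # three terms, as in the text above
exact = sqrt(3) / 2        # the exact value of cos 30 degrees
print(approx, exact)       # they agree to about the fourth decimal place
```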

We reach the same conclusion if we graph the original cosine function and its approximation together, as shown in Figure 1-b. We can see that the original function and the approximating Taylor polynomial are almost identical when x is small. In particular, the line x = π/6 cuts the two graphs almost simultaneously, so there is not much difference between the exact value and the approximating value. However, this doesn't mean that the two functions are exactly the same: as x grows larger, they start to deviate significantly from each other. What's more, if we zoom in on the graph at the intersection point, as shown in Figure 1-c, we can see that there is indeed a difference between the two functions, which we cannot see at normal scale.

A More Mathematical Explanation

Note: understanding of this explanation requires calculus.

The general form of a Taylor series

In this subsection, we will derive a general formula for a function's Taylor series using the derivatives of a general function f(x). Taylor polynomials are defined as follows:

The Taylor polynomial of degree n for f at a, written as P _n (x), is the polynomial that has the same 0th- to nth-order derivatives as function f(x) at point a. In other words, the nth-degree Taylor polynomial must satisfy:

P _n (a) = f (a) (the 0th-order derivative of a function is itself)

P _n ' (a) = f ' (a)

P _n '' (a) = f '' (a)

\vdots

P _n ^{(n)} (a) = f^{(n)} (a)

in which P _n ^{(k)} (a) is the kth-order derivative of both P _n (x) and f (x) at a.

The Taylor series T (x) is the Taylor polynomial for which all derivatives at a are equal to those of  f (x) . Note again that our selection of  f (x) is limited to infinitely differentiable functions; this is why  T (x) must be an infinite series.

The following set of images shows some examples of Taylor polynomials, from 0th- to 2nd-order:

Figure 2-a: 0th-degree Taylor polynomial

Figure 2-b: 1st-degree Taylor polynomial

Figure 2-c: 2nd-degree Taylor polynomial

In order to construct a general formula for a Taylor series, we must start with what we know. Using the definition of a power series, we can write the Taylor series of a function f around a as

Eq. 1         T(x) = a_0 + a_1 (x-a)+ a_2 (x-a)^2 + a_3 (x-a)^3 + \cdots

in which a_0, a_1, a_2, ... are unknown coefficients. Our goal is to find these coefficients. From the definition of Taylor polynomials, we know that the function f and the Taylor series T(x) must have the same derivatives of all orders at a:

T(a) = f(a) , T'(a) = f'(a) , T''(a) = f''(a) , T ^{(3)} (a) = f ^{(3)} (a) \cdots

How might we use this fact? Let's attempt to evaluate the first few terms by taking the derivative of our general T(x):

T(a) = f(a) = a_0 + a_1(a-a) + a_2(a-a)^2 + a_3(a-a)^3 + \cdots = a_0

T'(a) = f'(a) = a_1 + 2a_2(a-a) + 3a_3(a-a)^2 + 4a_4(a-a)^3 + \cdots = a_1

T''(a) = f''(a) = 2a_2 + 3 \cdot 2 a_3 (a-a) + 4 \cdot 3 a_4 (a-a)^2 + 5 \cdot 4 a_5 (a-a)^3 + \cdots = 2a_2

T^{(3)}(a) = f^{(3)}(a) = 3 \cdot 2 a_3 + 4 \cdot 3 \cdot 2 a_4 (a-a) + 5 \cdot 4 \cdot 3 a_5 (a-a)^2 + \cdots = 3 \cdot 2 a_3

The pattern may now be recognizable. Because each derivative is evaluated at a, all terms but the constant term go to 0. Note then what happens after k derivatives. We get:

T ^{(k)} (a) = k! \cdot a_k = f ^{(k)}(a)

This step is important in understanding the Taylor series both practically and theoretically. The Taylor series approximates an infinitely differentiable function by exploiting the differentiability of polynomials. In particular, it ensures that derivatives of every order at a are the same for T(x) as for f(x). The k! is just a result of repeatedly differentiating the polynomial terms. From this equation, we easily obtain:

a_k = {f ^{(k)}(a) \over k!}

Since 0! = 1, this formula holds for all non-negative integers k. So, using derivatives, we have obtained an expression for all unknown coefficients of the given function f. Substituting them back into Eq. 1 gives an explicit expression for the Taylor series:

Eq. 2         T(x) = f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots

or, in summation notation,

 T(x)=\sum_{k=0} ^ {\infin } \frac {f^{(k)}(a)}{k!} \, (x-a)^{k}

This is the standard formula of Taylor series that we will use throughout the rest of this page. In many cases, it is convenient to let a = 0 to get a neater expression:

Eq. 3         T(x) = f(0)+\frac {f'(0)}{1!} x + \frac{f''(0)}{2!} x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \cdots

Eq. 3 is called the Maclaurin series after Scottish mathematician Colin Maclaurin, who made extensive use of these series in the 18th century.
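Eq. 2 translates directly into code once the derivative values of f at a are known. Below is a minimal Python sketch, assuming the derivatives are supplied as a list (taylor_eval is an illustrative name):

```python
from math import factorial, exp

def taylor_eval(derivs_at_a, a, x):
    """Evaluate the Taylor polynomial of Eq. 2,
    sum over k of f^(k)(a)/k! * (x - a)^k,
    where derivs_at_a[k] is the k-th derivative of f at a."""
    return sum(d / factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

# For f(x) = e^x around a = 0, every derivative at 0 equals 1,
# so the list of derivative values is simply [1, 1, 1, ...]:
derivs = [1.0] * 20
print(taylor_eval(derivs, 0.0, 1.0))   # close to e = 2.71828...
print(exp(1.0))
```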

Finding the Taylor series for a specific function

Many Taylor series can be derived using Eq. 2 by substituting in f and a. Here we will demonstrate this process in detail for the natural logarithm function. Other elementary functions, such as sin(x), cos(x), and e^x, can be treated similarly; their Taylor series will also be provided.

The natural log function is:

f (x) = \log (x)

Its derivatives are:

 f'(x)=1/x ,  f''(x)=-1/x^2 ,  f ^{(3)}(x)=2/x^3, \cdots  f ^{(k)}(x) = {{(-1)^{k-1} \cdot (k-1)!} \over x^k}

Since this function and its derivatives are undefined at x = 0, we cannot construct a Maclaurin series (a Taylor series centered at x = 0) for it. Note that, when choosing a, one must select a value not only for which the derivatives f (k)(a) are defined but at which they can be evaluated. Centering our Taylor series at a = 2 would not work because f (0)(2) = log (2) is unknown and, in fact, cannot even be approximated until we have obtained our Taylor series. For the natural log, it makes sense to let a = 1 and compute the derivatives at this point:

 f(1) = \log 1 = 0,  f'(1) = {1 \over 1} = 1,  f''(1) = -{ 1 \over 1^2} = -1,  f ^{(3)} (1) = {2 \over 1^3} = 2,  \cdots  f ^{(k)} (1) = {(-1)^{k-1} \cdot (k-1)!}

Figure 2-d: Taylor series for the natural log

Substitute these derivatives into Eq. 2, and we can get the Taylor series for  \log (x) centered at x = 1:

 \log (x) = (x-1) - {(x-1)^2 \over 2} + {(x-1)^3 \over 3} - {(x-1)^4 \over 4} + \cdots

We can avoid the cumbersome (x - 1)^k notation by introducing a new function g(x) = log (1 + x). Now we can expand our polynomial around x = 0:

 \log (1 + x) = x - {x^2 \over 2} + {x^3 \over 3} - {x^4 \over 4} + \cdots

The animation to the right shows this Taylor polynomial with degree n varying from 0 to 25. As we can see, the left part of this polynomial quickly comes to approximate the original function with fair accuracy. However, the right part exhibits some strange behavior: it seems to diverge farther from the function as n grows larger. This tells us that a Taylor series is not always a reliable approximation of the original function. The fact that they have the same derivatives at a doesn't guarantee that the Taylor series will be a suitable approximation at all values of x. Other factors need to be considered.

This is because power series, like the Taylor series for log(1 + x), do not necessarily converge for all values of x. The Taylor series for the natural log diverges when x > 1, while a valid polynomial approximation needs to converge. Consider the series again:

 \log (1 + x) = x - {x^2 \over 2} + {x^3 \over 3} - {x^4 \over 4} + \cdots

Let's consider an arbitrary term in this series: \pm x^n \over n. As n increases, the denominator grows linearly, while the numerator grows geometrically in n. When |x| > 1, this geometric growth eventually overwhelms the linear growth of the denominator, so the nth term grows without bound and the Taylor series diverges. This is why we observe the abnormal behavior of the right side of Figure 2-d. In this "divergent zone", although we can still write out and evaluate the polynomial, we cannot expect it to approximate the function.
Does this make it impossible to approximate log(x) for values of x greater than 2? It would seem that this would make our Taylor series useless in many cases. For example, imagine that we want to approximate log(4):

log (4) = log (1 + 3) = 3 - {3^2 \over 2} + {3^3 \over 3} - {3^4 \over 4} \cdots (divergent)

We know that log (4) is defined, but our Taylor series cannot approximate it. Instead, we can write:

log (4) = log (e \cdot {4 \over e}) = log (e) + log ({4 \over e}) \approx 1 + log (1.47152) = 1 + log (1 + 0.47152)

 = 1 + (0.47152 - {0.47152^2 \over 2} + {0.47152^3 \over 3} - {0.47152^4 \over 4} + \cdots) (convergent)

By using the identity log(a ·b ) = log (a ) + log (b ), we saved our approximation from the "divergent zone." Larger powers of e may be used as appropriate for larger values of x.
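This rescue of the divergent case is easy to check numerically. A Python sketch (log1p_taylor is our own helper name):

```python
from math import e, log

def log1p_taylor(x, terms):
    """Maclaurin series of log(1 + x): x - x^2/2 + x^3/3 - ...
    Converges only for -1 < x <= 1."""
    return sum((-1) ** (k - 1) * x ** k / k for k in range(1, terms + 1))

# Substituting x = 3 directly would diverge, so reduce the range first:
# log 4 = log(e * 4/e) = 1 + log(4/e), and 4/e - 1 is about 0.47152.
x = 4 / e - 1
approx = 1 + log1p_taylor(x, 40)
print(approx, log(4))   # the two values agree closely
```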

Using a similar process, we can obtain Taylor series for a variety of other functions, such as the following:

\sin (x) = x - {x^3 \over 3!} + {x^5 \over 5!} - {x^7 \over 7!} + {x^9 \over 9!} - \cdots , expanded around the origin. x is in radians.

\cos (x) = 1 - {x^2 \over 2!} + {x^4 \over 4!} - {x^6 \over 6!} + {x^8 \over 8!} - \cdots , expanded around the origin. x is in radians.

e^x = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + {x^5 \over 5!} + \cdots , expanded around the origin. e is Euler's number.

Note that the powers of each successive term in the Taylor series for sine and cosine increase by 2, and the terms alternate between positive and negative; this makes sense when we consider the successive derivatives of sin(x) and cos(x) at x = 0. The Taylor series for e^x follows from the fact that the derivative of e^x is itself. The derivation of these series using Eq. 2 is left to the reader.

Error bound of a Taylor series

If we truncate the Taylor series after its nth-degree term, how far can the resulting Taylor polynomial P_n(x) be from f(x)? A standard answer is the Lagrange form of the remainder: for some c between a and x,

f(x) - P_n (x) = {f^{(n+1)}(c) \over (n+1)!} (x-a)^{n+1}

so the size of the error is at most the maximum of |f^{(n+1)}| between a and x, times {|x-a|^{n+1} \over (n+1)!}.

Why It's Interesting

Figure 1-a: A modern TI calculator

Have you ever wondered how calculators determine square roots, sines, cosines, and exponentials? For instance, if you were to type \sin{\pi \over 2} or e^2 into your calculator, how does it determine which value to spit out? The number must be related to our input in some way, but what exactly is the relationship? Does the calculator just read from an index of known values? Is there a more mathematical and precise way for the calculator to evaluate these functions?

The answer to this latter question is yes. There are algorithms that give an approximate value of sine, for example, using only the four basic operations (+, −, ×, ÷)[1]. Before the age of electronic calculators, mathematicians studied these algorithms in order to approximate these functions manually. The Taylor series, named after English mathematician Brook Taylor, is one such way of making these approximations. Basically, Taylor showed that there is a way to expand any infinitely differentiable function into a polynomial series about a certain point. The power of the Taylor series is to approximate functions that cannot otherwise be calculated.

The calculator's algorithm uses this method to efficiently find a suitable approximation in the form of a polynomial series. Expanding enough terms for several digits of accuracy is easy for a computing device, even though Taylor series may look daunting and tedious to the naked eye. This algorithm is built into the permanent memory (ROM) of electronic calculators and is triggered when a function like sine or cosine is called[2].

As is shown in the More Mathematical Explanation, Taylor series can be used to derive many interesting and useful series. Some of these series have helped mathematicians to approximate the values of important irrational constants such as \pi and e.

Approximating pi

\pi, or the ratio of a circle's circumference to its diameter, is one of the oldest, most important, and most interesting mathematical constants. The earliest documentation of \pi can be traced back to ancient Egypt and Babylon, where people used empirical values of \pi such as 25/8 = 3.1250, or (16/9)^2 ≈ 3.1605[3].

Figure 4-a: Archimedes' method to approximate π

The first recorded algorithm for rigorously calculating the value of \pi was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes. Archimedes computed upper and lower bounds of \pi by drawing regular polygons inside and outside a circle, and calculating the perimeters of the outer and inner polygons. He proved that 223/71 < \pi < 22/7 by using a 96-sided polygon, which gives us 2 accurate decimal digits: π ≈ 3.14[4].

Mathematicians continued to use this polygon method for the next 1,800 years; the more sides their polygons had, the more accurate their approximations became. This approach peaked around 1600, when the Dutch mathematician Ludolph van Ceulen used a 2^62-sided polygon to obtain the first 35 digits of \pi[5]. He spent a major part of his life on this calculation. In memory of his contribution, \pi is sometimes still called "the Ludolphine number".

However, mathematicians have had enough of trillion-sided polygons. Starting from the 17th century, they devised much better approaches for computing \pi, using calculus rather than geometry. Mathematicians discovered numerous infinite series associated with \pi , and the most famous one among them is the Leibniz series:

{\pi \over 4} = 1 - {1 \over 3} + {1 \over 5} - {1 \over 7} + {1 \over 9} \cdots

We will explain how Leibniz got this amazing result and how it allowed him to approximate \pi.

This amazing series comes directly from the Taylor series of arctan(x):

Eq. 4a        \arctan (x) = x - {x^3 \over 3} + {x^5 \over 5} - {x^7 \over 7} + {x^9 \over 9} \cdots

We can get Eq. 4a by directly computing the derivatives of all orders for arctan(x) at x = 0, but the calculation involved is rather complicated. There is a much easier way to do this if we notice the following fact:

Eq. 4b        {{d \arctan (x)} \over dx} = {1 \over {1 + x^2}}

Recall that we gave the summation formula of the geometric series earlier:

{ 1 \over {1 - r}} = 1 + r + r^2 + r^3 + r^4 \cdots , -1 < r < 1

If we substitute r = -x^2 into the summation formula above, we can expand the right side of Eq. 4b into an infinite series:

Figure 4-b: Gottfried Wilhelm Leibniz, discoverer of the Leibniz series

{ 1 \over {1 + x^2}} = 1 - x^2 + x^4 - x^6 + x^8 \cdots

So Eq. 4b changes into:

{{d \arctan (x)} \over dx} = 1 - x^2 + x^4 - x^6 + x^8 \cdots

Integrating both sides gives us:

\arctan (x) = x - {x^3 \over 3} + {x^5 \over 5} - {x^7 \over 7} + {x^9 \over 9} \cdots + C

Letting x = 0, this equation becomes 0 = C, since arctan(0) = 0. So the constant of integration vanishes, and we get Eq. 4a.

One may notice that, like Taylor series of many other functions, this series is not convergent for all values of x. It only converges for -1 ≤ x ≤ 1. Fortunately, this is just enough for us to proceed. Substituting x = 1 into it, we can get the Leibniz series:

{\pi \over 4} = 1 - {1 \over 3} + {1 \over 5} - {1 \over 7} + {1 \over 9} \cdots

The Leibniz series gives us a radically improved way to approximate \pi: no polygons, no square roots, just the four basic operations. However, this particular series is not very efficient for computing \pi, since it converges rather slowly. The first 1,000 terms of Leibniz series give us only two accurate digits: π ≈ 3.14. This is horribly inefficient, and most mathematicians would prefer not to use this algorithm.
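The slow convergence claimed here is easy to verify. A short Python sketch (leibniz_pi is an illustrative name):

```python
from math import pi

def leibniz_pi(terms):
    """Approximate pi as 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

approx = leibniz_pi(1000)
print(approx, pi)   # after 1,000 terms, only about two decimals agree
```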

Fortunately, we can get series that converge much faster if we substitute smaller values of x , such as 1 \over \sqrt{3} , into Eq. 4a:

\arctan {1 \over \sqrt{3}} = {\pi \over 6} = {1 \over \sqrt{3}} - {1 \over {3 \cdot 3 \sqrt{3}}} + {1 \over {5 \cdot 3^2 \sqrt{3}}} - {1 \over {7 \cdot 3^3 \sqrt{3}}} \cdots

which gives us:

\pi = \sqrt{12}(1 - {1 \over {3 \cdot 3}} + {1 \over {5 \cdot 3^2}} - {1 \over {7 \cdot 3^3}} + \cdots)

This series is much more efficient than the Leibniz series, since there are powers of 3 in the denominators. The first 10 terms of it give us 5 accurate digits, and the first 100 terms give us 50. Leibniz himself used the first 22 terms to compute an approximation of pi correct to 11 decimal places as 3.14159265358.
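We can check this acceleration directly (sqrt12_pi is our own helper name for the series above):

```python
from math import pi, sqrt

def sqrt12_pi(terms):
    """pi = sqrt(12) * (1 - 1/(3*3) + 1/(5*3^2) - ...),
    the arctan(1/sqrt(3)) series."""
    return sqrt(12) * sum((-1) ** k / ((2 * k + 1) * 3 ** k)
                          for k in range(terms))

print(sqrt12_pi(10), pi)   # ten terms already give several correct digits
```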

However, mathematicians were still not satisfied with this efficiency. They kept substituting smaller values of x into Eq. 4a to get faster-converging series. Among them was Leonhard Euler, one of the greatest mathematicians of the 18th century. In his attempt to approximate \pi, Euler discovered the following non-intuitive formula:

Eq. 4c        \pi = 20 \arctan {1 \over 7} + 8 \arctan {3 \over 79}

Although Eq. 4c looks really weird, it is indeed an equality, not an approximation. The following hidden section shows how it is derived in detail:

Eq. 4c comes from the trigonometric identity for the tangent of a difference of two angles. Suppose we have 3 angles, \alpha, \beta, and \gamma, that satisfy:

\gamma = \alpha - \beta

Then the trigonometric identity gives us:

\tan \gamma = \tan (\alpha - \beta) = {{\tan \alpha - \tan \beta} \over {1 + \tan \alpha \cdot \tan \beta}}

Let \tan \alpha = a , \tan \beta = b, and substitute into the equation above:

\tan \gamma = {{a - b} \over {1 + a \cdot b}} , or \gamma = \arctan {{a - b} \over {1 + a \cdot b}}

Recall that we have the relationship:

\alpha - \beta = \gamma

Change the angles into arctan functions:

\arctan(a)  - \arctan (b) = \arctan {{a - b} \over {1 + a \cdot b}}

If we move arctan(b) to the right side, we will get Euler's arctangent addition formula, which is the most important formula in this hidden section:

Eq. 4d        \arctan(a) = \arctan (b) + \arctan {{a - b} \over {1 + a \cdot b}}

Eq. 4d takes a large angle, arctan(a), and divides it into two smaller angles, as shown in Figure 4-c. From our previous discussion, we know that the series we use to estimate \pi converges faster when we plug in smaller angles, so this formula helps us get more efficient algorithms.
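Eq. 4d is easy to sanity-check numerically for the angle splittings used below (split_angle is an illustrative name):

```python
from math import atan, isclose

def split_angle(a, b):
    """Right-hand side of Eq. 4d: arctan(b) + arctan((a - b)/(1 + a*b))."""
    return atan(b) + atan((a - b) / (1 + a * b))

# The splittings Euler uses: arctan 1 = arctan 1/2 + arctan 1/3, etc.
print(isclose(atan(1), split_angle(1, 1 / 2)))
print(isclose(atan(1 / 3), split_angle(1 / 3, 1 / 7)))
print(isclose(atan(2 / 11), split_angle(2 / 11, 1 / 7)))
```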

Figure 4-c: Dividing an angle

Euler himself used this formula to get his algorithm for estimating \pi. He started from a simple fact:

Step 1        {\pi \over 4} = \arctan 1

To divide this angle into smaller angles, we can plug a = 1 and b = 1/2 into Eq. 4d:

\arctan 1 = \arctan {1 \over 2} + \arctan {1 \over 3}

So it turns out that the angle left is arctan (1/3). Substituting this into Step 1 yields:

Figure 4-d: Euler's approximation of \pi

Step 2        {\pi \over 4} = \arctan {1 \over 2} + \arctan {1 \over 3}

Next, let's focus on the angle arctan (1/2). Plug a = 1/2 and b = 1/3 into Eq. 4d:

\arctan {1 \over 2} = \arctan {1 \over 3} + \arctan {1 \over 7}

Substitute this into Step 2:

Step 3        {\pi \over 4} = 2\arctan {1 \over 3} + \arctan {1 \over 7}

We can keep doing this, using the Euler's arctangent addition formula to get smaller and smaller angles:

\arctan {1 \over 3} = \arctan {1 \over 7} + \arctan {2 \over 11} (a = 1/3 , b = 1/7)

Step 4        {\pi \over 4} = 3\arctan {1 \over 7} + 2\arctan {2 \over 11}

\arctan {2 \over 11} = \arctan {1 \over 7} + \arctan {3 \over 79} (a = 2/11 , b = 1/7)

Step 5        {\pi \over 4} = 5\arctan {1 \over 7} + 2\arctan {3 \over 79}

Here we have obtained Eq. 4c, the formula that Euler used to approximate \pi. Figure 4-d shows a graphical representation of these 5 steps.

We can certainly carry on to keep dividing it into even smaller angles, or try different values for a and b to get different series, but Euler stopped here because he thought these angles were small enough to give him an efficient algorithm.

The next step is to expand Eq. 4c using Taylor series, which allows us to do the numeric calculations:

\pi = 20 ({1 \over 7} - {1 \over 3 \cdot 7^3} + {1 \over 5 \cdot 7^5} - {1 \over 7 \cdot 7^7} \cdots)

+ 8 ({3 \over 79} - {3^3 \over 3 \cdot 79^3} + {3^5 \over 5 \cdot 79^5} - {3^7 \over 7 \cdot 79^7} \cdots)

This series converges so fast that each term gives more than 1 digit of \pi. Using this algorithm, it would take no more than several days of pencil-and-paper work to calculate the first 35 digits of \pi, which Ludolph spent most of his life on.
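Combining Eq. 4c with the arctangent series Eq. 4a takes only a few lines of Python (arctan_series is our own helper name):

```python
from math import pi

def arctan_series(x, terms):
    """Eq. 4a: arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
               for k in range(terms))

# Eq. 4c: pi = 20 arctan(1/7) + 8 arctan(3/79)
approx = 20 * arctan_series(1 / 7, 10) + 8 * arctan_series(3 / 79, 10)
print(approx, pi)   # ten terms of each series already match pi very closely
```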

Although Euler himself never undertook the calculation, this idea was developed and used by many other mathematicians of his time. In 1789, the Slovene mathematician Jurij Vega calculated the first 140 decimal places of \pi, of which the first 126 were correct. This record was broken in 1841, when William Rutherford calculated 208 decimal places, of which 152 were correct. By the time of the invention of electronic digital computers, \pi had been expanded to more than 500 digits. And we shouldn't forget that all of this started from the Taylor series of trigonometric functions.

Acknowledgement: Most of the historical information in this section comes from this article[6].

Approximating e

The mathematical constant  e , approximately equal to 2.71828, is also called Euler's Number. This important constant appears in calculus, differential equations, complex numbers, and many other branches of mathematics. It's also widely used in other disciplines like physics and engineering. So we would really like to approximate its value as closely as possible.

Figure 5-a: Definition of e

 e is defined as:

 e = \lim_{n \to \infin} (1 + {1 \over n}) ^n

In principle, we can approximate  e using this definition. However, this method is slow and inefficient, so mathematicians have tried to find another one. For example, let n = 100 and substitute it into the definition. We get:

 e \approx (1 + {1 \over 100}) ^{100} = 2.70481 \cdots

This is accurate to only 2 digits, which is horrible accuracy for an approximating algorithm, so we have to find an alternative. One such alternative can be found using Taylor series. Using calculus, we can derive the Taylor series for e^x and use it to make our approximation.

e^x has a very convenient property:

\frac{d}{dx} e^x = e^x

The proof of this property can be found in almost every calculus textbook. It tells us that all derivatives of the exponential function are equal:

 f(x) = f'(x) = f''(x) = f ^{(3)}(x) = \cdots = e^x,


 f(0) = f'(0) = f''(0) = f ^{(3)}(0) = \cdots = 1

Substitute these derivatives into Eq. 2, the general formula of Taylor Series. We get:

e^x = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots

Let x = 1 to approximate  e :

e = 1 + 1 + {1 \over 2!} + {1 \over 3!} + {1 \over 4!} + \cdots

This sequence converges quickly, since there are factorials in the denominators of each term, and factorials grow really fast as n increases. Just take the first 10 terms and we can get:

e \approx 1 + 1 + {1 \over 2!} + {1 \over 3!} + {1 \over 4!} + \cdots + {1 \over 9!} = 2.7182815 \cdots

The real value of  e  is 2.718281828···, so we have obtained 7 accurate digits! Compared to the approximation by definition, which gives us only two digits at n = 100, this algorithm is incredibly fast and efficient.
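The comparison between the two approaches can be made concrete in code (the helper names are ours):

```python
from math import e, factorial

def e_by_series(terms):
    """Partial sum of the factorial series: 1 + 1 + 1/2! + ... ('terms' terms)."""
    return sum(1 / factorial(k) for k in range(terms))

def e_by_definition(n):
    """(1 + 1/n)^n, from the limit definition of e."""
    return (1 + 1 / n) ** n

print(e_by_series(10), e)       # about 7 accurate digits from 10 terms
print(e_by_definition(100), e)  # only about 2 accurate digits at n = 100
```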

In fact, we can get the same conclusion if we plot the function ex and its two approximations together, and see which one converges faster. We already have the Taylor series approximation:

Figure 5-b: Two approximations of e^x; the Taylor series converges much faster.

 e^x = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + \cdots + {x^n \over n!}

We can also find the powers of e using the definition:

 e^x = (\lim_{n \to \infin} (1 + {1 \over n}) ^{n})^x = \lim_{n \to \infin} (1 + {1 \over n}) ^{nx} = \lim_{n \to \infin} (1 + {x \over nx}) ^{nx} = \lim_{{n'} \to \infin} (1 + {x \over {n'}}) ^ {n'}

 = \lim_{{n} \to \infin} (1 + {x \over {n}}) ^ {n}

in which n' = n·x. We can switch between n' and n because both of them go to infinity, and which one we use doesn't matter.

In Figure 5-b, these two approximations are graphed together to approximate the original function ex. As we can see in the animation, Taylor series approximates the original function much faster than the definition does.



  1. How does the calculator find values of sine, from homeschoolmath. This is an article about calculator programs for approximating functions.
  2. Calculator, from Wikipedia. This article explains the structure of an electronic calculator.
  3. Pi, from Wolfram MathWorld. This article contains some history of Pi.
  4. Archimedes' Approximation of Pi. This is a thorough explanation of Archimedes' method.
  5. Digits of Pi, by Barry Cipra. Documentation of Ludolph's work is included here.
  6. How Euler Did It, by Ed Sandifer. This article talks about Euler's algorithm for estimating π.
