Taylor Series

Field: Algebra
Image Created By: Peng Zhao
Website: Math Images Project



A Taylor series is the expansion of a function as an infinite polynomial series that approximates the value of the function around a certain point; a truncation of this series to finitely many terms is called a Taylor polynomial. For example, the animation at right shows the function y = sin(x) and its expanded Taylor series around the origin:


\sin (x) = x - {x^3 \over 3!} + {x^5 \over 5!} - {x^7 \over 7!} + \cdots + \sin({n\pi \over 2}) \cdot {x^n \over n!}


with n varying from 0 to 36. As we can see, the larger n is, the more terms we will have in the Taylor polynomial, and the more it looks like the original function. If n goes to infinity, then our approximating polynomial will be identical to the original function y = sin(x).
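To make this concrete, here is a minimal Python sketch (an illustration, not the applet behind the animation) that evaluates these partial sums; the coefficient sin(kπ/2) simply cycles through 0, 1, 0, -1:

    import math

    def sin_taylor(x, n):
        # Partial sum of the Maclaurin series of sin(x) through degree n.
        # sin(k*pi/2) cycles through 0, 1, 0, -1, so we use that pattern exactly.
        return sum((0, 1, 0, -1)[k % 4] * x**k / math.factorial(k)
                   for k in range(n + 1))

    for n in (1, 5, 9, 13):
        print(f"n = {n:2d}: {sin_taylor(2.0, n):.6f}   (sin(2) = {math.sin(2.0):.6f})")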


For the math behind this, please go to the More Mathematical Explanation section.



Basic Description


Figure 1-a: A modern TI calculator


Have you ever wondered how calculators work? How do they calculate square roots, sines, cosines, and exponentials? For example, if we type the sine of an angle into our calculator, it will magically spit out a number. We know this number must be related to our input in some way, but what exactly is this relationship? Is the calculator just reading off a list compiled by people who physically measured distances on graphs with rulers, or is there a more mathematical relationship?

The answer to the last question above is yes. There are algorithms that give an approximate value of sine using only the four basic operations (+, −, ×, ÷)[1]. Mathematicians studied these algorithms in order to calculate such functions by hand before the age of electronic calculators. One such algorithm is given by the Taylor series, named after the English mathematician Brook Taylor. Basically, Taylor showed that there is a way to expand any infinitely differentiable function into a polynomial series around a certain point. This process uses a fair amount of single-variable calculus, which will be explained in the More Mathematical Explanation section. Here we will only give some examples of Taylor series without explanation:


\sin (x) = x - {x^3 \over 3!} + {x^5 \over 5!} - {x^7 \over 7!} + {x^9 \over 9!} \cdots , expanded around the origin. x is in radians.


\cos (x) = 1 - {x^2 \over 2!} + {x^4 \over 4!} - {x^6 \over 6!} + {x^8 \over 8!} \cdots , expanded around the origin. x is in radians.


e^x = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + {x^5 \over 5!} \cdots , expanded around the origin. e is Euler's number, approximately equal to 2.71828···


\log (x) = (x-1) - {(x-1)^2 \over 2} + {(x-1)^3 \over 3} - {(x-1)^4 \over 4} \cdots , expanded around the point x = 1. Throughout this article, log denotes the natural logarithm.



With Taylor polynomials in hand, we can easily calculate the numerical value of these functions. For example, if we want to calculate:

Figure 1-b: 3-term approximation of the function y = cos(x)

Figure 1-c: The above approximation zoomed in 2,000 times


\cos 30^\circ



first we need to convert degrees to radians in order to use the Taylor series:

\cos 30^\circ = \cos {\pi \over 6} = \cos 0.523599 \cdots



then, substitute into the Taylor series of cosine above:

\cos (0.523599 rad) = 1 - {0.523599^2 \over 2!} + {0.523599^4 \over 4!} - \cdots



Here we use only 3 terms, since this is already enough to illustrate the idea. Notice that the right side of the equation above involves only the four simple operations, so we can easily calculate its value:

\cos (0.523599 rad) = 0.866053 \cdots



On the other hand, trigonometry tells us the exact numerical value of this particular cosine:

\cos 30^\circ = {\sqrt 3 \over 2} = 0.866025 \cdots



So our approximating value agrees with the actual value to the fourth digit, which is good accuracy for a three-term approximation. Of course, better accuracy can be achieved by using more terms in the Taylor series.
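The same hand calculation can be reproduced in a few lines of Python (a sketch of the method above, not the calculator's actual ROM algorithm):

    import math

    x = math.radians(30)   # convert 30 degrees to radians: pi/6 = 0.523599...
    # Three terms of the Maclaurin series of cos(x).
    approx = 1 - x**2 / math.factorial(2) + x**4 / math.factorial(4)
    print(approx)          # 0.866053...
    print(math.cos(x))     # 0.866025...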

We can reach the same conclusion if we graph the original cosine function and its approximation together, as shown in Figure 1-b. We can see that the original function and the approximating Taylor series are almost identical when x is small. In particular, the vertical line x = π/6 crosses the two graphs at nearly the same point, so there is not much difference between the exact value and the approximating value. However, this doesn't mean that these two functions are exactly the same. For example, when x grows larger, they start to deviate significantly from each other. What's more, if we zoom in on the graph at the intersection point, as shown in Figure 1-c, we can see that there is indeed a tiny difference between these two functions, which we cannot see at normal scale.

The calculator's algorithm is an improved version of this method. It may be more efficient, more accurate, and more general, but it still evaluates the numerical value of a polynomial series. This algorithm is built into the permanent memory (ROM) of electronic calculators, and is triggered every time we enter the function[2].


A More Mathematical Explanation

Note: understanding of this explanation requires: Calculus




How to derive Taylor Series from a given function


In this subsection, we are going to derive an explicit and general expression of a function's Taylor series, using only the derivatives of the given function f(x).

Mathematically, Taylor polynomials and Taylor series can be defined in the following way:

The Taylor polynomial of degree n for f at a, written as P_n(x), is the polynomial that has the same 0th to nth order derivatives as the function f(x) at the point a. In other words, the nth degree Taylor polynomial must satisfy:


P_n(a) = f(a) (the 0th order derivative of a function is just the function itself)


P_n'(a) = f'(a)


P_n''(a) = f''(a)
\vdots
P_n^{(n)}(a) = f^{(n)}(a)


in which P_n^{(k)}(a) is the kth order derivative of P_n(x) at a.


The Taylor series T(x) is just P_n(x) with infinitely large degree n. Note that f must be infinitely differentiable in order to have a Taylor series.


The following set of images show some examples of Taylor polynomials, from 0th order to 2nd order:

Figure 2-a: 0th degree Taylor polynomial

Figure 2-b: 1st degree Taylor polynomial

Figure 2-c: 2nd degree Taylor polynomial


From the definition above, the function f and its 0th order Taylor polynomial P_0(x) must have the same 0th order derivative at a. Since the 0th order derivative of a function is just the function itself, we have:

P_0^{(0)}(a) = P_0(a) = f(a)


which gives us the horizontal line shown in Figure 2-a. This is certainly not a very close approximation. So we need to add more terms.

The first order Taylor polynomial P_1(x) must satisfy:

\left\{ \begin{array}{rcl} P_1(a) & = & f(a) \\ P_1'(a) & = & f'(a) \end{array} \right.


which gives us the linear approximation shown in Figure 2-b. This approximation is much better than the 0th order one.

Similarly, the second degree Taylor polynomial P_2(x) must satisfy:

\left\{ \begin{array}{rcl} P_2(a) & = & f(a) \\ P_2'(a) & = & f'(a) \\ P_2''(a) & = & f''(a) \end{array} \right.


which gives us the quadratic approximation shown in Figure 2-c. This is the best approximation so far.

As we can see, the quality of our approximation increases as we add more terms to the Taylor polynomial. Since the Taylor series is the Taylor polynomial of infinitely large degree, we might expect it to be a perfect approximation, identical to the original function. (As we will see later, this is not always the case.)
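A minimal sketch of these successive approximations, using f(x) = cos(x) at a = 0, whose derivatives at the origin cycle through 1, 0, -1, 0:

    import math

    def taylor_cos(x, degree):
        # Taylor polynomial of cos at a = 0; the kth derivative at 0
        # cycles through 1, 0, -1, 0.
        derivs = (1, 0, -1, 0)
        return sum(derivs[k % 4] * x**k / math.factorial(k)
                   for k in range(degree + 1))

    for n in (0, 1, 2, 4, 6):
        print(f"P_{n}(1) = {taylor_cos(1.0, n):+.6f}   (cos(1) = {math.cos(1.0):+.6f})")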

Taylor proved that such a series T(x) must exist for every infinitely differentiable function f. In fact, without loss of generality, we can write the Taylor series of a function f around a as

Eq. 1         T(x) = a_0 + a_1 (x-a)+ a_2 (x-a)^2 + a_3 (x-a)^3 + \cdots


in which a_0, a_1, a_2, ... are unknown coefficients. What's more, from the definition of Taylor polynomials, we know that the function f and the Taylor series T(x) must have the same derivatives of all orders:

T(a) = f(a) , T'(a) = f'(a) , T''(a) = f''(a) , T ^{(3)} (a) = f ^{(3)} (a) \cdots


Using the constraints above, we can determine the values of all the unknown coefficients in Eq. 1. Just differentiate Eq. 1 n times, evaluate at x = a, and use T^{(n)}(a) = f^{(n)}(a) to get:

T ^{(n)} (a) = n! \cdot a_n = f ^{(n)}(a)


in which the terms before a_n vanish because their powers of (x - a) do not survive being differentiated n times, and the terms after a_n vanish because they retain factors of (x - a), which are 0 at x = a. So we are left with this simple equation, from which we can directly get:

a_n = {f ^{(n)}(a) \over n!}


If we agree to define 0! = 1, then this formula holds for all non-negative integers n from 0 to infinity. So we have determined the value of all unknown coefficients using derivatives of the given function f. Substitute them back into Eq. 1 to get an explicit expression of Taylor series:

Eq. 2         T(x) = f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots


or in summation notation,

 T(x)=\sum_{k=0}^{\infty} \frac {f^{(k)}(a)}{k!} \, (x-a)^{k}


This is the standard formula of Taylor series that we are going to use in the rest of this article. In most cases we would like to let a = 0 to get a neater expression:

Eq. 3         T(x) = f(0)+\frac {f'(0)}{1!} x + \frac{f''(0)}{2!} x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \cdots


Eq. 3 is also called the Maclaurin series, named after the Scottish mathematician Colin Maclaurin, who made extensive use of these series in the 18th century.
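Computer algebra systems can generate these expansions directly from Eq. 2. As a sketch (assuming the SymPy library is available), its series method reproduces the expansions quoted in the Basic Description section:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.sin(x).series(x, 0, 8))   # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
    print(sp.exp(x).series(x, 0, 5))   # 1 + x + x**2/2 + x**3/6 + x**4/24 + O(x**5)
    print(sp.log(x).series(x, 1, 4))   # expansion around x = 1, in powers of (x - 1)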


We have given some examples of Taylor series in the Basic Description section. They are easy to derive using Eq. 2: just substitute f and a into it, then compute the derivatives. Here we are going to do this in detail for only one function, the natural log. Other elementary functions, such as sin(x), cos(x), and e^x, can be treated in a similar manner.

Our natural log function is:

f (x) = \log (x)


Its derivatives are:

 f'(x)=1/x ,  f''(x)=-1/x^2 ,  f ^{(3)}(x)=2/x^3, \cdots  f ^{(k)}(x) = {{(-1)^{k-1} \cdot (k-1)!} \over x^k}


Since this function and its derivatives are not defined at x = 0, we cannot use the Maclaurin series for it. Instead we can let a = 1 and compute the derivatives at this point:

 f(1) = \log 1 = 0,  f'(1) = {1 \over 1} = 1,  f''(1) = -{ 1 \over 1^2} = -1,  f ^{(3)} (1) = {2 \over 1^3} = 2,  \cdots  f ^{(k)} (1) = {(-1)^{k-1} \cdot (k-1)!}


Figure 2-d: Taylor series for the natural log

Substitute these derivatives into Eq. 2, and we can get the Taylor series for  \log (x) centered at x = 1:

 \log (x) = (x-1) - {(x-1)^2 \over 2} + {(x-1)^3 \over 3} - {(x-1)^4 \over 4} + \cdots


What's more, we can avoid the cumbersome (x - 1)^k notation by introducing a new function g(x) = log(1 + x). Now we can expand it around x = 0:

 \log (1 + x) = x - {x^2 \over 2} + {x^3 \over 3} - {x^4 \over 4} + \cdots


The animation to the right shows this Taylor polynomial with degree n varying from 0 to 25. As we can see, the left part of this polynomial soon approximates the original function, as we expected. However, the right part demonstrates some strange behavior: it deviates farther from the function as n grows larger. This tells us that a Taylor series is not always a reliable approximation of the original function. The mere fact that two functions have the same derivatives at one point doesn't guarantee they are the same thing. More requirements are needed.
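We can watch this happen numerically. The following sketch sums the first n terms of the series for log(1 + x) at a point inside the region of convergence (x = 0.5) and at one outside it (x = 2):

    import math

    def log1p_taylor(x, n):
        # Partial sum of log(1+x) = x - x^2/2 + x^3/3 - ... through n terms.
        return sum((-1)**(k - 1) * x**k / k for k in range(1, n + 1))

    for n in (5, 15, 25):
        print(f"n = {n:2d}:  x = 0.5 -> {log1p_taylor(0.5, n):.8f}"
              f"   x = 2 -> {log1p_taylor(2.0, n):,.1f}")
    print("exact:", math.log(1.5), "and", math.log(3.0))

At x = 0.5 the partial sums settle toward log(1.5) ≈ 0.4055, while at x = 2 they swing ever more wildly.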

This leads us to the discussion of convergent and divergent sequences in the next subsection.


To converge or not to converge, that is the question


From the last example of natural log, we can see that sometimes Taylor series fail to approximate their original functions. This happens because the Taylor series for natural log is divergent when x > 1, while a valid polynomial approximation needs to be convergent. Here are the definitions of convergence and divergence:

Let our infinite sequence be:


A = a_1, a_2, a_3 , a_4 \cdots ,


and define its sum series to be:


s_n = a_1 + a_2 + \cdots + a_n


The sequence A is said to be convergent if the following limit exists:


 \lim_{n \to \infty} s_n = L


If this limit doesn't exist, then the sequence A is said to be divergent.



As we can see in the definition, whether a sequence is convergent or not depends on its sum series. If the sequence is "summable" as n goes to infinity, then it's convergent. If it's not, then it's divergent. The following are some examples of convergent and divergent sequences:

Seq. 1        2 = 1 + {1 \over 2} + {1 \over 4} + {1 \over 8} + {1 \over 16} \cdots , convergent.


Seq. 2        {\pi \over 4} = 1 - {1 \over 3} + {1 \over 5} - {1 \over 7} + {1 \over 9} \cdots , convergent.


Seq. 3        1 - 2 + 4 - 8 + 16 - 32 \cdots , divergent. Oscillates above and below 0 with increasing magnitude.


Seq. 4        1 + {1 \over 2} + {1 \over 3} + {1 \over 4} + {1 \over 5} \cdots , divergent. Adds up to infinity.


Seq. 1 comes directly from the summation formula of geometric sequences. Seq. 2 is a famous summable sequence discovered by Leibniz. We are going to briefly explain these sequences in the following sections.

Seq. 3 and Seq. 4 are divergent because neither has a finite sum. However, there is one important difference between them. On one hand, Seq. 3 has terms going to infinity, so it's not surprising that this one is not summable. On the other hand, Seq. 4 has terms going to zero, but they still have an infinitely large sum! This counter-intuitive result was proved by Johann Bernoulli and Jacob Bernoulli in the 17th century. In fact, this sequence is so famous in the history of math that mathematicians gave it a special name: the harmonic series. Click here for a proof of the divergence of the harmonic series[3].
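A numeric sketch of Seq. 4 makes the point: the terms shrink to zero, yet the partial sums keep growing (roughly like log n) and never settle on a limit:

    import math

    def harmonic(n):
        # Partial sum 1 + 1/2 + ... + 1/n of the harmonic series.
        return sum(1 / k for k in range(1, n + 1))

    for n in (10, 1000, 100000):
        print(f"n = {n:6d}:  s_n = {harmonic(n):.4f}   (log n = {math.log(n):.4f})")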


By definition, divergent series are not summable. So if we talk about the "sum" of these series, we may get ridiculous results. For example, look at the summation formula of geometric series:

{ 1 \over {1 - r}} = 1 + r + r^2 + r^3 + \cdots


This formula can be derived with a little algebraic manipulation, or by expanding the Maclaurin series of the left side. Click here for a simple proof[4]. However, what we want to show here is that this formula doesn't work for all values of r. For values of |r| less than 1, such as 1/2, we get reasonable results like:

2 = 1 + {1 \over 2} + {1 \over 4} + {1 \over 8} + {1 \over 16} \cdots


However, if the value of r is larger than 1, such as 2, things start to get weird:

-1 = 1 + 2 + 4 + 8 + 16 \cdots


How can we get a negative number by adding a bunch of positive integers? Well, if this case makes mathematicians uncomfortable, then they are going to be even more puzzled by the following one, in which r = -2:

{1 \over 3} = 1 - 2 + 4 - 8 + 16 \cdots


This is ridiculous: a sum of integers cannot possibly be a fraction. In fact, we are getting all these funny results because the last two series are divergent, so their sums are not defined. See the following images for a graphic representation of these series:

Figure 3-a: Geometric sequence with r = 1/2

Figure 3-b: Geometric sequence with r = 2

Figure 3-c: Geometric sequence with r = -2


In the images above, the blue lines trace the geometric sequences, and the red lines trace their sum series. As we can see, the first sequence, with r = 1/2, does have a finite sum, since its sum series converges to a finite value as n increases. However, the sum series of the other two sequences don't converge to anything; they never settle around a finite value. Thus the second and third sequences diverge, and their sums don't exist. Although we can still write down the summation formula in principle, the formula is meaningless, so it is no wonder we got those weird results.
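A quick numeric sketch shows the same thing: the partial sums for |r| < 1 settle down to the formula's value, while those for |r| > 1 never come close to it:

    def geometric_partial(r, n):
        # Partial sum 1 + r + r^2 + ... + r^(n-1).
        s, term = 0.0, 1.0
        for _ in range(n):
            s += term
            term *= r
        return s

    for r in (0.5, 2.0, -2.0):
        sums = [round(geometric_partial(r, n), 3) for n in (5, 10, 20)]
        print(f"r = {r:+.1f}: partial sums {sums},  formula 1/(1-r) = {1 / (1 - r):+.3f}")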

The same thing happens in the Taylor series of the natural log:

 \log (1 + x) = x - {x^2 \over 2} + {x^3 \over 3} - {x^4 \over 4} + \cdots


Let's look at an arbitrary term in this series: ±x^n / n. As n increases, the denominator grows linearly, while the numerator grows exponentially. It is a known fact that exponential growth eventually overrides linear growth whenever the absolute value of x is larger than one. So if x > 1, the terms x^n / n go to infinity, and this Taylor series is divergent. This is why we saw the abnormal behavior on the right side of Figure 2-d. In this "divergent zone", although we can still write down the polynomial, it's no longer a valid approximation of the function. For example, if we want to calculate the value of log 4, instead of writing:

\log (4) = \log (1 + 3) = 3 - {3^2 \over 2} + {3^3 \over 3} - {3^4 \over 4} \cdots (divergent)


we have to write:

\log (4) = \log (e \cdot {4 \over e}) = 1 + \log ({4 \over e}) = 1 + \log (1.47152 \cdots)


 = 1 + 0.47152 - {0.47152^2 \over 2} + {0.47152^3 \over 3} - {0.47152^4 \over 4} + \cdots (convergent)


in which we moved the calculation from the "divergent zone" into the "convergent zone" by using the identity log(a·b) = log(a) + log(b).
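Carrying this rescue out numerically (a sketch; the library logarithm is used only to check the answer):

    import math

    x = 4 / math.e - 1   # 0.47152..., safely inside the convergent zone
    series = sum((-1)**(k - 1) * x**k / k for k in range(1, 30))
    print(1 + series)    # approximates log(4)
    print(math.log(4))   # 1.3862943611...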


Why It's Interesting


As we have stated before, Taylor series can be used to derive many interesting series, which helped mathematicians determine the values of important mathematical constants such as \pi and e.

Approximating Pi


\pi, the ratio of a circle's circumference to its diameter, is one of the oldest, most important, and most interesting mathematical constants. The earliest documentation of \pi can be traced back to ancient Egypt and Babylon, where people used empirical values of \pi such as 25/8 = 3.1250 or (16/9)^2 ≈ 3.1605[5].

Figure 4-a: Archimedes' method to approximate π

The first recorded algorithm for rigorously calculating the value of \pi was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes. Archimedes computed upper and lower bounds of \pi by drawing regular polygons inside and outside a circle, and calculating the perimeters of the outer and inner polygons. He proved that 223/71 < \pi < 22/7 by using a 96-sided polygon, which gives us 2 accurate decimal digits: π ≈ 3.14[6].

Mathematicians continued to use this polygon method for the next 1,800 years. The more sides their polygons had, the more accurate their approximations would be. This approach peaked around 1600, when the Dutch mathematician Ludolph van Ceulen used a polygon with 2^62 sides to obtain the first 35 digits of \pi[7]. He spent a major part of his life on this calculation. In memory of his contribution, \pi is still sometimes called "the Ludolphine number".

However, mathematicians eventually had enough of trillion-sided polygons. Starting in the 17th century, they devised much better approaches for computing \pi, using calculus rather than geometry. Mathematicians discovered numerous infinite series associated with \pi, and the most famous among them is the Leibniz series:

{\pi \over 4} = 1 - {1 \over 3} + {1 \over 5} - {1 \over 7} + {1 \over 9} \cdots


We have seen the Leibniz series as an example of convergent series in the More Mathematical Explanation section. Here we are going to briefly explain how Leibniz got this result. This amazing sequence comes directly from the Taylor series of arctan(x):

Eq. 4a        \arctan (x) = x - {x^3 \over 3} + {x^5 \over 5} - {x^7 \over 7} + {x^9 \over 9} \cdots


We can get Eq. 4a by directly computing the derivatives of all orders for arctan(x) at x = 0, but the calculation involved is rather complicated. There is a much easier way to do this if we notice the following fact:

Eq. 4b        {{d \arctan (x)} \over dx} = {1 \over {1 + x^2}}


Recall that we gave the summation formula of geometric series in the More Mathematical Explanation section :

{ 1 \over {1 - r}} = 1 + r + r^2 + r^3 + r^4 \cdots , -1 < r < 1


If we substitute r = - x2 into the summation formula above, we can expand the right side of Eq. 4b into an infinite sequence:

Figure 4-b: Gottfried Wilhelm Leibniz, discoverer of the Leibniz series


{ 1 \over {1 + x^2}} = 1 - x^2 + x^4 - x^6 + x^8 \cdots


So Eq. 4b changes into:

{{d \arctan (x)} \over dx} = 1 - x^2 + x^4 - x^6 + x^8 \cdots


Integrating both sides gives us:

\arctan (x) = x - {x^3 \over 3} + {x^5 \over 5} - {x^7 \over 7} + {x^9 \over 9} \cdots + C


Letting x = 0 turns this equation into 0 = C. So the integration constant C vanishes, and we get Eq. 4a.

One may notice that, like Taylor series of many other functions, this series is not convergent for all values of x. It only converges for -1 ≤ x ≤ 1. Fortunately, this is just enough for us to proceed. Substituting x = 1 into it, we can get the Leibniz series:

{\pi \over 4} = 1 - {1 \over 3} + {1 \over 5} - {1 \over 7} + {1 \over 9} \cdots


The Leibniz series gives us a radically improved way to approximate \pi: no polygons, no square roots, just the four basic operations. However, this particular series is not suitable for computing \pi, since it converges too slowly. The first 1,000 terms of the Leibniz series give us only two accurate digits: π ≈ 3.14. This is horribly inefficient, and no mathematician would ever want to use this algorithm.

Fortunately, we can get series that converge much faster if we substitute smaller values of x , such as 1 \over \sqrt{3} , into Eq. 4a:

\arctan {1 \over \sqrt{3}} = {\pi \over 6} = {1 \over \sqrt{3}} - {1 \over {3 \cdot 3 \sqrt{3}}} + {1 \over {5 \cdot 3^2 \sqrt{3}}} - {1 \over {7 \cdot 3^3 \sqrt{3}}} \cdots


which gives us:

\pi = \sqrt{12}(1 - {1 \over {3 \cdot 3}} + {1 \over {5 \cdot 3^2}} - {1 \over {7 \cdot 3^3}} + \cdots)


This series is much more efficient than the Leibniz series, since there are powers of 3 in the denominators. The first 10 terms of it give us 5 accurate digits, and the first 100 terms give us about 50. Leibniz himself used the first 22 terms to compute an approximation of \pi correct to 11 decimal places: 3.14159265358.
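Here is a sketch of both series side by side; the powers of 3 in the denominators make the second converge far faster:

    import math

    def leibniz_pi(n):
        # pi from the Leibniz series: 4 * (1 - 1/3 + 1/5 - ...).
        return 4 * sum((-1)**k / (2 * k + 1) for k in range(n))

    def sqrt12_pi(n):
        # pi from arctan(1/sqrt(3)) = pi/6: sqrt(12) * sum of (-1)^k / ((2k+1) * 3^k).
        return math.sqrt(12) * sum((-1)**k / ((2 * k + 1) * 3**k) for k in range(n))

    print(leibniz_pi(1000))   # 3.1405..., matches pi to only 2 decimal places
    print(sqrt12_pi(10))      # 3.14159..., about 5 accurate digits after 10 terms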

However, mathematicians were still not satisfied with this efficiency. They kept substituting smaller values of x into Eq. 4a to get faster-converging series. Among them was Leonhard Euler, one of the greatest mathematicians of the 18th century. In his attempt to approximate \pi, Euler discovered the following non-intuitive formula:

Eq. 4c        \pi = 20 \arctan {1 \over 7} + 8 \arctan {3 \over 79}


Although Eq. 4c looks really weird, it is indeed an equality, not an approximation. The following hidden section shows how it is derived in detail:



Eq. 4c comes from the trigonometric identity of the tangent of two angles. Suppose we have 3 angles, \alpha, \beta, and \gamma that satisfy:


\gamma = \alpha - \beta


Then the trigonometric identity gives us:


\tan \gamma = \tan (\alpha - \beta) = {{\tan \alpha - \tan \beta} \over {1 + \tan \alpha \cdot \tan \beta}}


Let \tan \alpha = a , \tan \beta = b, and substitute into the equation above:


\tan \gamma = {{a - b} \over {1 + a \cdot b}} , or \gamma = \arctan {{a - b} \over {1 + a \cdot b}}


Recall that we have the relationship:


\alpha - \beta = \gamma


Change the angles into arctan functions:


\arctan(a)  - \arctan (b) = \arctan {{a - b} \over {1 + a \cdot b}}


If we move arctan(b) to the right side, we will get Euler's arctangent addition formula, which is the most important formula in this hidden section:


Eq. 4d        \arctan(a) = \arctan (b) + \arctan {{a - b} \over {1 + a \cdot b}}


What Eq. 4d does is take a large angle, arctan(a), and divide it into two smaller angles, as shown in Figure 4-c. From our previous discussion, we know that the series we use to estimate \pi converges faster when we plug in smaller angles. So this formula helps us get more efficient algorithms.


Figure 4-c: Dividing an angle

Euler himself used this formula to get his algorithm for estimating \pi. He started from a simple fact:


Step 1        {\pi \over 4} = \arctan 1


To divide this angle into smaller angles, we can plug a = 1 and b = 1/2 into Eq. 4d:


\arctan 1 = \arctan {1 \over 2} + \arctan {1 \over 3}


So it turns out that the angle left is arctan (1/3). Substituting this into Step 1 yields:

Figure 4-d: Euler's approximation of \pi


Step 2        {\pi \over 4} = \arctan {1 \over 2} + \arctan {1 \over 3}


Next, let's focus on the angle arctan (1/2). Plug a = 1/2 and b = 1/3 into Eq. 4d:


\arctan {1 \over 2} = \arctan {1 \over 3} + \arctan {1 \over 7}


Substitute this into Step 2:


Step 3        {\pi \over 4} = 2\arctan {1 \over 3} + \arctan {1 \over 7}


We can keep doing this, using Euler's arctangent addition formula to get smaller and smaller angles:


\arctan {1 \over 3} = \arctan {1 \over 7} + \arctan {2 \over 11} (a = 1/3 , b = 1/7)


Step 4        {\pi \over 4} = 3\arctan {1 \over 7} + 2\arctan {2 \over 11}


\arctan {2 \over 11} = \arctan {1 \over 7} + \arctan {3 \over 79} (a = 2/11 , b = 1/7)


Step 5        {\pi \over 4} = 5\arctan {1 \over 7} + 2\arctan {3 \over 79}


Here we have got Eq. 4c, the formula that Euler used to approximate \pi. Figure 4-d shows a graphic representation of these 5 steps.


We can certainly carry on to keep dividing it into even smaller angles, or try different values for a and b to get different series, but Euler stopped here because he thought these angles were small enough to give him an efficient algorithm.




The next step is to expand Eq. 4c using Taylor series, which allows us to do the numeric calculations:

\pi = 20 ({1 \over 7} - {1 \over 3 \cdot 7^3} + {1 \over 5 \cdot 7^5} - {1 \over 7 \cdot 7^7} \cdots)


+ 8 ({3 \over 79} - {3^3 \over 3 \cdot 79^3} + {3^5 \over 5 \cdot 79^5} - {3^7 \over 7 \cdot 79^7} \cdots)


This series converges so fast that each term gives more than one digit of \pi. Using this algorithm, it would take no more than a few days to calculate with pencil and paper the first 35 digits of \pi, which had cost Ludolph a major part of his life.
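A sketch of Euler's formula in action; ten terms of each arctangent series already exhaust the precision of ordinary floating-point arithmetic:

    def arctan_series(x, n):
        # arctan(x) = x - x^3/3 + x^5/5 - ..., first n terms (Eq. 4a).
        return sum((-1)**k * x**(2 * k + 1) / (2 * k + 1) for k in range(n))

    # Eq. 4c: pi = 20*arctan(1/7) + 8*arctan(3/79)
    pi_euler = 20 * arctan_series(1 / 7, 10) + 8 * arctan_series(3 / 79, 10)
    print(pi_euler)   # 3.141592653589793...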

Although Euler himself never undertook the calculation, this idea was developed and used by many other mathematicians of his time. In 1789, the Slovene mathematician Jurij Vega calculated the first 140 decimal places of \pi, of which the first 126 were correct. This record was broken in 1841, when William Rutherford calculated 208 decimal places, of which 152 were correct. By the time electronic digital computers were invented, \pi had been expanded to more than 500 digits. And we shouldn't forget that all of this started from the Taylor series of trigonometric functions.

Acknowledgement: most of the historical information in this section comes from this article: [8].



Approximating e


The mathematical constant e, approximately equal to 2.71828, is also called Euler's number. This important constant appears in calculus, differential equations, complex numbers, and many other branches of mathematics. What's more, it's also widely used in other subjects such as physics and engineering. So we would really like to know its exact value.

Figure 5-a: Definition of e



Mathematically,  e is defined as:

 e = \lim_{n \to \infty} (1 + {1 \over n}) ^n


In principle, we could have approximated  e using this definition. However, this method is so slow and inefficient that we are forced to find another one. For example, set n to 100 in the definition, and we can get:

 e \approx (1 + {1 \over 100}) ^{100} = 2.70481 \cdots


which gives us only 2 accurate digits. This is really, really horrible accuracy for an approximating algorithm. So we have to find another way to do this.

One possible way is to use the Taylor series of the function e^x, which has a very nice property:

\frac{d}{dx} e^x = e^x


The proof of this property can be found in almost every calculus textbook. It tells us that all derivatives of the exponential function are equal:

 f(x) = f'(x) = f''(x) = f ^{(3)}(x) = \cdots = e^x


and:

 f(0) = f'(0) = f''(0) = f ^{(3)}(0) = \cdots = 1


Substituting these derivatives into Eq. 2, the general formula of the Taylor series, we get:

e^x = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} \cdots


Let x = 1, and we can get another way to approximate  e :

e = 1 + 1 + {1 \over 2!} + {1 \over 3!} + {1 \over 4!} + \cdots


This sequence is strongly convergent, since there are factorials in the denominators, and factorials grow really fast as n increases. Just take the first 10 terms and we can get:

e \approx 1 + 1 + {1 \over 2!} + {1 \over 3!} + {1 \over 4!} + \cdots + {1 \over 9!} = 2.718281526 \cdots


The real value of e is 2.718281828···, so we have got 7 accurate digits! Compared to the approximation by definition, which gives us only two digits at n = 100, this algorithm is incredibly fast and efficient.
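A sketch comparing the two approximations side by side:

    import math

    # Approximation from the definition: (1 + 1/n)^n with n = 100.
    print((1 + 1 / 100) ** 100)                    # 2.7048138..., only 2 accurate digits

    # Approximation from the Taylor series: the first 10 terms of sum 1/k!.
    print(sum(1 / math.factorial(k) for k in range(10)))   # 2.7182815...
    print(math.e)                                          # 2.7182818...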

In fact, we can reach the same conclusion if we plot the function e^x and its two approximations together and see which one converges faster. We already have the Taylor series approximation:

Figure 5-b: Two approximations of e^x. The Taylor series is much faster.


 e^x = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + \cdots + {x^n \over n!}


We can also find the powers of e using the definition:

 e^x = (\lim_{n \to \infty} (1 + {1 \over n}) ^{n})^x = \lim_{n \to \infty} (1 + {1 \over n}) ^{nx} = \lim_{n \to \infty} (1 + {x \over nx}) ^{nx} = \lim_{{n'} \to \infty} (1 + {x \over {n'}}) ^ {n'}


 = \lim_{{n} \to \infty} (1 + {x \over {n}}) ^ {n}


in which n' = n·x. We can switch between n' and n because both of them go to infinity, and which one we use doesn't matter.

In Figure 5-b, these two approximations are graphed together to approximate the original function e^x. As we can see in the animation, the Taylor series approximates the original function much faster than the definition does.








References

  1. How does the calculator find values of sine, from HomeschoolMath. An article about calculator programs for approximating functions.
  2. Calculator, from Wikipedia. This article explains the structure of an electronic calculator.
  3. The Harmonic Series Diverges Again and Again, by Steven J. Kifowit and Terra A. Stamps. This article explains why the harmonic series is divergent.
  4. Harmonic Series, from Wolfram MathWorld. A simple proof that the harmonic series diverges.
  5. Pi, from Wolfram MathWorld. This article contains some history of π.
  6. Archimedes' Approximation of Pi. A thorough explanation of Archimedes' method.
  7. Digits of Pi, by Barry Cipra. Documentation of Ludolph's work is included here.
  8. How Euler Did It, by Ed Sandifer. This article discusses Euler's algorithm for estimating π.




