Taylor Series
From Math Images
Revision as of 11:24, 23 May 2013
A Taylor series is a power series representation of an infinitely differentiable function. In other words, certain functions, like the trigonometric functions, can be written as the sum of an infinite series. Taylor series, then, provide an alternative method of evaluating those functions.
 An n^{th}-degree Taylor polynomial P_{n}(x) for a function approximates the value of the function around a certain point by evaluating only up to the n^{th}-degree term of the Taylor series. By doing so, we obtain a finite series, which can be summed but will not exactly match the infinite Taylor series. In the animation on the right, successive Taylor polynomials are compared to the actual function y = sin(x) using the following polynomial expansion:
 sin(x) ≈ P_{n}(x) = x - x^{3}/3! + x^{5}/5! - x^{7}/7! + ··· + (-1)^{(n-1)/2} x^{n}/n! (n odd)
 In this example, n varies from 0 to 36. As we can see, as n becomes larger and there are more terms in the Taylor polynomial, the Taylor polynomial comes to "look" more like the original function; it becomes a progressively better approximation of the function sin(x). If n were to go to infinity, the approximating polynomial would become identical to both the Taylor series and the original function y = sin(x). Since we cannot actually add up infinitely many terms, we settle for using a Taylor polynomial with finite n as an approximation.
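The truncation described above is easy to experiment with. Here is a short Python sketch (the helper name sin_taylor is ours, not from the page) that sums the odd-degree terms of the sine expansion and compares the result with math.sin:

```python
import math

def sin_taylor(x, n):
    """n-th degree Taylor polynomial of sin about 0: only odd-degree terms survive."""
    total = 0.0
    for k in range(1, n + 1, 2):  # odd degrees 1, 3, 5, ..., n
        total += (-1) ** ((k - 1) // 2) * x ** k / math.factorial(k)
    return total

x = 1.2
for n in (1, 5, 9, 13):
    print(n, sin_taylor(x, n))
print("math.sin:", math.sin(x))
```

Each additional pair of terms brings the polynomial visibly closer to the library value, mirroring the animation.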
On this page, we will focus on how such approximations can be obtained, as well as how the error of such approximations can be bounded.
 For the math behind this, please go to the More Mathematical Explanation section.
Basic Description
Taylor series are important because they allow us to compute functions that cannot otherwise be computed. While the above Taylor polynomial for the sine function may seem complicated and may be annoying to evaluate, it is just the sum of terms composed of exponents and factorials, so the Taylor polynomial can be reduced to addition, subtraction, multiplication, and division. We can obtain an approximation by writing the function as a polynomial, which we know how to evaluate.
Readers may already be familiar with a particular type of Taylor series. Consider, for instance, an infinite geometric series with common ratio x:
 1/(1 - x) = 1 + x + x^{2} + x^{3} + ··· for -1 < x < 1
The right side of the equation is an infinite power series, so we have the Taylor series for f(x) = 1/(1 - x), the sum of the infinite geometric series.
The More Mathematical Explanation will provide examples of some other Taylor series, as well as a process for deriving them from the original functions.
Using Taylor series, we can approximate infinitely differentiable functions. For example, let's approximate cos(30°):
First we must convert degrees to radians in order to use the Taylor series: 30° = π/6 ≈ 0.5236 radians.
Then, substitute into the Taylor series of cosine:
 cos(π/6) ≈ 1 - (π/6)^{2}/2! + (π/6)^{4}/4!
Here we only used 3 terms, since this should be enough to tell us something. Notice that the right side of the equation above involves only the four simple operations, so we can easily calculate its value:
 cos(π/6) ≈ 1 - 0.13708 + 0.00313 = 0.86605
On the other hand, trigonometry gives us the exact numerical value of this particular cosine:
 cos(π/6) = √3/2 ≈ 0.86603
So our approximating value agrees with the actual value to the fourth decimal, which is good accuracy for a basic approximation. Better accuracy can be achieved by using more terms in the Taylor series.
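The arithmetic above is easy to reproduce; a minimal Python sketch of the 3-term approximation of cos(30°):

```python
import math

x = math.pi / 6  # 30 degrees converted to radians
# Three-term Taylor polynomial of cosine about 0
approx = 1 - x**2 / math.factorial(2) + x**4 / math.factorial(4)
exact = math.sqrt(3) / 2  # exact value of cos(30 deg) from trigonometry
print(approx)  # ~0.86605
print(exact)   # ~0.86603
```

Adding a fourth term, -(π/6)^{6}/6!, would shrink the error by roughly another two orders of magnitude.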
We can get the same conclusion if we graph the original cosine function and its approximation together, as shown in Figure 1b. We can see that the original function and the approximating Taylor series are almost identical when x is small. In particular, the line x = π/6 cuts the two graphs almost simultaneously, so there is not much difference between the exact value and the approximating value. However, this doesn't mean that these two functions are exactly the same. For example, when x grows larger, they start to deviate significantly from each other. What's more, if we zoom in on the graph at the intersection point, as shown in Figure 1c, we can see that there is indeed a difference between these two functions, one that we cannot see at normal scale.
A More Mathematical Explanation
 Note: understanding of this explanation requires: Calculus
The General Form of a Taylor Series
In this subsection, we will derive a general formula for a function's Taylor series using the derivatives of a general function f(x). Taylor polynomials are defined as follows:
 The Taylor polynomial of degree n for f at a, written as P_{n}(x), is the polynomial that has the same 0^{th}- to n^{th}-order derivatives as the function f(x) at the point a. In other words, the n^{th}-degree Taylor polynomial must satisfy:
 P_{n}(a) = f(a) (the 0^{th}-order derivative of a function is itself)
 P_{n}^{(k)}(a) = f^{(k)}(a) for k = 1, 2, ···, n, in which the superscript (k) denotes the k^{th}-order derivative of both P_{n} and f at a.
 The Taylor series is just P_{n}(x) with infinitely large degree n. Notice that f must be infinitely differentiable in order to have a Taylor series; every derivative of f must exist and be equal to the corresponding derivative of the Taylor series.
The following set of images show some examples of Taylor polynomials, from 0^{th} to 2^{nd}order:



From the definition above, the function f and its 0^{th}-order Taylor polynomial must have the same 0^{th}-order derivatives at a. Since the 0^{th}-order derivative of a function is just itself by definition, we have:
 P_{0}(x) = f(a),
which gives us the horizontal line shown in Figure 2a. This is certainly not a very close approximation. So we need to add more terms.
The first-order Taylor polynomial must satisfy P_{1}(a) = f(a) and P_{1}'(a) = f'(a), so:
 P_{1}(x) = f(a) + f'(a)(x - a),
which gives us the linear approximation shown in Figure 2b. This approximation is better than the 0^{th}order one, but still not very usable far from a.
Similarly, the second-degree Taylor polynomial must also match the second derivative, P_{2}''(a) = f''(a), so:
 P_{2}(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^{2},
which gives us the quadratic approximation shown in Figure 2c. This approximation is better still.
As we can see, the quality of our approximation increases as we add more terms to the Taylor polynomial. Since the Taylor series is the Taylor polynomial of infinitely large degree, it should be a perfect approximation: identical to the original function at every point.
Taylor proved that such a series must exist for every infinitely differentiable function f. In fact, without loss of generality, using the definition of a power series, we can write the Taylor series of a function f around a as:
 f(x) = a_{0} + a_{1}(x - a) + a_{2}(x - a)^{2} + a_{3}(x - a)^{3} + ··· (Eq. 1)
in which a_{0}, a_{1}, a_{2} ... are unknown coefficients. Our goal is to find these coefficients. From the definition of Taylor polynomials, we know that the function f and its Taylor series must have the same derivatives of all orders at a:
 f(a) = T(a), f'(a) = T'(a), f''(a) = T''(a), f'''(a) = T'''(a), ···
where T(x) denotes the series on the right side of Eq. 1.
Using the constraints above, we can determine the value of all unknown coefficients in Eq. 1. Just take the k^{th} derivative of both sides of Eq. 1 and evaluate it at x = a, and we get:
 f^{(k)}(a) = k!·a_{k}
Note that the Taylor series approximates an infinitely differentiable function by exploiting qualities of the differentiability of polynomials. With each derivative, the constant term goes to 0, and every other term, its order being decreased by one, is multiplied by its previous order. The terms before a_{k} "vanish" because their associated power of (x - a) doesn't "survive" taking k derivatives. The terms after a_{k} vanish because they still contain (x - a) to some power, so when evaluated at x = a, they go to 0. So we are left with this simple equation, from which we can directly get:
 a_{k} = f^{(k)}(a)/k!
Since 0! = 1, this formula holds for all nonnegative integers k from 0 to infinity. So, using derivatives, we have obtained an expression for all unknown coefficients of the given function f. Substitute them back into Eq. 1 to get an explicit expression of the Taylor series:
 f(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^{2} + (f'''(a)/3!)(x - a)^{3} + ···
or,
 f(x) = Σ_{k=0}^{∞} (f^{(k)}(a)/k!)·(x - a)^{k} (Eq. 2)
This is the standard formula of the Taylor series that we will use throughout the rest of this page. In many cases, it is convenient to let a = 0 to get a neater expression:
 f(x) = Σ_{k=0}^{∞} (f^{(k)}(0)/k!)·x^{k} (Eq. 3)
Eq. 3 is called the Maclaurin series after Scottish mathematician Colin Maclaurin, who made extensive use of these series in the 18th century.
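Eq. 3 can be turned directly into a small routine: given a list of derivative values f^{(k)}(0), sum f^{(k)}(0)/k!·x^{k}. A sketch in Python (the function name maclaurin is ours), tried on sine, whose derivatives at 0 cycle through 0, 1, 0, -1:

```python
import math

def maclaurin(derivs_at_0, x):
    """Evaluate the Maclaurin polynomial sum f^(k)(0)/k! * x^k (Eq. 3)."""
    return sum(d * x**k / math.factorial(k) for k, d in enumerate(derivs_at_0))

# Derivatives of sin at 0 cycle: sin(0)=0, cos(0)=1, -sin(0)=0, -cos(0)=-1, ...
cycle = [0, 1, 0, -1]
derivs = [cycle[k % 4] for k in range(20)]
print(maclaurin(derivs, 0.7))
print(math.sin(0.7))
```

Feeding in a different derivative pattern (all 1s for e^{x}, the cosine cycle 1, 0, -1, 0, ...) reproduces the other classic expansions.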
Finding a Taylor series for a specific function
Many Taylor series can be derived using Eq. 2  just substitute f and a into it, then compute the derivatives. Here we are going to do this in detail for the natural logarithm function. Other elementary functions, such as sin(x), cos(x), and e ^{x}, can be treated in a similar manner, and will also be provided.
The natural log function is:
 f(x) = log(x)
Its derivatives are:
 f'(x) = 1/x, f''(x) = -1/x^{2}, f'''(x) = 2/x^{3}, ···
Since this function and its derivatives are not defined at x = 0, we cannot use Maclaurin series for it. Instead we can let a = 1 and compute the derivatives at this point:
 f(1) = 0, f'(1) = 1, f''(1) = -1, f'''(1) = 2, f''''(1) = -6, ···
Substitute these derivatives into Eq. 2, and we can get the Taylor series for log(x) centered at x = 1:
 log(x) = (x - 1) - (x - 1)^{2}/2 + (x - 1)^{3}/3 - (x - 1)^{4}/4 + ···
What's more, we can avoid the cumbersome (x - 1)^{k} notation by introducing a new function g(x) = log (1 + x). Now we can expand it around x = 0:
 log(1 + x) = x - x^{2}/2 + x^{3}/3 - x^{4}/4 + ···
The animation to the right shows this Taylor polynomial with degree n varying from 0 to 25. As we can see, the left part of this polynomial soon approximates the original function, as we expected. However, the right part demonstrates some strange behavior: it seems to diverge farther away from the function as n grows larger. This tells us that a Taylor series is not always a reliable approximation of the original function. The mere fact that they have the same derivatives at one point doesn't guarantee that they are the same thing; more requirements are needed.
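This two-sided behavior can be seen numerically as well as graphically. A small Python sketch (log1p_series is our name) that sums the series for log(1 + x) inside and outside its interval of convergence:

```python
import math

def log1p_series(x, n):
    """Partial sum of x - x^2/2 + x^3/3 - ... up to n terms."""
    return sum((-1) ** (k + 1) * x**k / k for k in range(1, n + 1))

# Inside the interval of convergence, the partial sums settle on log(1 + x)...
print(log1p_series(0.5, 50), math.log(1.5))
# ...but outside it (|x| > 1), they blow up instead of settling down.
print(log1p_series(1.5, 50))
```

With x = 0.5 the 50-term sum already matches the library value to machine precision; with x = 1.5 the "sum" swings to values in the millions.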
This leads us to the discussion of convergent and divergent sequences in the next subsection.
Here we will give some examples of Taylor series without explanation. The following may be derived through the process explained in the More Mathematical Explanation:
 sin(x) = x - x^{3}/3! + x^{5}/5! - x^{7}/7! + x^{9}/9! - ···, expanded around the origin. x is in radians.
 cos(x) = 1 - x^{2}/2! + x^{4}/4! - x^{6}/6! + x^{8}/8! - ···, expanded around the origin. x is in radians.
 e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + x^{4}/4! + x^{5}/5! + ···, expanded around the origin. e is Euler's number, approximately equal to 2.71828···
 log(1 + x) = x - x^{2}/2 + x^{3}/3 - x^{4}/4 + ···, expanded around the origin; converges for -1 < x ≤ 1.
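These expansions are easy to verify against a math library; a short Python sketch that sums a handful of terms of each series and prints the library value next to it:

```python
import math

def partial(terms):
    """Sum a list of series terms."""
    return sum(terms)

x = 0.8
sin_terms = [(-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(10)]
cos_terms = [(-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(10)]
exp_terms = [x ** k / math.factorial(k) for k in range(20)]
log_terms = [(-1) ** (k + 1) * x ** k / k for k in range(1, 200)]

print(partial(sin_terms), math.sin(x))
print(partial(cos_terms), math.cos(x))
print(partial(exp_terms), math.exp(x))
print(partial(log_terms), math.log(1 + x))
```

Note how few terms the factorial-denominator series need compared with the log series, whose denominators grow only linearly.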
To converge or not to converge, this is the question
From the last example of natural log, we can see that sometimes Taylor series fail to approximate their original functions. This happens because the Taylor series for natural log is divergent when |x| > 1, while a valid polynomial approximation needs to be convergent. Here are the definitions of convergence and divergence:
 Let our infinite sequence be:
 a_{1}, a_{2}, a_{3}, ···,
 and define its sum series to be:
 S_{n} = a_{1} + a_{2} + ··· + a_{n}
 The sequence is said to be convergent if the following limit exists:
 lim_{n→∞} S_{n} = S
 If this limit doesn't exist, then the series is said to be divergent.
As we can see in the definition, whether a sequence is convergent or not depends on its sum series. If the sequence is "summable" when n goes to infinity, then it's convergent. If it's not, then it's divergent. Following are some examples of convergent and divergent sequences:
 Seq. 1: 1 + 1/2 + 1/4 + 1/8 + ··· = 2, convergent.
 Seq. 2: 1 - 1/3 + 1/5 - 1/7 + ··· = π/4, convergent.
 Seq. 3: 1 - 2 + 3 - 4 + ···, divergent. Vibrates above and below 0 with increasing magnitudes.
 Seq. 4: 1 + 1/2 + 1/3 + 1/4 + ···, divergent. Adds up to infinity.
Seq. 1 comes directly from the summation formula of geometric sequences. Seq. 2 is a famous summable sequence discovered by Leibniz. We are going to briefly explain these sequences in the following sections.
Seq. 3 and Seq. 4 are divergent because both of them add up to infinity. However, there is one important difference between them. On one hand, Seq. 3 has terms going to infinity, so it's not surprising that this one is not summable. On the other hand, Seq. 4 has terms going to zero, but they still have an infinitely large sum! This counterintuitive result was first proved by Johann Bernoulli and Jacob Bernoulli in the 17th century. In fact, this sequence is so epic in the history of math that mathematicians gave it a special name: the harmonic series. Click here for a proof of the divergence of the harmonic series^{[1]}.
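The harmonic series' slow but unbounded growth is easy to see numerically: its partial sums grow roughly like log n, a fact consistent with the divergence proof cited above. A Python sketch:

```python
import math

def harmonic(n):
    """Partial sum 1 + 1/2 + 1/3 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# The terms go to zero, yet the partial sums keep climbing (roughly like log n).
for n in (10, 1000, 100000):
    print(n, harmonic(n), math.log(n))
```

Each tenfold increase in n adds only about log 10 ≈ 2.3 to the sum, yet the sum never stops growing.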
By definition, divergent series are not summable. So if we talk about the "sum" of these series, we may get ridiculous results. For example, look at the summation formula of geometric series:
 1 + r + r^{2} + r^{3} + ··· = 1/(1 - r)
This formula can be easily derived with a little algebra, or by expanding the Maclaurin series of the left side. Click here for a simple proof^{[2]}. However, what we want to show here is that this formula doesn't work for all values of r. For values of r between -1 and 1, such as 1/2, we get reasonable results like:
 1 + 1/2 + 1/4 + 1/8 + ··· = 1/(1 - 1/2) = 2
However, if the value of r is larger than 1, such as 2, things start to get weird:
 1 + 2 + 4 + 8 + ··· = 1/(1 - 2) = -1
How can we get a negative number by adding a bunch of positive integers? Well, if this case makes mathematicians uncomfortable, then they are going to be even more puzzled by the following one, in which r = -2:
 1 - 2 + 4 - 8 + ··· = 1/(1 - (-2)) = 1/3
This is ridiculous: the sum of integers cannot possibly be a fraction. In fact, we are getting all these funny results because the last two series are divergent, so their sums are not defined. See the following images for a graphic representation of these series:



In the images above, the blue lines trace the geometric sequences, and the red lines trace their sum series. As we can see, the first sequence, with r = 1/2, does have a finite sum, since its sum series converges to a finite value as n increases. However, the sum series of the other two sequences don't converge to anything. They never settle around a finite value. Thus the second and third sequences diverge, and their sums don't exist. Although we can still write down the summation formula in principle, this formula is meaningless. So no wonder we have got those weird results.
The same thing happens in the Taylor series of natural log:
 log(1 + x) = x - x^{2}/2 + x^{3}/3 - x^{4}/4 + ···
Let's look at an arbitrary term in this series: ±x^{n}/n. As n increases, the denominator grows linearly, while the numerator grows exponentially. It is a known fact that exponential growth eventually overrides linear growth, as long as the absolute value of x is larger than one. So if |x| > 1, the terms x^{n}/n will go to infinity, and this Taylor series will be divergent. This is why we saw the abnormal behavior of the right side of Figure 2d. In this "divergent zone", although we can still write down the polynomial, it's no longer a valid approximation of the function. For example, if we want to calculate the value of log 4, instead of writing:
 log 4 = log(1 + 3) = 3 - 3^{2}/2 + 3^{3}/3 - 3^{4}/4 + ··· (divergent)
we have to write:
 log 4 = -2·log(1/2) = 2·(1/2 + 1/8 + 1/24 + 1/64 + ···) (convergent)
in which we saved it from the "divergent zone" to the "convergent zone" by using the identity log(a ·b ) = log (a ) + log (b ).
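The rescue trick can be checked numerically; a Python sketch (log_half_series is our name) that sums the convergent series for log(1/2) and doubles its negative to get log 4:

```python
import math

def log_half_series(n):
    """Partial sum of the log(1 + x) series at x = -1/2; every term is negative."""
    x = -0.5
    return sum((-1) ** (k + 1) * x**k / k for k in range(1, n + 1))

# log 4 = -2 * log(1/2): pulled back from the divergent zone into the convergent one
approx = -2 * log_half_series(40)
print(approx)
print(math.log(4))
```

Forty terms already agree with the library value to ten decimal places, whereas the series at x = 3 never settles at all.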
Why It's Interesting
Have you ever wondered how calculators determine square roots, sines, cosines, and exponentials? For instance, if you were to type a sine or a square root into your calculator, how does it determine which value to spit out? This number must be related to our input in some way, but what exactly is the relationship? Does the calculator just read from an index of known values? Is there a more mathematical and precise way for the calculator to evaluate these functions?
The answer to this latter question is yes. There are algorithms that give an approximate value of sine, for example, using only the four basic operations (+, -, ×, /)^{[3]}. Mathematicians studied these algorithms in order to approximate these functions manually before the age of electronic calculators. One such algorithm is given by the Taylor series, named after English mathematician Brook Taylor. Basically, Taylor said that there is a way to expand any infinitely differentiable function into a polynomial series about a certain point. The power of the Taylor series is to approximate certain functions that cannot otherwise be calculated.
The calculator's algorithm uses polynomial series like these to efficiently find a suitable approximation of the function. This algorithm is built into the permanent memory (ROM) of electronic calculators, and is triggered every time we enter the function^{[4]}.
As we have stated before, Taylor series can be used to derive many interesting sequences. Some of these sequences have helped mathematicians to approximate the values of important irrational constants such as π and e.
Approximating pi
π, or the ratio of a circle's circumference to its diameter, is one of the oldest, most important, and most interesting mathematical constants. The earliest documentation of π can be traced back to ancient Egypt and Babylon, in which people used empirical values of π such as 25/8 = 3.1250, or (16/9)^{2} ≈ 3.1605^{[5]}.
The first recorded algorithm for rigorously calculating the value of π was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes. Archimedes computed upper and lower bounds of π by drawing regular polygons inside and outside a circle, and calculating the perimeters of the outer and inner polygons. He proved that 223/71 < π < 22/7 by using a 96-sided polygon, which gives us 2 accurate decimal digits: π ≈ 3.14^{[6]}.
Mathematicians continued to use this polygon method for the next 1,800 years. The more sides their polygons had, the more accurate their approximations would be. This approach peaked at around 1600, when the Dutch mathematician Ludolph van Ceulen used a 2^{60}-sided polygon to obtain the first 35 digits of π^{[7]}. He spent a major part of his life on this calculation. In memory of his contribution, π is sometimes still called "the Ludolphine number".
However, mathematicians have had enough of trillion-sided polygons. Starting from the 17^{th} century, they devised much better approaches for computing π, using calculus rather than geometry. Mathematicians discovered numerous infinite series associated with π, and the most famous one among them is the Leibniz series:
 π/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ···
We have seen the Leibniz series as an example of convergent series in the More Mathematical Explanation section. Here we are going to briefly explain how Leibniz got this result. This amazing sequence comes directly from the Taylor series of arctan(x):
 arctan(x) = x - x^{3}/3 + x^{5}/5 - x^{7}/7 + ··· (Eq. 4a)
We can get Eq. 4a by directly computing the derivatives of all orders for arctan(x) at x = 0, but the calculation involved is rather complicated. There is a much easier way to do this if we notice the following fact:
 d/dx arctan(x) = 1/(1 + x^{2}) (Eq. 4b)
Recall that we gave the summation formula of geometric series in the More Mathematical Explanation section:
 1 + r + r^{2} + r^{3} + ··· = 1/(1 - r), for -1 < r < 1
If we substitute r = -x^{2} into the summation formula above, we can expand the right side of Eq. 4b into an infinite sequence:
 1/(1 + x^{2}) = 1 - x^{2} + x^{4} - x^{6} + ···
So Eq. 4b changes into:
 d/dx arctan(x) = 1 - x^{2} + x^{4} - x^{6} + ···
Integrating both sides gives us:
 arctan(x) = C + x - x^{3}/3 + x^{5}/5 - x^{7}/7 + ···
Letting x = 0, this equation changes into 0 = C. So the integration constant C vanishes, and we get Eq. 4a.
One may notice that, like the Taylor series of many other functions, this series is not convergent for all values of x. It only converges for -1 ≤ x ≤ 1. Fortunately, this is just enough for us to proceed. Substituting x = 1 into it, we get the Leibniz series:
 π/4 = arctan(1) = 1 - 1/3 + 1/5 - 1/7 + ···
The Leibniz series gives us a radically improved way to approximate π: no polygons, no square roots, just the four basic operations. However, this particular series is not suitable for computing π, since it converges too slowly. The first 1,000 terms of the Leibniz series give us only two accurate digits: π ≈ 3.14. This is horribly inefficient, and no mathematician would ever want to use this algorithm.
Fortunately, we can get series that converge much faster if we substitute smaller values of x, such as x = 1/√3, into Eq. 4a:
 arctan(1/√3) = π/6 = (1/√3)·(1 - 1/(3·3) + 1/(5·3^{2}) - 1/(7·3^{3}) + ···)
which gives us:
 π = 2√3·(1 - 1/(3·3) + 1/(5·3^{2}) - 1/(7·3^{3}) + ···)
This series is much more efficient than the Leibniz series, since there are powers of 3 in the denominators. The first 10 terms of it give us 5 accurate digits, and the first 100 terms give us 50. Leibniz himself used the first 22 terms to compute an approximation of pi correct to 11 decimal places as 3.14159265358.
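The difference in convergence speed is easy to demonstrate; a Python sketch comparing the Leibniz series with the 2√3 series above:

```python
import math

def leibniz(n):
    """pi from 4*(1 - 1/3 + 1/5 - ...), summed over n terms."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))

def fast_pi(n):
    """pi from 2*sqrt(3)*(1 - 1/(3*3) + 1/(5*3^2) - ...), i.e. x = 1/sqrt(3) in Eq. 4a."""
    s = sum((-1) ** k / ((2 * k + 1) * 3**k) for k in range(n))
    return 2 * math.sqrt(3) * s

print(leibniz(1000))  # two to three correct digits after 1,000 terms
print(fast_pi(10))    # about five correct digits after only 10 terms
print(math.pi)
```

The powers of 3 in the denominators are what make the second series converge so much faster.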
However, mathematicians are still not satisfied with this efficiency. They kept substituting smaller x values into Eq. 4a to get faster-converging series. Among them was Leonhard Euler, one of the greatest mathematicians of the 18^{th} century. In his attempt to approximate π, Euler discovered the following nonintuitive formula:
Although Eq. 4c looks really weird, it is indeed an equality, not an approximation. The following hidden section shows how it is derived in detail:
The next step is to expand Eq. 4c using Taylor series, which allows us to do the numeric calculations:
This series converges so fast that each term of it gives more than 1 digit of π. Using this algorithm, it would take no more than several days to calculate with pencil and paper the first 35 digits of π, which Ludolph spent most of his life on.
Although Euler himself never undertook the calculation, this idea was developed and used by many other mathematicians of his time. In 1789, the Slovene mathematician Jurij Vega calculated the first 140 decimal places of π, of which the first 126 were correct. This record was broken in 1841, when William Rutherford calculated 208 decimal places, with 152 correct ones. By the time of the invention of electronic digital computers, π had been expanded to more than 500 digits. And we shouldn't forget that all of this started from the Taylor series of trigonometric functions.
Acknowledgement: Most of the historical information in this section comes from this article: click here^{[8]}.
Approximating e
The mathematical constant e, approximately equal to 2.71828, is also called Euler's Number. This important constant appears in calculus, differential equations, complex numbers, and many other branches of mathematics. What's more, it's also widely used in other subjects such as physics and engineering. So we would really like to know its exact value.
Mathematically, e is defined as:
 e = lim_{n→∞} (1 + 1/n)^{n}
In principle, we could approximate e using this definition. However, this method is so slow and inefficient that we are forced to find another one. For example, set n to 100 in the definition, and we get:
 (1 + 1/100)^{100} ≈ 2.7048
which gives us only 2 accurate digits. This is really, really horrible accuracy for an approximating algorithm. So we have to find another way to do this.
One possible way is to use the Taylor series of the function e^{x}, which has a very nice property:
 d/dx e^{x} = e^{x}
The proof of this property can be found in almost every calculus textbook. It tells us that all derivatives of the exponential function are equal:
 f(x) = f'(x) = f''(x) = ··· = e^{x}, and:
 f(0) = f'(0) = f''(0) = ··· = e^{0} = 1
Substituting these derivatives into Eq. 2, the general formula of the Taylor series, we get:
 e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + x^{4}/4! + ···
Let x = 1, and we get another way to approximate e:
 e = 1 + 1 + 1/2! + 1/3! + 1/4! + ···
This sequence is strongly convergent, since there are factorials in the denominators, and factorials grow really fast as n increases. Just take the first 10 terms and we get:
 e ≈ 1 + 1 + 1/2! + ··· + 1/9! ≈ 2.7182815
The real value of e is 2.718281828···, so we have got 7 accurate digits! Compared to the approximation by definition, which gives us only two digits at n = 100, this algorithm is incredibly fast and efficient.
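Both approximations of e are one-liners in Python; a sketch comparing the definition at n = 100 with the first 10 terms of the factorial series:

```python
import math

# Approximating e from the definition: painfully slow convergence
by_definition = (1 + 1 / 100) ** 100  # ~2.7048, only two correct digits

# Approximating e from the Taylor series: 10 terms already suffice
by_series = sum(1 / math.factorial(k) for k in range(10))  # ~2.7182815

print(by_definition)
print(by_series)
print(math.e)
```

The factorials in the denominators are doing all the work: each extra term shrinks the error by a rapidly growing factor.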
In fact, we can get the same conclusion if we plot the function e^{x} and its two approximations together, and see which one converges faster. We already have the Taylor series approximation:
 e^{x} ≈ 1 + x + x^{2}/2! + ··· + x^{n}/n!
We can also find the powers of e using the definition:
 e^{x} = lim_{n→∞} (1 + 1/n)^{n·x} = lim_{n'→∞} (1 + x/n')^{n'}
in which n' = n·x. We can switch between n and n' because both of them go to infinity, and which one we use doesn't matter.
In Figure 5b, these two approximations are graphed together to approximate the original function e^{x}. As we can see in the animation, Taylor series approximates the original function much faster than the definition does.
References
 ↑ The Harmonic Series Diverges Again and Again, by Steven J. Kifowit and Terra A. Stamps. This article explains why harmonic series is divergent.
 ↑ Harmonic Series, from Wolfram MathWorld. This is a simple proof that the harmonic series diverges.
 ↑ How does the calculator find values of sine, from homeschoolmath. This is an article about calculator programs for approximating functions.
 ↑ Calculator, from Wikipedia. This article explains the structure of an electronic calculator.
 ↑ Pi, from Wolfram MathWorld. This article contains some history of Pi.
 ↑ Archimedes' Approximation of Pi. This is a thorough explanation of Archimedes' method.
 ↑ Digits of Pi, by Barry Cipra. Documentation of Ludolph's work is included here.
 ↑ How Euler Did It, by Ed Sandifer. This article talks about Euler's algorithm for estimating π.