Convergence
From Math Images
Revision as of 15:21, 19 July 2013
This is a Helper Page for: Taylor Series
A sequence is a finite or infinite ordered list of objects, called terms. A series is the sum of the terms in a sequence. This page will explore and clarify what it means for infinite sequences and series to converge.
Notation for Sequences and Series
Let us introduce some formal notation to accompany the definitions of sequence and series given above. The general form of an infinite sequence is a list of terms:
 A = a_1, a_2, a_3, \ldots, a_n, \ldots .
We call a_n the n^{th} term of the sequence.
We can write an infinite series s corresponding to A. The series is also given in summation notation:
 s = a_1 + a_2 + a_3 + \cdots + a_n + \cdots = \sum_{n=1}^{\infty} a_n .
We call s_n the finite sum of the first n terms:
 s_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^{n} a_k . (Eq. 3)
We can also think of a series in terms of its sequence of partial sums, which is a sequence in which each term is the n^{th} partial sum of another series:
 S = s_1, s_2, s_3, \ldots, s_n, \ldots .
In the above notations, lowercase variables represent series and uppercase variables represent sequences. The sequence of partial sums is a sequence in which each term is the sum of a series. This can be confusing but will help us in defining convergence below.
It is also legitimate to denote the first term of a sequence as a_0 rather than a_1. This is a cosmetic decision and will not affect the result of whether the series converges or diverges, or what it converges to, although it can change some of the intermediary math and the formulas that we come up with. On this page, we will use the above convention for the first term of a sequence.
Convergence and Divergence
Generally speaking, a sequence or series converges if it "tends to" a single value as the value of n, the number of terms, increases. However, this general characterization takes on slightly different meaning depending on whether we are referring to a sequence or a series. In a convergent sequence, the term a_n approaches a single number for arbitrarily large n. In a convergent series, the sum of all of the terms up to a_n approaches a single number for arbitrarily large n. The value to which a series converges is also called its sum.
This page will focus on the convergence of series and sequences consisting of real numbers. The same principles are often applicable in other systems, like the complex numbers.
We will use the notations laid out above to define convergence more formally using limits.
The sequence A is said to be convergent if the following limit exists:
 \lim_{n \rightarrow \infty} a_n = L , where L is a finite number.
If this limit does not exist, then the sequence is said to be divergent.
When a limit does not exist and is not bounded above (or below), as in the case that it increases (or decreases) without bound, we will sometimes say that the limit is \infty (or -\infty). This does not mean that the limit exists, but we mean to distinguish it from a limit that does not exist simply because it does not tend to a single number (for example, in the case of the sequence 1, -1, 1, -1, \ldots).
We said that the sums up to a_n must approach a single number for a series to converge. In other terms,
 \lim_{n \rightarrow \infty} (a_1 + a_2 + a_3 + \cdots + a_n) = L , where L is a finite number.
Again, if the limit does not exist, then the series is said to be divergent. The sum of terms in the parentheses is the right-hand side of Eq. 3, and so is equal to s_n. Substituting, we get:
 \lim_{n \rightarrow \infty} s_n = L . (Eq. 5)
Recall that the n^{th} term of a sequence of partial sums is the n^{th} partial sum of the corresponding series. We have concluded, then, that we can think of the convergence of a series in terms of the convergence of the sequence of its partial sums. How does this work?
Say we are given the infinite series:
 s = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \left( \frac{1}{2} \right)^{n-1} .
Some might recognize this as an infinite geometric series with first term 1 and common ratio \frac{1}{2}. The series will have n^{th} term a_n = \left( \frac{1}{2} \right)^{n-1}. We know, then, that the sum of the series is:
 s = \frac{a_1}{1 - r} = \frac{1}{1 - \frac{1}{2}} = 2 .
The series converges to 2. Can we also use the criteria established above to find this sum? As we said, we want to examine the limit of the sequence of partial sums. We will do so using Eq. 5.
The first term of the sequence of partial sums is s_1:
 s_1 = 1 .
The next term is s_2:
 s_2 = 1 + \frac{1}{2} = \frac{3}{2} .
And so on:
 s_3 = \frac{7}{4}, \quad s_4 = \frac{15}{8}, \quad s_5 = \frac{31}{16}, \ldots
The denominator is a power of 2 (ie. 2, 4, 8, 16, ...); the numerator is double that power of 2, minus 1 (ie. 3, 7, 15, 31, ...). We can characterize this in general, in terms of n,
 s_n = \frac{2^n - 1}{2^{n-1}} = 2 - \left( \frac{1}{2} \right)^{n-1} .
Now that we have an expression for s_n in terms of n, we can plug it into Eq. 5 and determine what the series converges to:
 \lim_{n \rightarrow \infty} s_n = \lim_{n \rightarrow \infty} \left( 2 - \left( \frac{1}{2} \right)^{n-1} \right) = 2 .
Since the expression \left( \frac{1}{2} \right)^{n-1} goes to 0 as n increases without bound, the above limit is 2, confirming the result of the formula for the sum of an infinite geometric series. This result should make sense; looking to Figure A, we see the sums are bounded above and get close to 2 pretty quickly, but never actually seem to reach 2.
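The computation above is easy to check numerically. Here is a short sketch (the code and the helper name are the editor's, not part of the original page) that computes the partial sums of this geometric series and watches them approach 2:

```python
# Editor's numerical sketch: partial sums of the geometric series
# 1 + 1/2 + 1/4 + ... should follow s_n = 2 - (1/2)^(n-1) and approach 2.
def partial_sum(n):
    """Sum of the first n terms with first term 1 and common ratio 1/2."""
    return sum(0.5 ** (k - 1) for k in range(1, n + 1))

for n in [1, 2, 5, 10, 20]:
    print(n, partial_sum(n))
```

The printed sums climb quickly toward 2 but never exceed it, matching the closed form above.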
In the last example, we were able to find the sum of the geometric series before considering the n^{th} partial sum. This is not always possible, however, since most series will not come with a formula to quickly give us their sum. However, if we can find an expression for the n^{th} partial sum in terms of n, then we can still figure out whether the series converges, and often we can figure out to what value it converges. Consider the following series:
 s = \frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \frac{1}{3 \cdot 4} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n(n+1)} .
This is not a geometric series. We don't have a formula for computing its sum. However, we can still look at the sequence of partial sums:
 s_1 = \frac{1}{2}, \quad s_2 = \frac{2}{3}, \quad s_3 = \frac{3}{4}, \ldots
In general, then,
 s_n = \frac{n}{n+1} .
We now can observe the limit using Eq. 5:
 \lim_{n \rightarrow \infty} s_n = \lim_{n \rightarrow \infty} \frac{n}{n+1} = 1 .
Therefore, s converges to 1.
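As a concrete numerical check, here is a short sketch (assuming, for illustration, the telescoping series with terms 1/(n(n+1)), whose partial sums are n/(n+1); the specific series in the original was lost):

```python
# Hypothetical example (assumed series, chosen by the editor): the telescoping
# series with terms 1/(n(n+1)) has partial sums s_n = n/(n+1), which tend to 1.
def telescoping_partial_sum(n):
    return sum(1.0 / (k * (k + 1)) for k in range(1, n + 1))

for n in [1, 10, 100, 1000]:
    print(n, telescoping_partial_sum(n))
```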
There is another "type" of convergence that we need to define before proceeding. Suppose that we have a series s = \sum_{n=1}^{\infty} a_n. If the series
 \sum_{n=1}^{\infty} \left| a_n \right|
converges, then s is said to be absolutely convergent; that is, the series consisting of the absolute values of each term of s converges.
Furthermore, we establish the following:
 Theorem: If a series converges absolutely, then it converges, regardless of the signs of its terms. (A series that converges but does not converge absolutely is said to be conditionally convergent.)
This should make sense. If terms, for instance, alternate between positive and negative and become progressively smaller in absolute value (tending to 0 as n goes to infinity), then one might suppose that they will eventually "hone in" on a single value, especially if the series would converge were all of its terms positive. We'll look at this in more detail in the alternating series test.
This seems like a trivial thing to observe, but it is important. As we'll see, some of the tests below (the comparison test and the limit test) require that all terms in a series be nonnegative. This does not mean, necessarily, that these tests cannot be used to demonstrate that a series with some negative terms converges. If we can make each term positive and prove that the resulting series converges, then the original series is absolutely convergent and therefore convergent.
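A small numerical sketch can make this concrete. The series below is the editor's illustrative choice (not taken from the page): it has mixed signs, and both it and its absolute-value counterpart settle to finite sums:

```python
# Editor's illustrative series: a_n = (-1)^n / 2^n. The absolute-value series
# sums to 1 (a geometric series), so the signed series must converge too;
# it settles at -1/3.
signed = sum((-1) ** k / 2 ** k for k in range(1, 60))
absolute = sum(1 / 2 ** k for k in range(1, 60))
print(signed, absolute)
```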
The alternating harmonic series is an interesting example of a convergent, but not absolutely convergent, series, which will be discussed later.
Convergence Tests
There is a problem with the above method for establishing convergence: we cannot write the n^{th} partial sum for every series. What do we do in such cases? Finding the sum of a series may not always be possible, but we can at least figure out whether the series converges or diverges. Here we will lay out several tests to check whether a series converges or diverges.
We will pay particular attention to when these tests are inconclusive. A particular test will not necessarily tell us whether or not a series converges. Some will get us an answer more easily than others. Some are not actually convergence tests, but are divergence tests. It is important not to forget what each test is capable of proving.
Before proceeding, there is a phrase which we will find useful to define: A statement holds for sufficiently large n if there exists some N such that the statement holds for all n > N.
In other words, a statement may not hold for n = 3 or n = 5 or even n = 100, but it can still hold for "sufficiently large" n. When talking about convergence of infinite series, though, what we often care about is not whether a statement holds "for all" n but whether we can increase n enough so that it holds for all values of n that are larger than some value (N). This idea will come up repeatedly, especially in the proofs of the following convergence tests. Alternatively, we might simply say for large n, indicating that while the statement may not be true for all n, it will hold if we increase n enough.
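The "sufficiently large n" idea can be made concrete with a short sketch (the function and its name are the editor's invention): for the statement "1/n < \epsilon", we can find the threshold N explicitly:

```python
# Editor's sketch: the statement "1/n < eps" fails for small n, but it holds
# for every n past an explicit threshold N -- i.e., for sufficiently large n.
def threshold(eps):
    n = 1
    while 1.0 / n >= eps:
        n += 1
    return n  # the statement holds for every n >= this value

print(threshold(0.001))
```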
n^{th}term Test
The n^{th}term Test can be used to quickly establish whether or not a series diverges.
 Theorem: If the n^{th} term of a series does not approach 0 as n approaches infinity, then the series diverges. In other words, if
 \lim_{n \rightarrow \infty} a_n \neq 0 ,
 then \sum_{n=1}^{\infty} a_n diverges.
The n^{th}term test is inconclusive when the n^{th} term does approach 0 as n approaches infinity. A series can "pass" the n^{th} term test and still diverge, so passing the test does not prove convergence. The test's utility is in weeding out some divergent series, but not all.
Geometric series only converge when the common ratio is between -1 and 1. Consider the geometric series with first term 1 and common ratio -1:
 s = 1 - 1 + 1 - 1 + 1 - \cdots = \sum_{n=1}^{\infty} (-1)^{n-1} .
The limit of the n^{th} term as n goes to infinity is not 0 (the terms alternate between 1 and -1 forever), so the series does not converge.
However, the n^{th}-term test is not always conclusive. Consider the harmonic series, which does not converge:
 s = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n} .
We will examine the harmonic series in greater detail below, but for now it will suffice to note that
 \lim_{n \rightarrow \infty} \frac{1}{n} = 0 ,
even though the series diverges. We can use the n^{th}term test to find some divergent series, but it will not find all divergent series. Series whose n^{th} term goes to 0 may be convergent or divergent; series whose n^{th} term does not go to 0 are definitely divergent.
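A quick numerical sketch (the editor's, not from the page) illustrates this: the terms 1/n go to 0, yet the partial sums keep growing, roughly like ln n:

```python
import math

# Editor's sketch: the harmonic terms 1/n go to 0, yet the partial sums H_n
# keep growing (they track ln(n) plus a constant), so the series diverges.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in [10, 100, 1000, 10000]:
    print(n, harmonic(n), math.log(n))
```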
The n^{th}term test is fairly intuitive. Here we will prove it more rigorously by refining our understanding of convergence and limits.
 Definition: A series \sum_{n=1}^{\infty} a_n converges if, for every \epsilon > 0, there exists some N such that, for all m > n \geq N, it holds that
 \left| a_{n+1} + a_{n+2} + \cdots + a_m \right| < \epsilon .
In other words, for any positive value (which we call \epsilon, the degree of tolerance), no matter how small, we must be able to find a sufficiently large N such that the sum of any number of consecutive terms past the N^{th} is smaller than \epsilon in absolute value. If we can do this, then the limit exists. If there is some \epsilon that such sums cannot be brought below, no matter what N we might choose, then the series diverges. This level of specificity might seem unnecessary, but it will help us clarify this proof.
Consider a series and the corresponding sequence of its terms: s = \sum_{n=1}^{\infty} a_n and A = a_1, a_2, a_3, \ldots .
We will assume that A converges to some nonzero value (this does not mean that A has all nonzero terms, just that it does not converge to 0) and show that s necessarily diverges.
For the purpose of this proof, we will denote the n^{th} partial sum of s as s_n. Furthermore, note that, for all n, we can obtain any subsequent sum by adding the subsequent term:
 s_{n+1} = s_n + a_{n+1} .
In order for s to converge, it must be possible to select a large enough value of n so that s_{n+1} - s_n can be made smaller than any positive number \epsilon. However, if we take the limit of the above equation, we get:
 \lim_{n \rightarrow \infty} (s_{n+1} - s_n) = \lim_{n \rightarrow \infty} a_{n+1} .
Since we have assumed that
 \lim_{n \rightarrow \infty} a_{n+1} = L \neq 0 ,
there exists some \epsilon > 0 such that
 \left| s_{n+1} - s_n \right| = \left| a_{n+1} \right| \geq \epsilon for arbitrarily large n.
In other words, because the limit of the n^{th} term is not 0, there exists some positive number which the difference between two successive terms, for arbitrarily large n, cannot be less than. Therefore the series cannot converge.
Alternating Series Test
The alternating series test can be used to determine whether or not series with terms alternating between positive and negative converge.
 Theorem: Consider a series with alternating terms:
 s = a_1 - a_2 + a_3 - a_4 + a_5 - \cdots = \sum_{k=1}^{\infty} (-1)^{k+1} a_k .
 (Note that, in the way that s is written here, a_k is positive for all k, and the negative terms are "handled" by the negative signs "outside" of the coefficients. We treat all coefficients as positive and keep in mind that we are working with a series with alternating terms.)
 If 0 \leq a_{n+1} \leq a_n for all n, and \lim_{n \rightarrow \infty} a_n = 0, then s converges.
As the n^{th}term test showed, having the n^{th} term go to 0 does not necessarily prove that a series converges. However, if the series is an alternating series, then it does prove convergence. Why this exception?
If terms are alternating between positive and negative, and their absolute values are getting smaller, then the terms should "hone in" on some number! Each positive term will "overshoot," and each negative term will "undershoot," but each term will bring us closer to the sum. It will be impossible for the sum to increase without bound.
One example of the alternating series test is the alternating harmonic series:
 s = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} .
As mentioned as an example of the n^{th}-term test, the harmonic series diverges even though its n^{th} term goes to 0. The n^{th} term of the alternating harmonic series likewise goes to 0:
 \lim_{n \rightarrow \infty} \frac{1}{n} = 0 .
Likewise, for all n,
 0 \leq \frac{1}{n+1} \leq \frac{1}{n} .
So the alternating series test shows us that the alternating harmonic series converges. We will revisit this series later to see what it converges to.
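As a numerical sketch of the test's conclusion (the limiting value ln 2 is a classical fact quoted here for reference, not derived on this page), the partial sums of the alternating harmonic series settle down quickly:

```python
import math

# Editor's sketch: partial sums of 1 - 1/2 + 1/3 - ... settle toward ln(2)
# (a classical value, quoted for reference rather than derived here).
def alt_harmonic(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in [10, 1000, 100000]:
    print(n, alt_harmonic(n), math.log(2))
```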
Consider another series:
 s = \frac{2}{1} - \frac{3}{2} + \frac{4}{3} - \frac{5}{4} + \cdots = \sum_{n=1}^{\infty} (-1)^{n+1} \, \frac{n+1}{n} .
This is an alternating series. We see that, for all n,
 0 \leq \frac{n+2}{n+1} \leq \frac{n+1}{n} .
However, we must also observe the limit:
 \lim_{n \rightarrow \infty} \frac{n+1}{n} = 1 \neq 0 .
Since the n^{th} term doesn't go to 0 as n goes to infinity (even though the absolute value of each term becomes smaller), the series diverges.
Recall that we have written our alternating series in the form:
 s = a_1 - a_2 + a_3 - a_4 + a_5 - \cdots where a_k \geq 0 .
We assume:
 1. 0 < a_{n+1} < a_n for all n, and
 2. \lim_{n \rightarrow \infty} a_n = 0 .
We must show that s converges.
By 1. above, it follows:
 a_n - a_{n+1} \geq 0 for all n.
We observe the following even partial sums:
 s_2 = a_1 - a_2 \geq 0
 s_4 = a_1 - a_2 + a_3 - a_4 = s_2 + a_3 - a_4 \geq s_2
 s_6 = a_1 - a_2 + a_3 - a_4 + a_5 - a_6 = s_4 + a_5 - a_6 \geq s_4 .
Each even (2n^{th}) partial sum is the previous even partial sum, plus the following odd-indexed term, minus the following even-indexed term. By what was established previously, each even partial sum is also greater than or equal to the previous even partial sum. In general:
 s_{2n} = s_{2n-2} + a_{2n-1} - a_{2n} \geq s_{2n-2} .
This demonstrates that s_{2n} is increasing.
Let's expand s_{2n} fully:
 s_{2n} = a_1 - a_2 + a_3 - a_4 + a_5 - \cdots - a_{2n-2} + a_{2n-1} - a_{2n} .
We can rewrite and group these terms alternatively:
 s_{2n} = a_1 - (a_2 - a_3) - (a_4 - a_5) - \cdots - (a_{2n-2} - a_{2n-1}) - a_{2n} .
As we've shown, (a_2 - a_3), (a_4 - a_5), \ldots, (a_{2n-2} - a_{2n-1}) are all nonnegative quantities. Therefore, s_{2n} \leq a_1 (it is a_1 minus only nonnegative values).
Since s_{2n} is increasing and bounded above by a_1, s_{2n} converges (we will call the value it converges to L). It remains to be shown that the odd partial sums, s_{2n+1}, also converge to L. This is relatively easy to show, however:
 \lim_{n \rightarrow \infty} s_{2n+1} = \lim_{n \rightarrow \infty} (s_{2n} + a_{2n+1}) = \lim_{n \rightarrow \infty} s_{2n} + \lim_{n \rightarrow \infty} a_{2n+1} .
By 2. above, the second limit goes to 0 as n goes to infinity. We are left with the first limit, which as we found earlier converges to L. Since the even and odd partial sums both converge to the same value, the series as a whole converges as well.
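The structure of this proof can be checked numerically for a specific alternating series. The sketch below (the editor's, not from the page) uses the alternating harmonic series (a_k = 1/k) and verifies that the even partial sums increase and stay below a_1 = 1:

```python
# Editor's sketch: for the alternating harmonic series (a_k = 1/k), the even
# partial sums s_2, s_4, s_6, ... increase and never exceed a_1 = 1.
def s(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

evens = [s(2 * m) for m in range(1, 50)]
assert all(evens[i] < evens[i + 1] for i in range(len(evens) - 1))
assert all(e <= 1 for e in evens)
print(evens[0], evens[-1])
```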
Comparison Test
The comparison test can be used to show either convergence or divergence. The comparison test, as the name implies, involves the comparison of one series, which is known beforehand to be either convergent or divergent, to the series whose convergence or divergence we are trying to ascertain. We call the chosen series the comparison series. A successful comparison depends on our choosing the right comparison series.
This test requires series with nonnegative terms. Recall, however, the definition of absolute convergence defined above. If we have a series with some negative terms, we can try taking the absolute value of each term. If the resulting series converges, then the original series with some negative terms also converges.
 Theorem: Consider a series with nonnegative terms:
 s = a_1 + a_2 + a_3 + \cdots = \sum_{n=1}^{\infty} a_n .
 If there exists another convergent series with nonnegative terms
 t = b_1 + b_2 + b_3 + \cdots = \sum_{n=1}^{\infty} b_n
 such that a_n \leq b_n for all n, then s is also convergent.
 Likewise, if there exists another divergent series with nonnegative terms
 t = \sum_{n=1}^{\infty} b_n
 such that a_n \geq b_n for all n, then s is also divergent.
More specifically, we can truncate s after any number N of terms, and then compare the terms following the (N - 1)^{th} term to another series. This works because the terms up to the N^{th} are finite in number and so have a finite sum. In other words, our comparison only must hold for sufficiently large n; that is, there must exist some N such that the comparison holds for all n greater than N.
The comparison test is inconclusive when:
 the terms of s are greater than or equal to the corresponding terms of a convergent series.
 the terms of s are less than or equal to the corresponding terms of a divergent series.
Geometric series are particularly useful when we are using the comparison test because it is easy to know when they do and do not converge, based on their common ratios. As such, if we find that each term of a given series is smaller than the corresponding term of some geometric series with common ratio between -1 and 1, then we know that the series converges. For example, consider the series:
 s = \frac{1}{3} + \frac{1}{5} + \frac{1}{9} + \frac{1}{17} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^n + 1} .
Each term is smaller than the corresponding term in the geometric series:
 t = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^n} .
Therefore, s converges.
Consider another series:
 s = 1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots = \sum_{n=1}^{\infty} \frac{1}{\sqrt{n}} .
We need only compare s with the harmonic series (which, as previously mentioned, diverges) to see that it diverges also. Since
 \frac{1}{\sqrt{n}} \geq \frac{1}{n} for all n \geq 1 ,
the series diverges.
Consider another series:
 s = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n^2} .
Does this series converge or diverge? If we compare it to the harmonic series, which diverges, we find that each term is smaller than that of the harmonic series (inconclusive). If we compare it to a convergent geometric series with common ratio \frac{1}{2}, we find that each term of s after n = 5 is larger than that of the geometric series (inconclusive). Choosing a geometric series with a larger common ratio for comparison would "delay" the number n of terms after which the geometric series becomes smaller than s, but ultimately, comparison of s with any convergent geometric series will be inconclusive.
The above series s is what we call a p series with p = 2, and it does, in fact, converge (at least for this value of p). (A p series is a series with n^{th} term \frac{1}{n^p}.) However, we will have to use other means to determine convergence. We will revisit this series in later tests.
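As a numerical sketch (the value \pi^2/6 is a classical result quoted for reference, not derived on this page), the partial sums of the p series with p = 2 do settle toward a finite limit:

```python
import math

# Editor's sketch: partial sums of 1/n^2 approach pi^2/6 (a classical result
# quoted for reference), so the p series with p = 2 converges.
def p_series_sum(n):
    return sum(1.0 / k ** 2 for k in range(1, n + 1))

for n in [10, 100, 10000]:
    print(n, p_series_sum(n), math.pi ** 2 / 6)
```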
The proof of the comparison test is rather intuitive. Suppose we have two series with nonnegative terms:
 s = \sum_{n=1}^{\infty} a_n \quad and \quad t = \sum_{n=1}^{\infty} b_n .
Suppose that we know that t converges to some value T. If every term of s is less than or equal to its corresponding term in t, a convergent series, then s must converge to some value S \leq T.
Alternatively, suppose that we know that t diverges. If every term of s is larger than its corresponding term in t, a divergent series, then s must diverge because its sum would have to be larger than that of t.
Let us also justify that we need only find a comparison series for sufficiently large n (for all n greater than some N). Suppose we have the series:
 s = a_1 + a_2 + a_3 + \cdots = \sum_{n=1}^{\infty} a_n .
Let t be a convergent series:
 t = b_1 + b_2 + b_3 + \cdots = \sum_{n=1}^{\infty} b_n .
Suppose, however, that a_n > b_n for some of the first N terms. Does this mean that the comparison test is inconclusive? Not necessarily.
The series a_1 + a_2 + \cdots + a_N is a finite series that has a finite sum. We can still, then, consider the infinite series beginning with the term a_{N+1}:
 a_{N+1} + a_{N+2} + a_{N+3} + \cdots = \sum_{n=N+1}^{\infty} a_n .
If each term of this series is less than its corresponding term in t, then this series must converge and have a sum less than that of t. We can then add the finite number of terms that we truncated and keep a finite sum. Therefore, we only need to find a comparison series for sufficiently large n.
The comparison test is fairly basic. The tests provided later can generally be applied in a more straightforward way, but we will find that the theory behind each of the other tests ultimately relies on the comparison test. Watch out for this pattern in later proofs of tests like the ratio test and root test. These tests draw their power from essentially comparing any series to a generalized geometric series, which eliminates the need for the "tester" to choose a comparison series.
Limit Test
The limit test is another form of comparison test. One chooses another comparison series for which the n^{th} term is known.
 Theorem: Let s = \sum_{n=1}^{\infty} a_n be a series with nonnegative terms whose convergence or divergence we are attempting to ascertain. Let t = \sum_{n=1}^{\infty} b_n be the chosen comparison series with nonnegative terms. Consider the limit:
 \lim_{n \rightarrow \infty} \frac{a_n}{b_n} . (Eq. 6)
 Based on the evaluation of the above limit and what we know about t, we may be able to determine something about s. In particular:
 If the limit is nonzero and less than infinity, then the two series either both converge or both diverge. Since we chose t knowing whether it converges or diverges, we know that s does the same.
 If the limit is 0 and t converges, then s converges as well.
 If the limit is \infty and t diverges, then s diverges as well.
It follows that the limit test is inconclusive when:
 The limit is 0 and t diverges.
 The limit is \infty and t converges.
Our goal in using the limit test is to select a comparison series that seems like it would behave like the original series, s, for large n. We select a series that we know is either convergent or divergent. What does this mean?
Suppose we have the series
 s = \sum_{n=1}^{\infty} \frac{n+1}{n^3} .
Does it converge? We know that when we take limits of fractions of polynomials, we can look at the leading terms, so we might suppose that s would behave similarly to a series with n^{th} term \frac{1}{n^2}. Such a series (a p series with p = 2) is convergent.
However, we can't use the comparison test successfully because for sufficiently large n,
 \frac{n+1}{n^3} > \frac{1}{n^2} .
For large n, the n^{th} term of s is larger than the n^{th} term of our convergent comparison series, so the comparison test is inconclusive. This is why we use the limit test. We compute:
 \lim_{n \rightarrow \infty} \frac{(n+1)/n^3}{1/n^2} = \lim_{n \rightarrow \infty} \frac{n+1}{n} = 1 .
This limit is 1. Therefore, by the limit test, s converges as well.
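A numerical sketch of the limit test (using a hypothetical n^{th} term a_n = (n+1)/n^3 chosen by the editor, since it behaves like 1/n^2 for large n) shows the ratio a_n/b_n approaching a finite nonzero value:

```python
# Hypothetical instance chosen by the editor: a_n = (n+1)/n^3 behaves like
# b_n = 1/n^2 for large n, so the ratio a_n/b_n should approach a finite,
# nonzero limit (here, 1) -- the conclusive case of the limit test.
def ratio(n):
    a_n = (n + 1) / n ** 3
    b_n = 1 / n ** 2
    return a_n / b_n

for n in [10, 1000, 1000000]:
    print(n, ratio(n))
```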
Consider the series
 s = \sum_{n=1}^{\infty} \frac{1}{n \cdot 2^n} .
We might guess that this series will behave like a series with n^{th} term \frac{1}{2^n}. So we compare:
 \lim_{n \rightarrow \infty} \frac{1/(n \cdot 2^n)}{1/2^n} = \lim_{n \rightarrow \infty} \frac{1}{n} = 0 .
This limit is 0. Therefore, since the comparison series is convergent, s is convergent.
We must consider 4 cases, depending on whether the chosen series t is convergent or divergent and depending on the limit of Eq. 6.
The first case will be proven rigorously. The other cases are fairly similar, and so their proofs will be slightly abbreviated, but the logic behind them will follow that of Case 1.
Case 1: The limit of Eq. 6 exists and is nonzero, and t converges.
 Since the limit exists, we have
 \lim_{n \rightarrow \infty} \frac{a_n}{b_n} = L
 for some L > 0. (L must be positive because both series have nonnegative terms.)
 There exists some N such that for all n greater than N,
 \frac{a_n}{b_n} < L + 1 .
 In other words, since the limit of the ratio between the terms a_n and b_n as n goes to infinity is L, after a certain number (N) of terms, it must be true that the ratio is less than L + 1.
 We then consider the series
 (L + 1) \, t = \sum_{n=1}^{\infty} (L + 1) \, b_n .
 Since t is convergent, the above series is also convergent (it is just a scalar multiple of a convergent series). We also know that
 a_n < (L + 1) \, b_n for sufficiently large n.
 Therefore, by the comparison test, s converges because each of its terms for n greater than N is smaller than the corresponding term of a convergent series.
Case 2: The limit of Eq. 6 exists and is nonzero, and t diverges.
 We can again say that Eq. 6 is equal to some number L > 0. In this case, we will note that there exists some number N such that for all n greater than N,
 \frac{a_n}{b_n} > \frac{L}{2} , so a_n > \frac{L}{2} \, b_n .
 The series \frac{L}{2} \, t must, as a scalar multiple of the divergent series t, diverge. Since, for sufficiently large n, each term of s is greater than the corresponding term of a divergent series, s diverges as well.
Case 3: The limit of Eq. 6 is 0, and t converges.
 In this case, we find that the limit of Eq. 6 is 0. Then we know that, for sufficiently large n, it is true that
 \frac{a_n}{b_n} < 1 , so a_n < b_n .
 t converges, and for sufficiently large n each term of s is less than the corresponding term of t. By the comparison test, s converges.
Case 4: The limit of Eq. 6 is infinity, and t diverges.
 In this case, the limit does not exist (or, we can say, is infinity). Then we know that, for sufficiently large n, it is true that
 \frac{a_n}{b_n} > 1 , so a_n > b_n .
 t diverges, and for sufficiently large n each term of s is greater than the corresponding term of t. By the comparison test, s diverges.
Ratio Test
The comparison test and the limit test both work for series with nonnegative terms. What if a series has alternating terms? The comparison tests also can be somewhat unwieldy and tedious in the respect that they require us to choose a comparison series, which may or may not be what we are looking for. While the ratio test is not always conclusive, we can find out whether it will get us an answer pretty quickly, whereas when the comparison tests are inconclusive, it's not clear whether we should think of another comparison series or try another test altogether.
The ratio test, in essence, depends on the growth rate of a given series as n becomes arbitrarily large.
 Theorem: Let s = \sum_{n=1}^{\infty} a_n be the series whose convergence or divergence we are trying to ascertain. We consider the limit:
 \lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n} \right| . (Eq. 7)
 We obtain a definite result in two possible cases:
 If the above limit is less than 1, the series converges.
 If the above limit is greater than 1, the series diverges.
The ratio test is inconclusive when the above limit is equal to 1.
As mentioned above, the ratio test relies on our knowledge of the growth rates of functions. The ratio between the terms a_{n+1} and a_n generally simplifies to a fraction containing different types of functions. As such, evaluating the limit of Eq. 7 requires knowledge of how different sorts of functions behave for large n. The growth rates of some common functions, from fastest to slowest, are:
 factorial, ie. n!
 exponential, ie. a^n (with a > 1)
 polynomial, ie. n^k (with k > 0)
 logarithmic, ie. \log n
This hierarchy does not necessarily hold for all n, but it will hold for all sufficiently large n. For instance, the growth of an exponential function f(x) = 2^x might be much less than the growth of a polynomial g(x) = x^{10} for low values of x, but there must exist some sufficiently large x, past which the growth rate of f is greater than that of g (by sufficiently large, again, we mean that there is some X such that the growth rate of f is greater than that of g for all x > X). In other words, for large enough x,
 f(x) > g(x) .
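This hierarchy is easy to verify numerically at a single large value of n; the particular functions below are the editor's illustrative choices:

```python
import math

# Editor's sketch: by n = 100 the growth-rate hierarchy has taken hold --
# factorial > exponential > polynomial > logarithmic.
n = 100
values = [math.factorial(n), 2 ** n, n ** 10, math.log(n)]
assert values == sorted(values, reverse=True)
print(values)
```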
The following examples will illustrate how these growth rates play out in the application of the ratio test.
Consider a series
 s = \sum_{n=1}^{\infty} \frac{n^2}{n!} .
Does it converge? The ratio test gives us a quick way to determine whether or not it does. Just compute the limit:
 \lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n} \right| = \lim_{n \rightarrow \infty} \frac{(n+1)^2}{(n+1)!} \cdot \frac{n!}{n^2} = \lim_{n \rightarrow \infty} \left( 1 + \frac{2}{n} + \frac{1}{n^2} \right) \cdot \frac{1}{n+1} .
The expression in the parentheses will go to 1 as n goes to infinity, since the fractions with n in the denominator go to 0. The fraction outside the parentheses, \frac{1}{n+1}, likewise goes to 0. Therefore the limit evaluates to 1 multiplied by 0; the limit is 0. Since the limit is less than 1, s converges.
Consider the series
 s = \sum_{n=1}^{\infty} \frac{2^n}{n^3} .
We compute:
 \lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n} \right| = \lim_{n \rightarrow \infty} \frac{2^{n+1}}{(n+1)^3} \cdot \frac{n^3}{2^n} = \lim_{n \rightarrow \infty} 2 \cdot \frac{n^3}{(n+1)^3} .
We need not expand the fraction with the polynomials; we know that the leading terms will both be to the third power and will have coefficients 1, so we know the limit of the fraction as n goes to infinity will be 1. Therefore, the whole limit evaluates to 2. Since 2 is greater than 1, the series diverges.
We might have been able to anticipate this result by looking at growth rates, as described above. The n^{th} term of s is \frac{2^n}{n^3}. There is exponential growth in the numerator and polynomial growth in the denominator. We should expect the terms to become very large for large n, since the numerator will grow faster than the denominator. And this is what the ratio test tells us! For arbitrarily large n, the ratio between two successive terms tends toward 2; this means that, for large n, the series begins to behave like a geometric series with common ratio 2, which certainly diverges!
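A numerical sketch (assuming a series with n^{th} term 2^n/n^3, consistent with the description above; the exact example may have differed in the original) shows the successive-term ratios approaching 2:

```python
# Editor's sketch, assuming a series with nth term a_n = 2^n / n^3 (consistent
# with the text's description): successive ratios a_(n+1)/a_n tend toward 2.
def a(n):
    return 2 ** n / n ** 3

for n in [10, 100, 1000]:
    print(n, a(n + 1) / a(n))
```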
The ratio test is powerful, but it does not always get us an answer. Consider the series with n^{th} term \frac{1}{n}, which we know to diverge. We get the ratio:
 \lim_{n \rightarrow \infty} \left| \frac{1/(n+1)}{1/n} \right| = \lim_{n \rightarrow \infty} \frac{n}{n+1} = 1 .
The above limit is 1, at which value the ratio test is inconclusive.
The ratio test essentially looks at the ratio between two successive terms for arbitrarily large n and determines whether a successful comparison could be made with a geometric series.
Let ''s'' be the series, with terms <math>a_n</math>, whose convergence or divergence we are trying to ascertain.
Case 1: The limit of Eq. 7 is less than 1, so s converges absolutely.
 Suppose we found that, for all <math>n \geq N</math>, it is true that
 :<math>\frac{a_{n+1}}{a_n} \leq r</math>
 where <math>r < 1</math> (so <math>a_{n+1} \leq r a_n</math>).
 We construct the following convergent geometric series:
 :<math>a_N + a_N r + a_N r^2 + a_N r^3 + \cdots</math>
 where <math>r < 1</math> (so the geometric series converges).
 Since, in this case, Eq. 7 is less than 1, we can choose some <math>r < 1</math> such that
 :<math>\lim_{n \rightarrow \infty} \frac{a_{n+1}}{a_n} < r < 1</math>.
 Since, for all sufficiently large ''n'', the ratio of each successive pair of terms is less than ''r'', we can successfully compare ''s'' with Eq. 8, so ''s'' converges also.
Case 2: The limit of Eq. 7 is greater than 1, so s diverges.
 A similar method is employed to demonstrate that ''s'' diverges. Instead suppose that we find that, for all <math>n \geq N</math>,
 :<math>\frac{a_{n+1}}{a_n} \geq r</math>
 where <math>r > 1</math>.
 We construct the following divergent geometric series:
 :<math>a_N + a_N r + a_N r^2 + a_N r^3 + \cdots</math>.
 Since, in this case, Eq. 7 is greater than 1, we can choose some <math>r > 1</math> such that
 :<math>1 < r < \lim_{n \rightarrow \infty} \frac{a_{n+1}}{a_n}</math>.
 Since, for sufficiently large ''n'', the ratio of each successive pair of terms is greater than ''r'', we can successfully compare ''s'' with Eq. 9, so ''s'' diverges also.
Note that the ratio test, although it is not a "comparison test" in the sense that we never choose a comparison series ourselves, nevertheless relies on the comparison test!
Root Test
The root test functions similarly to the ratio test.
 Theorem: Let ''s'' be the series, with terms <math>a_n</math>, whose convergence or divergence we are trying to ascertain. We consider the limit:
 :<math>\lim_{n \rightarrow \infty} \sqrt[n]{|a_n|}</math>.
 Based on this limit, we may reach a definite conclusion:
 If the above limit evaluates to be less than 1, then s converges absolutely.
 If the above limit evaluates to be greater than 1, then s diverges.
Like the ratio test, the root test is inconclusive when the above limit evaluates to 1.
The root test will generally give us the same result as the ratio test, but it can sometimes be easier to apply, depending on how the ''n''<sup>th</sup> term is written (the root test is particularly helpful when the ''n''<sup>th</sup> term is to the ''n''<sup>th</sup> power). The following examples will illustrate such circumstances.
The ''n''<sup>th</sup> root test is useful when we are given series like this:
 :<math>s = \sum_{n=1}^{\infty} \left( \frac{1}{n} \right)^n</math>.
Since the ''n''<sup>th</sup> term is to the ''n''<sup>th</sup> power, the root test is particularly effective:
 :<math>\lim_{n \rightarrow \infty} \sqrt[n]{\left( \frac{1}{n} \right)^n} = \lim_{n \rightarrow \infty} \frac{1}{n}</math>.
Since this expression goes to 0 as n goes to infinity, s converges.
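Numerically, the ''n''<sup>th</sup> root collapses to something simple. The sketch below assumes the ''n''<sup>th</sup> term is <math>\left( \frac{1}{n} \right)^n</math>, as in the example above:

```python
# Assumed nth term: (1/n)^n.
def nth_term(n):
    return (1.0 / n) ** n

# Its nth root is just 1/n, which tends to 0 as n grows.
def nth_root(n):
    return nth_term(n) ** (1.0 / n)

print([nth_root(n) for n in (1, 10, 100)])  # approximately 1.0, 0.1, 0.01
```

Since the roots fall toward 0, well below 1, the root test reports convergence.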
The root test is proved in a way similar to the ratio test.
Case 1: The limit of Eq. 10 is less than 1, so s converges absolutely.
 We can choose some <math>r < 1</math> such that
 :<math>\lim_{n \rightarrow \infty} \sqrt[n]{|a_n|} < r < 1</math>.
 Equivalently, we can say that, for all <math>n \geq N</math>,
 :<math>|a_n| \leq r^n</math>.
 So we can construct a convergent geometric series:
 :<math>r^N + r^{N+1} + r^{N+2} + \cdots</math>.
 The above series will, by construction, compare successfully with the series
 :<math>|a_N| + |a_{N+1}| + |a_{N+2}| + \cdots</math>.
 Therefore, s converges absolutely.
Case 2: The limit of Eq. 10 is greater than 1, so s diverges.
 We can choose some <math>r > 1</math> such that
 :<math>1 < r < \lim_{n \rightarrow \infty} \sqrt[n]{|a_n|}</math>.
 Equivalently, we can say that, for all <math>n \geq N</math>,
 :<math>|a_n| \geq r^n</math>.
 So we can construct a divergent geometric series:
 :<math>r^N + r^{N+1} + r^{N+2} + \cdots</math>.
 The above series will, by construction, compare successfully with the series
 :<math>|a_N| + |a_{N+1}| + |a_{N+2}| + \cdots</math>.
 Therefore, s diverges.
Note that, like the ratio test, the root test essentially "finds," based on the limit as n goes to infinity of the n^{th} root of the n^{th} term, whether there is a geometric series with which s can be successfully compared.
Integral Test
The integral test works for series whose terms are given by a function that is positive and monotone decreasing on some interval <math>[N, \infty)</math>.
 Theorem: Suppose we have a series
 :<math>s = \sum_{n=N}^{\infty} f(n)</math>
 where ''f'' is positive and monotone decreasing on <math>[N, \infty)</math>. The series converges if the improper integral
 :<math>\int_N^{\infty} f(x)\, dx</math>
 is finite. If the above integral diverges, then the series diverges as well.
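As a quick sanity check of the theorem, the sketch below uses the series <math>\sum \frac{1}{n^2}</math> (an example of our choosing): the improper integral <math>\int_1^{\infty} \frac{dx}{x^2} = 1</math> is finite, and the tail of the series indeed stays below it.

```python
# Example of our choosing: f(x) = 1/x^2, whose integral from 1 to
# infinity equals 1, so the integral test predicts convergence.
def f(x):
    return 1.0 / (x * x)

# The sum of f(n) for n >= 2 is bounded above by that integral,
# since f is decreasing and each term f(n) fits under the curve on [n-1, n].
tail = sum(f(n) for n in range(2, 100000))
print(tail)  # stays below 1.0
```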
Power Series
A power series is a series with a variable to some power in each term. The general form of a power series is:
 :<math>s = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots</math>.
(Note that we have used <math>a_0</math> to denote the constant term, unlike the convention used throughout the page in which <math>a_1</math> denotes the first term. This is chosen here so that the subscript of each coefficient corresponds with the power of ''x'' in its term. It will not significantly impact our math.)
Power series can be very useful; Taylor series, which are a way of approximating infinitely differentiable functions, are, for instance, a form of power series. Since the convergence of a series depends on the value of each term, the convergence of a power series often depends on the variable. As such, we can use the rules of convergence to establish a radius of convergence, which determines the interval of values of ''x'' on which the series converges.
The last few tests we introduced, the ratio test, the root test, and the integral test, are particularly useful because they allow us to determine the radius of convergence. Because we don't have to select another comparison series, the expressions evaluated in the tests just depend on the n^{th} term of the series we are investigating. This lets us look at the effect a variable (say, x) has on whether a series converges.
For example, consider the series:
 :<math>s = 1 + x + x^2 + x^3 + \cdots</math>.
This is a geometric series with common ratio ''x''. It should converge for <math>|x| < 1</math>. Can we confirm this result? Using the ratio test (Eq. 7), we evaluate:
 :<math>\lim_{n \rightarrow \infty} \left| \frac{x^{n+1}}{x^n} \right| = |x|</math>.
This limit must be less than 1 for the series to converge absolutely. And we find that it is less than 1 exactly for <math>|x| < 1</math>.
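A numeric check of this radius: inside <math>|x| < 1</math>, the partial sums of the geometric series settle on <math>\frac{1}{1 - x}</math>.

```python
# Partial sum of the geometric series 1 + x + x^2 + ... + x^(terms-1).
def geometric_partial(x, terms):
    return sum(x ** n for n in range(terms))

x = 0.5
approx = geometric_partial(x, 60)
exact = 1.0 / (1.0 - x)
print(approx, exact)  # both approximately 2.0
```

For <math>|x| \geq 1</math> the partial sums would instead grow (or oscillate) without settling on any value.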
What about for more complicated series? Say we have:
 .
Here we can easily apply the root test, since the whole expression for the n^{th} term is to the n^{th} power:
 .
We have conveniently eliminated n from the limit expression, so it simply goes to ! We need this expression to be less than 1, so we set up the inequality:
 .
Rearranging this, we obtain:
as the radius of convergence.
Consider the series:
 :<math>s = \sum_{n=0}^{\infty} \frac{x^n}{n!}</math>.
What is the radius of convergence for this series?
Again we use the ratio test:
 :<math>\lim_{n \rightarrow \infty} \left| \frac{x^{n+1}/(n+1)!}{x^n/n!} \right| = \lim_{n \rightarrow \infty} \frac{|x|}{n+1}</math>.
Since ''x'' is a constant, regardless of how large ''x'' is, this expression goes to 0 as ''n'' goes to infinity. Therefore, the series converges for all ''x''.
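One familiar series fitting this description is the exponential series <math>\sum_{n=0}^{\infty} \frac{x^n}{n!}</math> (our assumed example here), whose successive-term ratio <math>\frac{|x|}{n+1}</math> goes to 0 for any fixed ''x''. Its partial sums converge to <math>e^x</math> for every ''x'':

```python
import math

# Assumed example: the exponential series, sum of x^n / n!.
def exp_partial(x, terms):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # multiply by the ratio of successive terms
    return total

# Agrees with e^x for positive, large, and negative x alike.
for x in (1.0, 5.0, -3.0):
    print(x, exp_partial(x, 60), math.exp(x))
```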
Harmonic Series
The harmonic series provides us with an interesting challenge for the tests we've established. The harmonic series is as follows:
:<math>s = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots</math>
The harmonic series diverges. However, it's not obvious that it should, and it's not straightforward to demonstrate using the tests we have shown already.
The divergence of the harmonic series confounds our intuition. We would expect that, for large ''n'', a series with ''n''<sup>th</sup> term <math>\frac{1}{n}</math> would eventually stop growing, but it in fact keeps growing without bound, albeit slowly.
As mentioned previously, the ''n''<sup>th</sup>term test is inconclusive: as ''n'' goes to infinity, <math>\frac{1}{n}</math> goes to 0. What if we compare it to a geometric series with, say, common ratio <math>\frac{1}{2}</math>?
 :<math>1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots</math>.
Although the first two terms are equal, the third term of the harmonic series is greater than the third term of the geometric series. We could try choosing a larger common ratio, which would slow the decay of our chosen geometric series' terms, but that would only delay the value of ''N'' after which the harmonic series' terms would begin to exceed those of the geometric series.
What about the ratio test? We must consider the limit:
 :<math>\lim_{n \rightarrow \infty} \frac{1/(n+1)}{1/n} = \lim_{n \rightarrow \infty} \frac{n}{n+1}</math>.
This limit is 1, so even the ratio test is inconclusive.
The trick of proving that the harmonic series diverges lies, in fact, in the comparison test. We saw that comparison with a geometric series was not conclusive. Several mathematicians have nevertheless thought of other series which show that the harmonic series diverges. Here we will give just one example.
Consider the series:
 :<math>1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \cdots</math>.
This series may, at first, appear to have no pattern; some of the terms appear more than once, and the number of times that they appear is not consistent. However, the term <math>\frac{1}{2}</math> appears once, the term <math>\frac{1}{4}</math> appears twice, and the term <math>\frac{1}{8}</math> appears 4 times. With the exception of the first term 1, each term appears half the number of times as the number in its denominator. We recognize that, if the series were continued, the ninth term would be <math>\frac{1}{16}</math> (and that term would repeat 8 times).
How does this help us prove the divergence of the harmonic series? Since we are using the comparison test, we need to know that Eq. X diverges if we are to show that the harmonic series diverges. We do this by slightly modifying the way we've written Eq. X:
:<math>1 + \frac{1}{2} + \left( \frac{1}{4} + \frac{1}{4} \right) + \left( \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} \right) + \cdots = 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots</math>
Each term <math>\frac{1}{k}</math> repeats <math>\frac{k}{2}</math> times, and <math>\frac{k}{2} \cdot \frac{1}{k} = \frac{1}{2}</math>, so we obtain a divergent series (by the ''n''<sup>th</sup>term test, since the grouped terms do not go to 0).
So our chosen series is divergent. We still must show that each of the terms of the harmonic series is greater than or equal to its corresponding term in Eq. X. Let's try putting the two series alongside each other:
:<math>1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \cdots</math>
:<math>1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \cdots</math>
As we can see above, the first, second, fourth, and eighth terms are equal. The third, fifth, sixth, and seventh terms of the harmonic series are greater than those of the corresponding terms of Eq. X. By the comparison test, then, the harmonic series diverges.
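The grouping idea behind this comparison can also be checked directly: the harmonic terms from <math>\frac{1}{2^k + 1}</math> through <math>\frac{1}{2^{k+1}}</math> number <math>2^k</math>, and each is at least <math>\frac{1}{2^{k+1}}</math>, so every such block sums to at least <math>\frac{1}{2}</math>. A short sketch:

```python
from fractions import Fraction

# Sum of harmonic terms 1/n for n from 2^k + 1 through 2^(k+1), computed
# exactly with rational arithmetic.
def block_sum(k):
    return sum(Fraction(1, n) for n in range(2 ** k + 1, 2 ** (k + 1) + 1))

blocks = [block_sum(k) for k in range(5)]
print([float(b) for b in blocks])  # every block is at least 0.5
```

Since there are infinitely many such blocks, the harmonic partial sums eventually exceed any bound.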
Alternating Harmonic Series
The harmonic series should not be confused with the alternating harmonic series, which is, as its name implies, the harmonic series with terms alternating between positive and negative:
:<math>s = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots</math>
The alternating harmonic series converges. We can even find its sum. Previously, we have found the sum of certain series via Eq. 5, by simply observing the limit of the sequence of partial sums (see Eq. 4) as n goes to infinity. What happens if we try to do this for the alternating harmonic series? The sequence of partial sums is:
 :<math>s_1 = 1, \quad s_2 = 1 - \frac{1}{2} = \frac{1}{2}, \quad s_3 = \frac{1}{2} + \frac{1}{3} = \frac{5}{6}, \quad s_4 = \frac{5}{6} - \frac{1}{4} = \frac{7}{12}, \quad \ldots</math>.
Based on the above, finding a general expression, in terms of n, for the n^{th} partial sum seems neither promising nor likely. What can we do instead? We will use Taylor series. In particular, we will use the Taylor series for the natural logarithm, the derivation of which requires some calculus and background knowledge of Taylor series.
For the purposes of this page, we will just provide the Taylor series for the natural logarithm:
 :<math>\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots</math>, for <math>-1 < x \leq 1</math>.
If we plug in x = 1, we get:
 :<math>\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots</math>.
The righthand side is the alternating harmonic series. Since ''x'' = 1 is in the domain of the Taylor series for <math>\ln(1 + x)</math>, the series converges to <math>\ln 2</math>.
As it turns out, <math>\ln 2</math> is about equal to 0.6931. The partial sum <math>s_7</math> is about equal to 0.7595. Maybe if we went out several more terms in our sequence of partial sums, we would be able to guess that the alternating harmonic series converges to <math>\ln 2</math>, if we knew what we were looking for. The Taylor series confirms the answer much more easily, and its result is exact.
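Both figures quoted above are easy to verify numerically: the seventh partial sum is about 0.7595, and the partial sums drift slowly toward <math>\ln 2 \approx 0.6931</math>.

```python
import math

# nth partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ...
def alt_harmonic_partial(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

print(alt_harmonic_partial(7))       # about 0.7595
print(alt_harmonic_partial(100000))  # much closer to ln 2
print(math.log(2))
```

The error of an alternating series partial sum is at most its first omitted term, which is why the convergence here is so slow.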
We have shown that the alternating harmonic series is convergent but not absolutely convergent. Recall that, for a series to be absolutely convergent, the series consisting of the absolute value of each of its terms must converge. But the series consisting of the absolute values of the terms of the alternating harmonic series is the regular harmonic series, which diverges! So while absolute convergence proves convergence, not all convergent series converge absolutely.