
A First Course in Complex Analysis

Section 7.2 Series

Definition 7.2.1.

A series is a sequence \(\left( a_n \right)\) whose members are of the form \(a_n = \sum_{k=1}^n b_k\) (or \(a_n = \sum_{k=0}^n b_k\)); we call \(\left( b_k \right)\) the sequence of terms of the series. The \(a_n = \sum_{k=1}^n b_k\) (or \(a_n = \sum_{k=0}^n b_k\)) are the partial sums of the series.
If we wanted to be lazy we would define convergence of a series simply by referring to convergence of the partial sums of the series—after all, we just defined series through sequences. However, there are some convergence features that take on special appearances for series, so we mention them here explicitly. For starters, a series converges to the limit (or sum) \(L\) by definition if
\begin{equation*} \lim_{n \to \infty} a_n \ = \ \lim_{n \to \infty} \sum_{k=1}^n b_k \ = \ L \, \text{.} \end{equation*}
To prove that a series converges we use the definition of limit of a sequence: for any \(\epsilon > 0\) we have to find an \(N\) such that for all \(n \geq N\text{,}\)
\begin{equation*} \left| \sum_{k=1}^n b_k - L \right| \ \lt \ \epsilon \,\text{.} \end{equation*}
In the case of a convergent series, we usually write its limit as \(\ds L = \sum_{k=1}^\infty b_k\) or \(\ds L = \sum_{k \geq 1} b_k\text{.}\)

Example 7.2.2.

Fix \(z \in \C\) with \(|z| \lt 1\text{.}\) We claim that the geometric series \(\sum_{ k \ge 1 } z^k\) converges with limit
\begin{equation*} \sum_{ k \ge 1 } z^k \ = \ \frac z {1-z} \,\text{.} \end{equation*}
In this case, we can compute the partial sums explicitly:
\begin{equation*} \sum_{k=1}^n z^k \ = \ z + z^2 + \dots + z^n \ = \ \frac{ z - z^{n+1} }{ 1 - z } \, \text{,} \end{equation*}
whose limit as \(n \to \infty\) exists by Example 7.1.8, because \(|z|\lt 1\text{.}\)
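If the closed form for these partial sums is not familiar, it can be verified with one line of algebra: multiplying the sum by \(1-z\) telescopes the powers,
\begin{equation*} (1-z) \left( z + z^2 + \dots + z^n \right) \ = \ \left( z + z^2 + \dots + z^n \right) - \left( z^2 + z^3 + \dots + z^{n+1} \right) \ = \ z - z^{n+1} \, \text{,} \end{equation*}
and dividing by \(1-z\) (nonzero because \(|z| \lt 1\)) gives the formula above.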

Example 7.2.3.

Another series whose limit we can compute by manipulating the partial sums uses the partial fraction decomposition \(\frac 1 {k^2+k} = \frac 1 {k(k+1)} = \frac 1 k - \frac 1 {k+1}\text{:}\)
\begin{align*} \sum_{k\ge1} \frac1{k^2 + k} \amp \ = \ \lim_{n\to\infty}\sum_{k=1}^n \left(\frac1k-\frac1{k+1}\right)\\ \amp \ = \ \lim_{n\to\infty}\left(1-\frac12+\frac12 -\frac13 + \dots + \frac1n-\frac1{n+1}\right)\\ \amp \ = \ \lim_{n\to\infty}\left(1-\frac1{n+1}\right) \ = \ 1 \, \text{.} \end{align*}
A series where most of the terms cancel like this is called telescoping.
Most of the time we can use the completeness property to check convergence of a series, and it is fortunate that the Monotone Sequence Property has a convenient translation into the language of series of real numbers. The partial sums of a series form a nondecreasing sequence if the terms of the series are nonnegative, and this observation immediately yields the following:

Corollary 7.2.4.

If \(b_k \in \R\) and \(b_k \ge 0\) for all \(k \ge 1\text{,}\) then \(\sum_{k \ge 1} b_k\) converges if and only if the partial sums \(\sum_{k=1}^n b_k\) are bounded.

Example 7.2.5.

With this new terminology, we can revisit Example 7.1.7: Let \(b_k = \frac 1 {k!}\text{.}\) In Example 7.1.7 we showed that the partial sums
\begin{equation*} \sum_{ k=1 }^{ n } b_k \ = \ \sum_{ k=1 }^{ n } \frac 1 {k!} \end{equation*}
are bounded, and so Corollary 7.2.4 shows that the series converges; in fact, \(\ds \sum_{ k \ge 1 } \frac 1 {k!} = e-1\text{.}\)
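If Example 7.1.7 is not at hand, one way to see the boundedness is to compare with a geometric series: since \(k! \ge 2^{k-1}\) for all \(k \ge 1\text{,}\)
\begin{equation*} \sum_{ k=1 }^{ n } \frac 1 {k!} \ \le \ \sum_{ k=1 }^{ n } \frac 1 {2^{k-1}} \ = \ 2 - \frac 1 {2^{n-1}} \ \lt \ 2 \, \text{.} \end{equation*}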
Although Corollary 7.2.4 is a mere direct consequence of the completeness property of \(\R\text{,}\) it is surprisingly useful. Here is one application, sometimes called the Comparison Test:

Corollary 7.2.6.

If \(\sum_{k \ge 1} b_k\) converges and \(0 \le c_k \le b_k\) for all \(k \ge 1\text{,}\) then \(\sum_{k \ge 1} c_k\) converges.

Proof.

By Corollary 7.2.4, the partial sums \(\sum_{k=1}^n b_k\) are bounded, and thus so are
\begin{equation*} \sum_{k=1}^n c_k \ \le \ \sum_{k=1}^n b_k \,\text{.} \end{equation*}
But this means, again by Corollary 7.2.4, that \(\sum_{k \ge 1} c_k\) converges.
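As a quick illustration of how the Comparison Test is typically applied: since \(0 \le \frac 1 {2^k + k} \le \frac 1 {2^k}\) for all \(k \ge 1\) and \(\sum_{ k \ge 1 } \frac 1 {2^k}\) converges by Example 7.2.2 (the geometric series with \(z = \frac 1 2\)), the series \(\sum_{ k \ge 1 } \frac 1 {2^k + k}\) converges as well.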
A convergent series has terms that tend to zero: if \(\sum_{k \ge 1} b_k\) converges, then \(\lim_{k \to \infty} b_k = 0\) (Proposition 7.2.7). The contrapositive of this proposition is often used, sometimes called the Test for Divergence: if \(\lim_{k \to \infty} b_k\) does not exist or is not \(0\text{,}\) then \(\sum_{k \ge 1} b_k\) diverges.

Example 7.2.9.

Continuing Example 7.2.2, for \(|z| \ge 1\) the geometric series \(\sum_{ k \ge 1 } z^k\) diverges, since in this case \(\left| z^n \right| = |z|^n \ge 1\) for all \(n\text{,}\) and so \(\lim_{ n \to \infty } z^n\) either does not exist or is not \(0\text{.}\)
A common mistake is to try to use the converse of Proposition 7.2.7, but the converse is false:

Example 7.2.10.

The harmonic series \(\sum_{k\ge1} \frac 1 k\) diverges (even though the terms go to \(0\)): If we assume the series converges to \(L\text{,}\) then
\begin{align*} L \ \amp = \ 1+\frac12 + \frac13 + \frac 14 + \frac 15 + \frac 16 + \cdots\\ \amp > \ \frac 1 2 + \frac 1 2 + \frac 1 4 + \frac 1 4 + \frac 1 6 + \frac 1 6 + \cdots\\ \amp = \ 1 + \frac 1 2 + \frac 1 3 + \cdots\\ \amp = \ L \, \text{,} \end{align*}
a contradiction.
The Integral Test concerns a nonincreasing function \(f \colon [1, \infty) \to \R\) with nonnegative values, and it literally comes with a proof by picture—see Figure 7.2.12: the integral of \(f\) on the interval \([k,k+1]\) is bounded between \(f(k)\) and \(f(k+1)\text{.}\) Adding these pieces gives the inequalities of Proposition 7.2.11 for the \(n\)th partial sum versus the integrals from \(1\) to \(n\) and from \(1\) to \(n+1\text{,}\)
\begin{equation*} \int_1^{n+1} f(t)\,\diff{t} \ \le \ \sum_{k=1}^n f(k) \ \le \ f(1) + \int_1^n f(t)\,\diff{t} \, \text{,} \end{equation*}
and these inequalities persist in the limit. Together with Corollary 7.2.4, they show that \(\sum_{k \ge 1} f(k)\) converges if and only if \(\int_1^\infty f(t)\,\diff{t}\) is finite.
Figure 7.2.12. The integral test.

Proof.

Suppose \(\int_1^\infty f(t)\,\diff{t} = \infty\text{.}\) Then the first inequality in Proposition 7.2.11 implies that the partial sums \(\sum_{k=1}^n f(k)\) are unbounded, and so Corollary 7.2.4 says that \(\sum_{k \ge 1} f(k)\) cannot converge.
Conversely, if \(\int_1^\infty f(t)\,\diff{t}\) is finite then the second inequality in Proposition 7.2.11 says that the partial sums \(\sum_{k=1}^n f(k)\) are bounded; thus, again with Corollary 7.2.4, we conclude that \(\sum_{k \ge 1} f(k)\) converges.

Example 7.2.14.

The series \(\sum_{k\ge1} \frac 1 {k^p}\) converges for \(p > 1\) and diverges for \(p \lt 1\) (and the case \(p=1\) was the subject of Example 7.2.10) because
\begin{equation*} \int_1^\infty \frac{ \diff{x} }{ x^p } \ = \ \lim_{ a \to \infty } \frac{ a^{ -p+1 } }{ -p+1 } + \frac 1 { p-1 } \end{equation*}
is finite if and only if \(p > 1\text{.}\)
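(For \(p \le 0\) the integrand \(x^{-p}\) is not nonincreasing, so the Integral Test does not apply directly; but then the terms \(k^{-p}\) do not tend to \(0\text{,}\) and the series diverges already by the Test for Divergence.)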
By now you might be amused that we have collected several results on series whose terms are nonnegative real numbers. One reason is that such series are a bit easier to handle; another is that there is a notion of convergence special to series that relates any series to one with only nonnegative terms:

Definition 7.2.15.

The series \(\ds \sum_{k \geq 1} b_k\) converges absolutely if \(\ds \sum_{k \geq 1} \left| b_k \right|\) converges.
Theorem 7.2.16 asserts that a series that converges absolutely also converges. This seems like an obvious statement, but its proof is, nevertheless, nontrivial.

Proof.

Suppose \(\sum_{k \geq 1} \left| b_k \right|\) converges. We first consider the case that each \(b_k\) is real. Let
\begin{equation*} b_k^+ := \begin{cases}b_k \amp \text{ if } b_k \ge 0, \\ 0 \amp \text{ otherwise } \end{cases} \qquad \text{ and } \qquad b_k^- := \begin{cases}b_k \amp \text{ if } b_k \lt 0, \\ 0 \amp \text{ otherwise. } \end{cases} \end{equation*}
Then \(0 \le b_k^+ \le |b_k|\) and \(0 \le - b_k^- \le |b_k|\) for all \(k \ge 1\text{,}\) and so by Corollary 7.2.6, both
\begin{equation*} \sum_{ k \ge 1 } b_k^+ \qquad \text{ and } \qquad - \sum_{ k \ge 1 } b_k^- \end{equation*}
converge. But then so does
\begin{equation*} \sum_{ k \ge 1 } b_k \ = \ \sum_{ k \ge 1 } b_k^+ + \sum_{ k \ge 1 } b_k^- \text{.} \end{equation*}
For the general case \(b_k \in \C\text{,}\) we write each term as \(b_k = c_k + i \, d_k\text{.}\) Since \(0 \le |c_k| \le |b_k|\) for all \(k \ge 1\text{,}\) Corollary 7.2.6 implies that \(\sum_{ k \ge 1 } c_k\) converges absolutely, and by an analogous argument, so does \(\sum_{ k \ge 1 } d_k\text{.}\) But now we can use the first case to deduce that both \(\sum_{ k \ge 1 } c_k\) and \(\sum_{ k \ge 1 } d_k\) converge, and thus so does
\begin{equation*} \sum_{ k \ge 1 } b_k \ = \ \sum_{ k \ge 1 } c_k + i \sum_{ k \ge 1 } d_k \, \text{.} \end{equation*}

Example 7.2.17.

Continuing Example 7.2.14,
\begin{equation*} \zeta(z) \ := \ \sum_{k\ge1} \frac 1 {k^z} \end{equation*}
converges for \(\Re(z)>1\text{,}\) because then (using Exercise 3.6.48)
\begin{equation*} \sum_{k\ge1} \left| k^{ -z } \right| \ = \ \sum_{k\ge1} k^{ - \Re(z) } \end{equation*}
converges. Viewed as a function in \(z\text{,}\) the series \(\zeta(z)\) is the Riemann zeta function, an indispensable tool in number theory and many other areas in mathematics and physics.
The Riemann zeta function is the subject of the arguably most famous open problem in mathematics, the Riemann hypothesis. It turns out that \(\zeta(z)\) can be extended to a function that is holomorphic on \(\C \setminus \{ 1 \}\text{,}\) and the Riemann hypothesis asserts that the roots of this extended function in the strip \(0 \lt \Re(z) \lt 1\) are all on the critical line \(\Re(z) = \frac 1 2\text{.}\)
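The identity \(\left| k^{ -z } \right| = k^{ - \Re(z) }\) used above boils down to a short computation: writing \(k^{-z} = e^{ -z \ln k }\) for the positive real number \(k\text{,}\) and using \(\left| e^w \right| = e^{ \Re(w) }\text{,}\)
\begin{equation*} \left| k^{ -z } \right| \ = \ \left| e^{ -z \ln k } \right| \ = \ e^{ - \Re(z) \ln k } \ = \ k^{ - \Re(z) } \, \text{.} \end{equation*}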
Another common mistake is to try to use the converse of Theorem 7.2.16, which is also false:

Example 7.2.18.

The alternating harmonic series \(\sum_{k\ge1}\frac{(-1)^{k+1}}k\) converges:
\begin{align*} \sum_{k\ge1}\frac{(-1)^{k+1}}k \amp \ = \ 1-\frac12 + \frac13 - \frac 14 + \frac15 - \frac16 + \cdots\\ \amp \ = \ \left(1-\frac12\right) + \left(\frac13-\frac14\right) + \left(\frac15-\frac16\right) + \cdots \end{align*}
(There is a small technical detail to be checked here, since we are effectively ignoring half the partial sums of the original series; see Exercise 7.5.16.) Since
\begin{equation*} \frac1{2k-1}-\frac1{2k} \ = \ \frac1{2k(2k-1)} \ \le \ \frac 1 { (2k-1)^2 } \ \le \ \frac 1 { k^2 } \, \text{,} \end{equation*}
\(\sum_{k\ge1}\frac{(-1)^{k+1}}k\) converges by Corollary 7.2.6 and Example 7.2.14.
However, according to Example 7.2.10, \(\sum_{k\ge1}\frac{(-1)^{k+1}}k\) does not converge absolutely.