
Section 4.1 An introduction to eigenvalues and eigenvectors

This section introduces the concept of eigenvalues and eigenvectors and offers an example that motivates our interest in them. The point here is to develop an intuitive understanding of eigenvalues and eigenvectors and explain how they can be used to simplify some problems that we have previously encountered. In the rest of this chapter, we will develop this concept into a richer theory and illustrate its use with more meaningful examples.

Preview Activity 4.1.1.

Before we introduce the definition of eigenvectors and eigenvalues, it will be helpful to remember some ideas we have seen previously.
  1. Suppose that \(\vvec\) is the vector shown in the figure. Sketch the vector \(2\vvec\) and the vector \(-\vvec\text{.}\)
  2. State the geometric effect that scalar multiplication has on the vector \(\vvec\text{.}\) Then sketch all the vectors of the form \(\lambda \vvec\) where \(\lambda\) is a scalar.
  3. State the geometric effect of the matrix transformation defined by
    \begin{equation*} \left[\begin{array}{rr} 3 \amp 0 \\ 0 \amp -1 \\ \end{array}\right]\text{.} \end{equation*}
  4. Suppose that \(A\) is a \(2\times 2\) matrix and that \(\vvec_1\) and \(\vvec_2\) are vectors such that
    \begin{equation*} A\vvec_1 = 3 \vvec_1, \qquad A\vvec_2 = -\vvec_2\text{.} \end{equation*}
    Use the linearity of matrix multiplication to express the following vectors in terms of \(\vvec_1\) and \(\vvec_2\text{.}\)
    1. \(A(4\vvec_1)\text{.}\)
    2. \(A(\vvec_1 + \vvec_2)\text{.}\)
    3. \(A(4\vvec_1 -3\vvec_2)\text{.}\)
    4. \(A^2\vvec_1\text{.}\)
    5. \(A^2(4\vvec_1 - 3\vvec_2)\text{.}\)
    6. \(A^4\vvec_1\text{.}\)

Subsection 4.1.1 A few examples

We will now introduce the definition of eigenvalues and eigenvectors and then look at a few simple examples.

Definition 4.1.1.

Given a square \(n\times n\) matrix \(A\text{,}\) we say that a nonzero vector \(\vvec\) is an eigenvector of \(A\) if there is a scalar \(\lambda\) such that
\begin{equation*} A\vvec = \lambda \vvec\text{.} \end{equation*}
The scalar \(\lambda\) is called the eigenvalue associated to the eigenvector \(\vvec\text{.}\)
At first glance, there is a lot going on in this definition so let’s look at an example.

Example 4.1.2.

Consider the matrix \(A = \left[\begin{array}{rr} 7 \amp 6 \\ 6 \amp -2 \\ \end{array}\right]\) and the vector \(\vvec=\twovec{2}{1}\text{.}\) We find that
\begin{equation*} A\vvec = \left[\begin{array}{rr} 7 \amp 6 \\ 6 \amp -2 \\ \end{array}\right] \twovec{2}{1} = \twovec{20}{10} =10\twovec{2}{1} =10\vvec\text{.} \end{equation*}
In other words, \(A\vvec = 10\vvec\text{,}\) which says that \(\vvec\) is an eigenvector of the matrix \(A\) with associated eigenvalue \(\lambda = 10\text{.}\)
Similarly, if \(\wvec = \twovec{-1}{2}\text{,}\) we find that
\begin{equation*} A\wvec = \left[\begin{array}{rr} 7 \amp 6 \\ 6 \amp -2 \\ \end{array}\right] \twovec{-1}{2} = \twovec{5}{-10} =-5\twovec{-1}{2} =-5\wvec\text{.} \end{equation*}
Here again, we have \(A\wvec = -5\wvec\) showing that \(\wvec\) is an eigenvector of \(A\) with associated eigenvalue \(\lambda=-5\text{.}\)
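These computations are easy to check numerically. Here is a minimal sketch using NumPy (the choice of NumPy is ours; the example itself does not depend on it):

```python
import numpy as np

A = np.array([[7, 6],
              [6, -2]])
v = np.array([2, 1])   # the vector v = (2, 1)
w = np.array([-1, 2])  # the vector w = (-1, 2)

# A v = (20, 10) = 10 v, so v is an eigenvector with eigenvalue 10.
assert np.array_equal(A @ v, 10 * v)

# A w = (5, -10) = -5 w, so w is an eigenvector with eigenvalue -5.
assert np.array_equal(A @ w, -5 * w)
```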

Activity 4.1.2.

This definition has an important geometric interpretation that we will investigate here.
  1. Suppose that \(\vvec\) is a nonzero vector and that \(\lambda\) is a scalar. What is the geometric relationship between \(\vvec\) and \(\lambda\vvec\text{?}\)
  2. Let’s now consider the eigenvector condition: \(A\vvec = \lambda\vvec\text{.}\) Here we have two vectors, \(\vvec\) and \(A\vvec\text{.}\) If \(A\vvec = \lambda\vvec\text{,}\) what is the geometric relationship between \(\vvec\) and \(A\vvec\text{?}\)
  3. Instructions.
    The sliders in the diagram below allow you to choose a matrix \(A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\text{.}\) The vector \(\vvec\text{,}\) shaded red, may be moved by clicking and dragging its head. The vector \(A\vvec\) is then shown in outline.
    Figure 4.1.3. A geometric interpretation of the eigenvalue-eigenvector condition \(A\vvec = \lambda\vvec\text{.}\)
    Choose the matrix \(A= \left[\begin{array}{rr} 1\amp 2 \\ 2\amp 1 \\ \end{array}\right] \text{.}\) Move the vector \(\vvec\) so that the eigenvector condition holds. What is the eigenvector \(\vvec\) and what is the associated eigenvalue?
  4. By algebraically computing \(A\vvec\text{,}\) verify that the eigenvector condition holds for the vector \(\vvec\) that you found.
  5. If you multiply the eigenvector \(\vvec\) that you found by \(2\text{,}\) do you still have an eigenvector? If so, what is the associated eigenvalue?
  6. Are you able to find another eigenvector \(\vvec\) that is not a scalar multiple of the first one that you found? If so, what is the eigenvector and what is the associated eigenvalue?
  7. Now consider the matrix \(A = \left[\begin{array}{rr} 2 \amp 1 \\ 0 \amp 2 \\ \end{array}\right] \text{.}\) Use the diagram to describe any eigenvectors and associated eigenvalues.
  8. Finally, consider the matrix \(A = \left[\begin{array}{rr} 0 \amp -1 \\ 1 \amp 0 \\ \end{array}\right] \text{.}\) Use the diagram to describe any eigenvectors and associated eigenvalues. What geometric transformation does this matrix perform on vectors? How does this explain the presence of any eigenvectors?
Let’s consider the ideas we saw in the activity in some more depth. To be an eigenvector of \(A\text{,}\) the vector \(\vvec\) must satisfy \(A\vvec = \lambda\vvec\) for some scalar \(\lambda\text{.}\) This means that \(\vvec\) and \(A\vvec\) are scalar multiples of each other so they must lie on the same line.
Consider now the matrix \(A = \left[\begin{array}{rr} 1 \amp 2 \\ 2 \amp 1 \\ \end{array}\right] \text{.}\) On the left of Figure 4.1.4, we see that \(\vvec=\twovec{1}{0}\) is not an eigenvector of \(A\) since the vectors \(\vvec\) and \(A\vvec\) do not lie on the same line. On the right, however, we see that \(\vvec=\twovec{1}{1}\) is an eigenvector. In fact, \(A\vvec\) is obtained from \(\vvec\) by stretching \(\vvec\) by a factor of \(3\text{.}\) Therefore, \(\vvec\) is an eigenvector of \(A\) with eigenvalue \(\lambda = 3\text{.}\)
Figure 4.1.4. On the left, the vector \(\vvec\) is not an eigenvector. On the right, the vector \(\vvec\) is an eigenvector with eigenvalue \(\lambda = 3\text{.}\)
It is not difficult to see that any multiple of \(\twovec{1}{1}\) is also an eigenvector of \(A\) with eigenvalue \(\lambda = 3\text{.}\) Indeed, we will see later that all the eigenvectors associated to a given eigenvalue form a subspace of \(\real^n\text{.}\)
In Figure 4.1.5, we see that \(\vvec=\twovec{-1}{1}\) is also an eigenvector with eigenvalue \(\lambda =-1\text{.}\)
Figure 4.1.5. Here we see another eigenvector \(\vvec\) with eigenvalue \(\lambda = -1\text{.}\)
The interactive diagram we used in the activity is meant to convey the fact that the eigenvectors of a matrix \(A\) are special vectors. Most of the time, the vectors \(\vvec\) and \(A\vvec\) appear visually unrelated. For certain vectors, however, \(\vvec\) and \(A\vvec\) line up with one another. Something important is going on when that happens, so we call attention to these vectors by calling them eigenvectors. For an eigenvector, the operation of multiplying by \(A\) reduces to the much simpler operation of scalar multiplication by \(\lambda\text{.}\) Eigenvectors are important precisely because it is extremely convenient to be able to replace matrix multiplication by scalar multiplication.

Subsection 4.1.2 The usefulness of eigenvalues and eigenvectors

In the next section, we will introduce an algebraic technique for finding the eigenvalues and eigenvectors of a matrix. Before doing that, however, we would like to discuss why eigenvalues and eigenvectors are so useful.
Let’s continue looking at the example \(A = \left[\begin{array}{rr} 1 \amp 2 \\ 2 \amp 1 \\ \end{array}\right] \text{.}\) We have seen that \(\vvec_1 = \twovec{1}{1}\) is an eigenvector with eigenvalue \(\lambda=3\) and \(\vvec_2=\twovec{-1}{1}\) is an eigenvector with eigenvalue \(\lambda = -1\text{.}\) This means that \(A\vvec_1 = 3\vvec_1\) and \(A\vvec_2=-\vvec_2\text{.}\) By the linearity of matrix multiplication, we can determine what happens when we multiply a linear combination of \(\vvec_1\) and \(\vvec_2\) by \(A\text{:}\)
\begin{equation*} A(c_1\vvec_1 + c_2\vvec_2) = 3c_1\vvec_1 - c_2\vvec_2\text{.} \end{equation*}
For instance, if we consider the vector \(\xvec=\vvec_1-2\vvec_2\text{,}\) we find that
\begin{equation*} \begin{aligned} A\xvec \amp{}={} A(\vvec_1 - 2\vvec_2) \\ \amp{}={} 3\vvec_1 + 2\vvec_2 \end{aligned} \end{equation*}
as seen in the figure.
In other words, multiplying by \(A\) has the effect of stretching a vector \(\xvec\) in the \(\vvec_1\) direction by a factor of \(3\) and flipping \(\xvec\) in the \(\vvec_2\) direction.
We can draw an analogy with the more familiar example of the diagonal matrix \(D=\left[\begin{array}{rr} 3 \amp 0 \\ 0 \amp -1 \\ \end{array}\right] \text{.}\) As we have seen, the matrix transformation defined by \(D\) combines a horizontal stretching by a factor of 3 with a reflection across the horizontal axis, as is illustrated in Figure 4.1.6.
Figure 4.1.6. The diagonal matrix \(D\) stretches vectors horizontally by a factor of \(3\) and flips vectors vertically.
The matrix \(A=\left[\begin{array}{rr} 1 \amp 2 \\ 2 \amp 1 \\ \end{array}\right]\) has a similar effect when viewed in the basis defined by the eigenvectors \(\vvec_1\) and \(\vvec_2\text{,}\) as seen in Figure 4.1.7.
Figure 4.1.7. The matrix \(A\) has the same geometric effect as the diagonal matrix \(D\) when expressed in the coordinate system defined by the basis of eigenvectors.
In a sense that will be made precise later, having a set of eigenvectors of \(A\) that forms a basis of \(\real^2\) enables us to think of \(A\) as being equivalent to a diagonal matrix \(D\text{.}\) Of course, as the other examples in the previous activity show, it may not always be possible to form a basis from the eigenvectors of a matrix. For example, the only eigenvectors of the matrix \(\left[\begin{array}{rr} 2 \amp 1 \\ 0 \amp 2 \end{array}\right]\text{,}\) which represents a shear, have the form \(\twovec{x}{0}\text{.}\) In this example, we are not able to create a basis for \(\real^2\) consisting of eigenvectors of the matrix. This is also true for the matrix \(\left[\begin{array}{rr} 0 \amp -1 \\ 1 \amp 0 \\ \end{array}\right] \text{,}\) which represents a \(90^\circ\) rotation.
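These two exceptional cases, the shear and the rotation, can also be examined numerically. In the sketch below (again using NumPy as a convenience), `eig` reports the repeated eigenvalue \(2\) for the shear, but every eigenvector it returns is numerically a multiple of \((1,0)\text{,}\) so the eigenvectors do not span \(\real^2\text{;}\) for the rotation, it reports the complex pair \(\pm i\text{,}\) reflecting the fact that no nonzero real vector keeps its direction under a \(90^\circ\) rotation:

```python
import numpy as np

shear = np.array([[2, 1],
                  [0, 2]])
rotation = np.array([[0, -1],
                     [1, 0]])

# The shear has the repeated eigenvalue 2, and every reported
# eigenvector is (numerically) a multiple of (1, 0).
vals, vecs = np.linalg.eig(shear)
assert np.allclose(vals, [2, 2])
assert np.allclose(vecs[1, :], [0, 0], atol=1e-12)  # second components ~ 0

# The 90-degree rotation has no real eigenvalues; eig returns the
# complex pair i and -i instead.
vals, _ = np.linalg.eig(rotation)
assert np.allclose(np.sort_complex(vals), [-1j, 1j])
```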

Activity 4.1.3.

Let’s consider an example that illustrates how we can put these ideas to use.
Suppose that we work for a car rental company that has two locations, \(P\) and \(Q\text{.}\) When a customer rents a car at one location, they have the option to return it to either location at the end of the day. After doing some market research, we determine:
  • 80% of the cars rented at location \(P\) are returned to \(P\) and 20% are returned to \(Q\text{.}\)
  • 40% of the cars rented at location \(Q\) are returned to \(Q\) and 60% are returned to \(P\text{.}\)
  1. Suppose that there are 1000 cars at location \(P\) and no cars at location \(Q\) on Monday morning. How many cars are there at locations \(P\) and \(Q\) at the end of the day on Monday?
  2. How many cars are at locations \(P\) and \(Q\) at the end of the day on Tuesday?
  3. If we let \(P_k\) and \(Q_k\) be the number of cars at locations \(P\) and \(Q\text{,}\) respectively, at the end of day \(k\text{,}\) we then have
    \begin{equation*} \begin{aligned} P_{k+1}\amp {}={} 0.8P_k + 0.6Q_k \\ Q_{k+1}\amp {}={} 0.2P_k + 0.4Q_k\text{.} \\ \end{aligned} \end{equation*}
    We can write the vector \(\xvec_k = \twovec{P_k}{Q_k}\) to reflect the number of cars at the two locations at the end of day \(k\text{,}\) which says that
    \begin{equation*} \xvec_{k+1} = \left[\begin{array}{rr} 0.8 \amp 0.6 \\ 0.2 \amp 0.4 \\ \end{array}\right] \xvec_k \end{equation*}
    or \(\xvec_{k+1} = A\xvec_k\) where \(A=\left[\begin{array}{rr}0.8 \amp 0.6 \\ 0.2 \amp 0.4 \end{array}\right]\text{.}\)
    Suppose that
    \begin{equation*} \vvec_1 = \twovec{3}{1}, \qquad \vvec_2 = \twovec{-1}{1}\text{.} \end{equation*}
    Compute \(A\vvec_1\) and \(A\vvec_2\) to demonstrate that \(\vvec_1\) and \(\vvec_2\) are eigenvectors of \(A\text{.}\) What are the associated eigenvalues \(\lambda_1\) and \(\lambda_2\text{?}\)
  4. We said that 1000 cars are initially at location \(P\) and none at location \(Q\text{.}\) This means that the initial vector describing the number of cars is \(\xvec_0 = \ctwovec{1000}{0}\text{.}\) Write \(\xvec_0\) as a linear combination of \(\vvec_1\) and \(\vvec_2\text{.}\)
  5. Remember that \(\vvec_1\) and \(\vvec_2\) are eigenvectors of \(A\text{.}\) Use the linearity of matrix multiplication to write the vector \(\xvec_1 = A\xvec_0\text{,}\) describing the number of cars at the two locations at the end of the first day, as a linear combination of \(\vvec_1\) and \(\vvec_2\text{.}\)
  6. Write the vector \(\xvec_2 = A\xvec_1\) as a linear combination of \(\vvec_1\) and \(\vvec_2\text{.}\) Then write the next few vectors as linear combinations of \(\vvec_1\) and \(\vvec_2\text{:}\)
    1. \(\xvec_3 = A\xvec_2\text{.}\)
    2. \(\xvec_4 = A\xvec_3\text{.}\)
    3. \(\xvec_5 = A\xvec_4\text{.}\)
    4. \(\xvec_6 = A\xvec_5\text{.}\)
  7. What will happen to the number of cars at the two locations after a very long time? Explain how writing \(\xvec_0\) as a linear combination of eigenvectors helps you determine the long-term behavior.
This activity is important and motivates much of our work with eigenvalues and eigenvectors so it’s worth reviewing to make sure we have a clear understanding of the concepts.
First, we compute
\begin{equation*} \begin{aligned} A\vvec_1 = \left[\begin{array}{rr} 0.8 \amp 0.6 \\ 0.2 \amp 0.4 \\ \end{array}\right] \twovec{3}{1} \amp {}={} \twovec{3}{1} = 1\vvec_1 \\ \\ A\vvec_2 = \left[\begin{array}{rr} 0.8 \amp 0.6 \\ 0.2 \amp 0.4 \\ \end{array}\right] \twovec{-1}{1} \amp {}={} \twovec{-0.2}{0.2} = 0.2\vvec_2\text{.} \\ \end{aligned} \end{equation*}
This shows that \(\vvec_1\) is an eigenvector of \(A\) with eigenvalue \(\lambda_1 = 1\) and \(\vvec_2\) is an eigenvector of \(A\) with eigenvalue \(\lambda_2=0.2\text{.}\)
By the linearity of matrix multiplication, we have
\begin{equation*} A(c_1\vvec_1 + c_2\vvec_2) = c_1\vvec_1 + 0.2c_2\vvec_2\text{.} \end{equation*}
Therefore, we will write the vector describing the initial distribution of cars \(\xvec_0=\ctwovec{1000}{0}\) as a linear combination of \(\vvec_1\) and \(\vvec_2\text{;}\) that is, \(\xvec_0 = c_1\vvec_1 + c_2 \vvec_2\text{.}\) To do so, we form the augmented matrix and row reduce:
\begin{equation*} \left[\begin{array}{rr|r} \vvec_1 \amp \vvec_2 \amp \xvec_0 \end{array}\right] = \left[\begin{array}{rr|r} 3 \amp -1 \amp 1000 \\ 1 \amp 1 \amp 0 \\ \end{array}\right] \sim \left[\begin{array}{rr|r} 1 \amp 0 \amp 250 \\ 0 \amp 1 \amp -250 \\ \end{array}\right]\text{.} \end{equation*}
Therefore, \(\xvec_0 = 250\vvec_1 -250\vvec_2\text{.}\)
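This row reduction can be delegated to a linear solver. The sketch below (our choice of NumPy again) solves \(V\,\cvec = \xvec_0\text{,}\) where the columns of \(V\) are the eigenvectors:

```python
import numpy as np

# Columns of V are the eigenvectors v1 = (3, 1) and v2 = (-1, 1).
V = np.array([[3, -1],
              [1, 1]])
x0 = np.array([1000, 0])

# Solve V c = x0 for the coefficients in x0 = c1 v1 + c2 v2.
c = np.linalg.solve(V, x0)
assert np.allclose(c, [250, -250])  # x0 = 250 v1 - 250 v2
```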
To determine the distribution of cars on subsequent days, we will repeatedly multiply by \(A\text{.}\) We find that
\begin{equation*} \begin{aligned} \xvec_1 = A\xvec_0 \amp {}={} A(250\vvec_1 - 250\vvec_2) = 250\vvec_1 - (0.2)250\vvec_2 \\ \xvec_2 = A\xvec_1 \amp {}={} A(250\vvec_1 - (0.2)250\vvec_2) = 250\vvec_1 - (0.2)^2250\vvec_2 \\ \xvec_3 = A\xvec_2 \amp {}={} A(250\vvec_1 - (0.2)^2250\vvec_2) = 250\vvec_1 - (0.2)^3250\vvec_2 \\ \xvec_4 = A\xvec_3 \amp {}={} A(250\vvec_1 - (0.2)^3250\vvec_2) = 250\vvec_1 - (0.2)^4250\vvec_2 \\ \xvec_5 = A\xvec_4 \amp {}={} A(250\vvec_1 - (0.2)^4250\vvec_2) = 250\vvec_1 - (0.2)^5250\vvec_2 \\ \end{aligned}\text{.} \end{equation*}
In particular, this shows us that
\begin{equation*} \xvec_5 = 250\vvec_1 - (0.2)^5250\vvec_2 = \twovec{250\cdot 3 - (0.2)^5250\cdot(-1)} {250\cdot 1 - (0.2)^5250\cdot 1} = \twovec{750.08}{249.92}\text{.} \end{equation*}
Taking notice of the pattern, we may write
\begin{equation*} \xvec_k = 250\vvec_1 - (0.2)^k250\vvec_2\text{.} \end{equation*}
Multiplying a number by \(0.2\) is the same as taking 20% of that number. As each day goes by, the second term is multiplied by \(0.2\) so the coefficient of \(\vvec_2\) in the expression for \(\xvec_k\) will eventually become extremely small. We therefore see that the distribution of cars will stabilize at \(\xvec=250\vvec_1=\twovec{750}{250}\text{.}\)
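We can confirm this stabilization by simply iterating \(\xvec_{k+1} = A\xvec_k\) directly, as in the following sketch (NumPy is our choice here; the point is only that the iterates settle at \((750, 250)\)):

```python
import numpy as np

A = np.array([[0.8, 0.6],
              [0.2, 0.4]])
x = np.array([1000.0, 0.0])  # 1000 cars at P, none at Q

# Repeatedly apply A; the (0.2)^k term dies off quickly, so the
# distribution settles at 250 * v1 = (750, 250).
for _ in range(20):
    x = A @ x

assert np.allclose(x, [750, 250])
```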
Notice how our understanding of the eigenvectors of the matrix allows us to replace matrix multiplication with the simpler operation of scalar multiplication. As a result, we can look far into the future without having to repeatedly perform matrix multiplication.
Furthermore, notice how this example relies on the fact that we can express the initial vector \(\xvec_0\) as a linear combination of eigenvectors. For this reason, we would like, when given an \(n\times n\) matrix, to be able to create a basis of \(\real^n\) that consists of its eigenvectors. We will frequently return to this question in later sections.

Question 4.1.8.

If \(A\) is an \(n\times n\) matrix, can we form a basis of \(\real^n\) consisting of eigenvectors of \(A\text{?}\)

Subsection 4.1.3 Summary

We defined an eigenvector of a square matrix \(A\) to be a nonzero vector \(\vvec\) such that \(A\vvec = \lambda\vvec\) for some scalar \(\lambda\text{,}\) which is called the eigenvalue associated to \(\vvec\text{.}\)
  • If \(\vvec\) is an eigenvector, then matrix multiplication by \(A\) reduces to the simpler operation of scalar multiplication by \(\lambda\text{.}\)
  • Scalar multiples of an eigenvector are also eigenvectors. In fact, we will see that the eigenvectors associated to an eigenvalue \(\lambda\) form a subspace.
  • If we can form a basis for \(\real^n\) consisting of eigenvectors of \(A\text{,}\) then \(A\) is, in some sense, equivalent to a diagonal matrix.
  • Rewriting a vector \(\xvec\) as a linear combination of eigenvectors of \(A\) simplifies the process of repeatedly multiplying \(\xvec\) by \(A\text{.}\)

Exercises 4.1.4 Exercises

1.

Consider the matrix and vectors
\begin{equation*} A = \left[\begin{array}{rr} 8 \amp -10 \\ 5 \amp -7 \\ \end{array}\right],\qquad \vvec_1=\twovec{2}{1},~~~ \vvec_2=\twovec{1}{1}\text{.} \end{equation*}
  1. Show that \(\vvec_1\) and \(\vvec_2\) are eigenvectors of \(A\) and find their associated eigenvalues.
  2. Express the vector \(\xvec = \twovec{-4}{-1}\) as a linear combination of \(\vvec_1\) and \(\vvec_2\text{.}\)
  3. Use this expression to compute \(A\xvec\text{,}\) \(A^2\xvec\text{,}\) and \(A^{-1}\xvec\) as a linear combination of eigenvectors.

2.

Consider the matrix and vectors
\begin{equation*} A = \left[\begin{array}{rrr} -5 \amp -2 \amp 2 \\ 24 \amp 14 \amp -10 \\ 21 \amp 14 \amp -10 \\ \end{array}\right],\qquad \vvec_1=\threevec{1}{-2}{-1}, \vvec_2=\threevec{2}{-3}{0}, \vvec_3=\threevec{0}{-1}{-1} \end{equation*}
  1. Show that the vectors \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) are eigenvectors of \(A\) and find their associated eigenvalues.
  2. Express the vector \(\xvec = \threevec{0}{-3}{-4}\) as a linear combination of the eigenvectors.
  3. Use this expression to compute \(A\xvec\text{,}\) \(A^2\xvec\text{,}\) and \(A^{-1}\xvec\) as a linear combination of eigenvectors.

3.

Suppose that \(A\) is an \(n\times n\) matrix.
  1. Explain why \(\lambda = 0\) is an eigenvalue of \(A\) if and only if there is a nonzero solution to the homogeneous equation \(A\xvec = 0\text{.}\)
  2. Explain why \(A\) is not invertible if and only if \(\lambda=0\) is an eigenvalue.
  3. If \(\vvec\) is an eigenvector of \(A\) having associated eigenvalue \(\lambda\text{,}\) explain why \(\vvec\) is also an eigenvector of \(A^2\) with associated eigenvalue \(\lambda^2\text{.}\)
  4. If \(A\) is invertible and \(\vvec\) is eigenvector of \(A\) having associated eigenvalue \(\lambda\text{,}\) explain why \(\vvec\) is also an eigenvector of \(A^{-1}\) with associated eigenvalue \(\lambda^{-1}\text{.}\)
  5. The matrix \(A=\left[\begin{array}{rr} 1 \amp 2 \\ 2 \amp 1 \\ \end{array}\right]\) has eigenvectors \(\vvec_1=\twovec{1}{1}\) and \(\vvec_2=\twovec{-1}{1}\) and associated eigenvalues \(\lambda_1 = 3\) and \(\lambda_2=-1\text{.}\) What are some eigenvectors and associated eigenvalues for \(A^5\text{?}\)

4.

Suppose that \(A\) is a matrix with eigenvectors \(\vvec_1\) and \(\vvec_2\) and eigenvalues \(\lambda_1 = -1\) and \(\lambda_2=2\) as shown in Figure 4.1.9.
Figure 4.1.9. The vectors \(\vvec_1\) and \(\vvec_2\) are eigenvectors of \(A\text{.}\)
Sketch the vectors \(A\xvec\text{,}\) \(A^2\xvec\text{,}\) and \(A^{-1}\xvec\text{.}\)

5.

For the following matrices, find the eigenvectors and associated eigenvalues by thinking geometrically about the corresponding matrix transformation.
  1. \(\left[\begin{array}{rr} 3 \amp 0 \\ 0 \amp 3 \\ \end{array}\right] \text{.}\)
  2. \(\left[\begin{array}{rr} -2 \amp 0 \\ 0 \amp 4 \\ \end{array}\right] \text{.}\)
  3. What are the eigenvectors and associated eigenvalues of the identity matrix?
  4. What are the eigenvectors and associated eigenvalues of a diagonal matrix with distinct diagonal entries?

6.

Suppose that \(A\) is a \(2\times2\) matrix having eigenvectors
\begin{equation*} \vvec_1=\twovec{2}{1}, \qquad \vvec_2=\twovec{-1}{2} \end{equation*}
and associated eigenvalues \(\lambda_1=2\) and \(\lambda_2=-3\text{.}\)
  1. If \(\xvec=\twovec{5}{0}\text{,}\) find the vector \(A^4\xvec\text{.}\)
  2. Find the vectors \(A\twovec{1}{0}\) and \(A\twovec{0}{1}\text{.}\)
  3. What is the matrix \(A\text{?}\)

7.

Determine whether the following statements are true or false and provide a justification for your response.
  1. The eigenvalues of a diagonal matrix are equal to the entries on the diagonal.
  2. If \(A\vvec=\lambda\vvec\text{,}\) then \(A^2\vvec=\lambda\vvec\) as well.
  3. Every vector is an eigenvector of the identity matrix.
  4. If \(\lambda=0\) is an eigenvalue of \(A\text{,}\) then \(A\) is invertible.
  5. For every \(n\times n\) matrix \(A\text{,}\) it is possible to find a basis of \(\real^n\) consisting of eigenvectors of \(A\text{.}\)

8.

Suppose that \(A\) is an \(n\times n\) matrix.
  1. Assuming that \(\vvec\) is an eigenvector of \(A\) whose associated eigenvalue is nonzero, explain why \(\vvec\) is in \(\col(A)\text{.}\)
  2. Assuming that \(\vvec\) is an eigenvector of \(A\) whose associated eigenvalue is zero, explain why \(\vvec\) is in \(\nul(A)\text{.}\)
  3. Consider the two special matrices below and find their eigenvectors and associated eigenvalues.
    \begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 1 \amp 1 \\ 1 \amp 1 \amp 1 \\ 1 \amp 1 \amp 1 \\ \end{array}\right],~~~ B = \left[\begin{array}{rrr} 1 \amp 1 \amp 1 \\ 2 \amp 2 \amp 2 \\ 3 \amp 3 \amp 3 \\ \end{array}\right]. \end{equation*}

9.

For each of the following matrix transformations, describe the eigenvalues and eigenvectors of the corresponding matrix \(A\text{.}\)
  1. A reflection in \(\real^2\) in the line \(y=x\text{.}\)
  2. A \(180^\circ\) rotation in \(\real^2\text{.}\)
  3. A \(180^\circ\) rotation in \(\real^3\) about the \(y\)-axis.
  4. A \(90^\circ\) rotation in \(\real^3\) about the \(x\)-axis.

10.

Suppose we have two species, \(P\) and \(Q\text{,}\) where species \(P\) preys on \(Q\text{.}\) Their populations, in millions, in year \(k\) are denoted by \(P_k\) and \(Q_k\) and satisfy
\begin{equation*} \begin{aligned} P_{k+1} \amp {}={} 0.8P_k + 0.2Q_k \\ Q_{k+1} \amp {}={} -0.3P_k + 1.5Q_k \\ \end{aligned}\text{.} \end{equation*}
We will keep track of the populations in year \(k\) using the vector \(\xvec_k=\twovec{P_k}{Q_k}\) so that
\begin{equation*} \xvec_{k+1} = A\xvec_k = \left[\begin{array}{rr} 0.8 \amp 0.2 \\ -0.3 \amp 1.5 \\ \end{array}\right] \xvec_k\text{.} \end{equation*}
  1. Show that \(\vvec_1=\twovec{1}{3}\) and \(\vvec_2=\twovec{2}{1}\) are eigenvectors of \(A\) and find their associated eigenvalues.
  2. Suppose that the initial populations are described by the vector \(\xvec_0 = \twovec{38}{44}\text{.}\) Express \(\xvec_0\) as a linear combination of \(\vvec_1\) and \(\vvec_2\text{.}\)
  3. Find the populations after one year, two years, and three years by writing the vectors \(\xvec_1\text{,}\) \(\xvec_2\text{,}\) and \(\xvec_3\) as linear combinations of \(\vvec_1\) and \(\vvec_2\text{.}\)
  4. What is the general form for \(\xvec_k\text{?}\)
  5. After a very long time, what is the ratio of \(P_k\) to \(Q_k\text{?}\)