Section 3.1 Invertibility
Up to this point, we have used the Gaussian elimination algorithm to find solutions to linear systems. We now investigate another way to find solutions to the equation \(A\xvec=\bvec\) when the matrix \(A\) has the same number of rows and columns. To get started, let's look at some familiar examples.
Preview Activity 3.1.1.
Explain how you would solve the equation \(3x = 5\) using multiplication rather than division.
Find the \(2\times2\) matrix \(A\) that rotates vectors counterclockwise by \(90^\circ\text{.}\)
Find the \(2\times2\) matrix \(B\) that rotates vectors clockwise by \(90^\circ\text{.}\)
What do you expect the product \(AB\) to be? Explain the reasoning behind your expectation and then compute \(AB\) to verify it.
Solve the equation \(A\xvec = \twovec{3}{2}\) using Gaussian elimination.
Explain why your solution may also be found by computing \(\xvec = B\twovec{3}{2}\text{.}\)
Subsection 3.1.1 Invertible matrices
The preview activity began with a familiar type of equation, \(3x = 5\text{,}\) and asked for a strategy to solve it. One possible response is to divide both sides by 3. Instead, let's rephrase this as multiplying by \(3^{-1} = \frac 13\text{,}\) the multiplicative inverse of 3.
Now that we are interested in solving equations of the form \(A\xvec = \bvec\text{,}\) we might try to find a similar approach. Is there a matrix \(A^{-1}\) that plays the role of the multiplicative inverse of \(A\text{?}\) Of course, the real number \(0\) does not have a multiplicative inverse, so we probably shouldn't expect every matrix to have a multiplicative inverse. We will see, however, that many do.
Definition 3.1.1.
An \(n\times n\) matrix \(A\) is called invertible if there is a matrix \(B\) such that \(AB = I_n\text{,}\) where \(I_n\) is the \(n\times n\) identity matrix. The matrix \(B\) is called the inverse of \(A\) and denoted \(A^{-1}\text{.}\)
Notice that we only define invertibility for matrices that have the same number of rows and columns, in which case we say that the matrix is square.
Example 3.1.2.
Suppose that \(A\) is the matrix that rotates two-dimensional vectors counterclockwise by \(90^\circ\) and that \(B\) rotates vectors clockwise by \(90^\circ\text{.}\) We have
We can check that
which shows that \(A\) is invertible and that \(A^{-1}=B\text{.}\)
Notice that if we multiply the matrices in the opposite order, we find that \(BA=I\text{,}\) which says that \(B\) is also invertible and that \(B^{-1} = A\text{.}\) In other words, \(A\) and \(B\) are inverses of each other.
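As a quick numerical check of this example (a sketch using NumPy, which the text itself does not use), we can form the two rotation matrices and verify that they multiply to the identity in either order:

```python
import numpy as np

# A rotates vectors counterclockwise by 90 degrees
A = np.array([[0, -1],
              [1,  0]])

# B rotates vectors clockwise by 90 degrees
B = np.array([[ 0, 1],
              [-1, 0]])

# Rotating one way and then the other leaves every vector unchanged,
# so both products are the identity matrix
print(A @ B)
print(B @ A)
```

Since \(AB = BA = I\text{,}\) each matrix is the inverse of the other, matching the conclusion of the example.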
Activity 3.1.2.
This activity demonstrates a procedure for finding the inverse of a matrix \(A\text{.}\)

Suppose that \(A = \begin{bmatrix} 3 \amp 2 \\ 1 \amp 1 \\ \end{bmatrix} \text{.}\) To find an inverse \(B\text{,}\) we write its columns as \(B = \begin{bmatrix}\bvec_1 \amp \bvec_2 \end{bmatrix}\) and require that
\begin{equation*} \begin{aligned} AB \amp = I \\ \begin{bmatrix} A\bvec_1 \amp A\bvec_2 \end{bmatrix} \amp = \begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \\ \end{bmatrix}. \end{aligned} \end{equation*}In other words, we can find the columns of \(B\) by solving the equations
\begin{equation*} A\bvec_1 = \twovec10,~~~ A\bvec_2 = \twovec01. \end{equation*}Solve these equations to find \(\bvec_1\) and \(\bvec_2\text{.}\) Then write the matrix \(B\) and verify that \(AB=I\text{.}\) This is enough for us to conclude that \(B\) is the inverse of \(A\text{.}\)
Find the product \(BA\) and explain why we now know that \(B\) is invertible and \(B^{-1}=A\text{.}\)
What happens when you try to find the inverse of \(C = \begin{bmatrix} 2 \amp 1 \\ 4 \amp 2 \\ \end{bmatrix}\text{?}\)

We now develop a condition that must be satisfied by an invertible matrix. Suppose that \(A\) is an invertible \(n\times n\) matrix with inverse \(B\) and suppose that \(\bvec\) is any \(n\)-dimensional vector. Since \(AB=I\text{,}\) we have
\begin{equation*} A(B\bvec) = (AB)\bvec = I\bvec = \bvec. \end{equation*}This says that the equation \(A\xvec = \bvec\) is consistent and that \(\xvec=B\bvec\) is a solution.
Since we know that \(A\xvec = \bvec\) is consistent for any vector \(\bvec\text{,}\) what does this say about the span of the columns of \(A\text{?}\)
Since \(A\) is a square matrix, what does this say about the pivot positions of \(A\text{?}\) What is the reduced row echelon form of \(A\text{?}\)

In this activity, we have studied the matrices
\begin{equation*} A = \begin{bmatrix} 3 \amp 2 \\ 1 \amp 1 \\ \end{bmatrix},~~~ C = \begin{bmatrix} 2 \amp 1 \\ 4 \amp 2 \\ \end{bmatrix}. \end{equation*}Find the reduced row echelon form of each and explain how those forms enable us to conclude that one matrix is invertible and the other is not.
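To see this computation mechanically, here is a short sketch using SymPy (a tool assumed here beyond the text) that finds the reduced row echelon form of each matrix:

```python
from sympy import Matrix, eye

A = Matrix([[3, 2], [1, 1]])
C = Matrix([[2, 1], [4, 2]])

# rref() returns the reduced row echelon form and the pivot columns
A_rref, A_pivots = A.rref()
C_rref, C_pivots = C.rref()

print(A_rref)   # the identity, so A is invertible
print(C_rref)   # has a row of zeros, so C is not invertible
```

Because \(A\) row reduces to the identity, it has a pivot in every row and column; \(C\) has only one pivot, so it cannot be invertible.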
Example 3.1.3.
We can reformulate this procedure for finding the inverse of a matrix. For the sake of convenience, suppose that \(A\) is a \(2\times2\) invertible matrix with inverse \(B=\begin{bmatrix} \bvec_1 \amp \bvec_2 \end{bmatrix}\text{.}\) Rather than solving the equations
separately, we can solve them at the same time by augmenting \(A\) by both vectors \(\twovec10\) and \(\twovec01\) and finding the reduced row echelon form.
For example, if \(A = \begin{bmatrix} 1 \amp 2 \\ 1 \amp 1 \\ \end{bmatrix}\text{,}\) we form
This shows that the matrix \(B = \begin{bmatrix} -1 \amp 2 \\ 1 \amp -1 \\ \end{bmatrix}\) is the inverse of \(A\text{.}\)
In other words, beginning with \(A\text{,}\) we augment by the identity and find the reduced row echelon form to determine \(A^{-1}\text{:}\)
In fact, this reformulation will always work. Suppose that \(A\) is an invertible \(n\times n\) matrix with inverse \(B\text{.}\) Suppose furthermore that \(\bvec\) is any \(n\)-dimensional vector and consider the equation \(A\xvec = \bvec\text{.}\) We know that \(\xvec=B\bvec\) is a solution because \(A(B\bvec) = (AB)\bvec = I\bvec = \bvec.\)
Proposition 3.1.4.
If \(A\) is an invertible matrix with inverse \(B\text{,}\) then any equation \(A\xvec = \bvec\) is consistent and \(\xvec = B\bvec\) is a solution. In other words, the solution to \(A\xvec = \bvec\) is \(\xvec = A^{-1}\bvec\text{.}\)
Notice that this is similar to saying that the solution to \(3x=5\) is \(x = \frac13\cdot 5\text{,}\) as we saw in the preview activity.
Now since \(A\xvec=\bvec\) is consistent for every vector \(\bvec\text{,}\) the columns of \(A\) must span \(\real^n\) so there is a pivot position in every row. Since \(A\) is also square, this means that the reduced row echelon form of \(A\) is the identity matrix.
Proposition 3.1.5.
The matrix \(A\) is invertible if and only if the reduced row echelon form of \(A\) is the identity matrix: \(A\sim I\text{.}\) In addition, we can find the inverse by augmenting \(A\) by the identity and finding the reduced row echelon form:
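The augmentation procedure in this proposition can be sketched in SymPy (an assumption; the text itself uses Sage for computation) as follows:

```python
from sympy import Matrix, eye

A = Matrix([[3, 2], [1, 1]])

# Augment A by the identity and row reduce: [A | I] ~ [I | A^{-1}]
augmented = A.row_join(eye(2))
reduced, pivots = augmented.rref()

A_inv = reduced[:, 2:]   # the right half of the reduced matrix
print(A_inv)
print(A * A_inv)         # the identity, confirming the inverse
```

The left half of the reduced matrix is the identity, which is exactly the condition \(A\sim I\) for invertibility.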
You may have noticed that Proposition 3.1.4 says that the solution to the equation \(A\xvec = \bvec\) is \(\xvec = A^{-1}\bvec\text{.}\) Indeed, we know that this equation has a unique solution because \(A\) has a pivot position in every column.
It is important to remember that the product of two matrices depends on the order in which they are multiplied. That is, if \(C\) and \(D\) are matrices, then it sometimes happens that \(CD \neq DC\text{.}\) However, something fortunate happens when we consider invertibility. It turns out that if \(A\) is an \(n\times n\) matrix and \(AB=I\text{,}\) then it is also true that \(BA=I\text{.}\) We have verified this in a few examples so far, and Exercise 3.1.5.12 explains why it always happens. This leads to the following proposition.
Proposition 3.1.6.
If \(A\) is an \(n\times n\) invertible matrix with inverse \(B\text{,}\) then \(BA=I\text{,}\) which tells us that \(B\) is invertible with inverse \(A\text{.}\) In other words,
Subsection 3.1.2 Solving equations with an inverse
If \(A\) is an invertible matrix, then Proposition 3.1.4 shows us how to use \(A^{-1}\) to solve equations involving \(A\text{.}\) In particular, the solution to \(A\xvec = \bvec\) is \(\xvec = A^{-1}\bvec\text{.}\)
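As a small illustration (a SymPy sketch; the particular matrix and right-hand side are chosen here for demonstration, not taken from the text), multiplying by the inverse immediately produces the solution:

```python
from sympy import Matrix

# An invertible matrix and an arbitrary right-hand side (illustrative choices)
A = Matrix([[3, 2], [1, 1]])
b = Matrix([5, 7])

x = A.inv() * b      # the solution x = A^{-1} b
print(x)
print(A * x == b)    # True: substituting back recovers b
```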
Activity 3.1.3.
We'll begin by considering the square matrix
Describe the solution space to the equation \(A\xvec = \threevec343\) by augmenting \(A\) and finding the reduced row echelon form.
Using Proposition 3.1.5, explain why \(A\) is invertible and find its inverse.
Now use the inverse to solve the equation \(A\xvec = \threevec343\) and verify that your result agrees with what you found in part a.

If you have defined a matrix \(B\) in Sage, you can find its inverse as B.inverse() or B^-1. Use Sage to find the inverse of the matrix\begin{equation*} B = \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 1 \amp 5 \amp 6 \\ 5 \amp 4 \amp 6 \\ \end{array}\right] \end{equation*}and use it to solve the equation \(B\xvec = \threevec83{36}\text{.}\)
If \(A\) and \(B\) are the two matrices defined in this activity, find their product \(AB\) and verify that it is invertible.
Compute the products \(A^{-1}B^{-1}\) and \(B^{-1}A^{-1}\text{.}\) Which one agrees with \((AB)^{-1}\text{?}\)

Explain your finding by considering the product
\begin{equation*} (AB)(B^{-1}A^{-1}) \end{equation*}and using associativity to regroup the products so that the middle two terms are multiplied first.
The next proposition summarizes much of what we have found about invertible matrices.
Proposition 3.1.7. Properties of invertible matrices.
An \(n\times n\) matrix \(A\) is invertible if and only if \(A\sim I\text{.}\)
If \(A\) is invertible, then the solution to the equation \(A\xvec = \bvec\) is given by \(\xvec = A^{-1}\bvec\text{.}\)

We can find \(A^{-1}\) by finding the reduced row echelon form of \(\left[\begin{array}{rr} A \amp I \end{array}\right]\text{;}\) namely,
\begin{equation*} \left[\begin{array}{rr} A \amp I \end{array}\right] \sim \left[\begin{array}{rr} I \amp A^{-1} \end{array}\right]\text{.} \end{equation*} If \(A\) and \(B\) are two invertible \(n\times n\) matrices, then their product \(AB\) is also invertible and \((AB)^{-1} = B^{-1}A^{-1}\text{.}\)
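A quick SymPy check of the last property (with matrices chosen here for illustration) confirms both that the inverse of a product reverses the order of the factors and that the order genuinely matters:

```python
from sympy import Matrix

# Two invertible matrices chosen for illustration
A = Matrix([[1, 2], [1, 1]])
B = Matrix([[3, 2], [1, 1]])

lhs = (A * B).inv()
print(lhs == B.inv() * A.inv())   # True: the inverse reverses the order
print(lhs == A.inv() * B.inv())   # False here: order matters
```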
There is a simple formula for finding the inverse of a \(2\times2\) matrix:
\begin{equation*} \begin{bmatrix} a \amp b \\ c \amp d \\ \end{bmatrix}^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d \amp -b \\ -c \amp a \\ \end{bmatrix}, \end{equation*}which can be easily checked. The condition that \(A\) be invertible is, in this case, reduced to the condition that \(ad-bc\neq 0\text{.}\) We will understand this condition better once we have explored determinants in Section 3.4. There is a similar formula for the inverse of a \(3\times 3\) matrix, but there is not a good reason to write it here.
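The \(2\times2\) formula is easy to translate into code. The sketch below (SymPy, an assumption beyond the text) guards against \(ad-bc = 0\text{,}\) the non-invertible case:

```python
from sympy import Matrix

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]], which is (1/(ad - bc)) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("not invertible: ad - bc = 0")
    return Matrix([[d, -b], [-c, a]]) / det

print(inverse_2x2(3, 2, 1, 1))   # [[1, -2], [-1, 3]]
```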
Subsection 3.1.3 Triangular matrices and Gaussian elimination
With some of the ideas we've developed, we can recast the Gaussian elimination algorithm in terms of matrix multiplication and invertibility. This will be especially helpful later when we consider determinants and LU factorizations. Triangular matrices will play an important role.
Definition 3.1.8.
We say that a matrix \(A\) is lower triangular if all its entries above the diagonal are zero. Similarly, \(A\) is upper triangular if all the entries below the diagonal are zero.
For example, the matrix \(L\) below is a lower triangular matrix while \(U\) is an upper triangular one.
We can develop a simple test to determine whether an \(n\times n\) lower triangular matrix is invertible. Let's use Gaussian elimination to find the reduced row echelon form of the lower triangular matrix
Because the entries on the diagonal are nonzero, we find a pivot position in every row, which tells us that the matrix is invertible.
If, however, there is a zero entry on the diagonal, the matrix cannot be invertible. Considering the matrix below, we see that having a zero on the diagonal leads to a row without a pivot position.
Proposition 3.1.9.
An \(n\times n\) triangular matrix is invertible if and only if the entries on the diagonal are all nonzero.
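Proposition 3.1.9 gives a test that is simple to code. The sketch below (SymPy, with matrices chosen here for illustration) checks the diagonal and confirms the conclusion with the reduced row echelon form:

```python
from sympy import Matrix, eye

def triangular_invertible(T):
    # Invertible exactly when every diagonal entry is nonzero
    return all(T[i, i] != 0 for i in range(T.rows))

L = Matrix([[2, 0, 0], [1, 3, 0], [4, 5, 1]])   # nonzero diagonal
M = Matrix([[2, 0, 0], [1, 0, 0], [4, 5, 1]])   # zero on the diagonal

print(triangular_invertible(L))   # True
print(triangular_invertible(M))   # False
print(L.rref()[0] == eye(3))      # True: L row reduces to the identity
```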
Activity 3.1.4. Gaussian elimination and matrix multiplication.
This activity explores how the row operations of scaling, interchange, and replacement can be performed using matrix multiplication.
As an example, we consider the matrix
and apply a replacement operation that multiplies the first row by \(-2\) and adds it to the second row. Rather than performing this operation in the usual way, we construct a new matrix by applying the desired replacement operation to the identity matrix. To illustrate, we begin with the identity matrix
and form a new matrix by multiplying the first row by \(-2\) and adding it to the second row to obtain
Show that the product \(RA\) is the result of applying the replacement operation to \(A\text{.}\)
Explain why \(R\) is invertible and find its inverse \(R^{-1}\text{.}\)
Describe the relationship between \(R\) and \(R^{1}\) and use the connection to replacement operations to explain why it holds.
Other row operations can be performed using a similar procedure. For instance, suppose we want to scale the second row of \(A\) by \(4\text{.}\) Find a matrix \(S\) so that \(SA\) is the same as that obtained from the scaling operation. Why is \(S\) invertible and what is \(S^{-1}\text{?}\)
Finally, suppose we want to interchange the first and third rows of \(A\text{.}\) Find a matrix \(P\text{,}\) usually called a permutation matrix, that performs this operation. What is \(P^{-1}\text{?}\)

The original matrix \(A\) is seen to be row equivalent to the upper triangular matrix \(U\) by performing three replacement operations on \(A\text{:}\)
\begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 2 \amp 0 \amp -2 \\ -1 \amp 2 \amp -1 \\ \end{array}\right] \sim \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 0 \amp -4 \amp -4 \\ 0 \amp 0 \amp -4 \\ \end{array}\right] = U. \end{equation*}Find the matrices \(L_1\text{,}\) \(L_2\text{,}\) and \(L_3\) that perform these row replacement operations so that \(L_3L_2L_1 A = U\text{.}\)
Explain why the matrix product \(L_3L_2L_1\) is invertible and use this fact to write \(A = LU\text{.}\) What is the matrix \(L\) that you find? Why do you think we denote it by \(L\text{?}\)
The following are examples of matrices, known as elementary matrices, that perform the row operations on a matrix having three rows.
 Replacement

Multiplying the second row by 3 and adding it to the third row is performed by
\begin{equation*} L = \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 3 \amp 1 \\ \end{bmatrix}. \end{equation*}We often use \(L\) to describe these matrices because they are lower triangular.
 Scaling

Multiplying the third row by 2 is performed by
\begin{equation*} S = \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 2 \\ \end{bmatrix}. \end{equation*}
 Interchange

Interchanging the first two rows is performed by
\begin{equation*} P = \begin{bmatrix} 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \\ \end{bmatrix}. \end{equation*}
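We can verify that these three elementary matrices act as advertised by multiplying them against a sample matrix (a SymPy sketch; the matrix \(A\) below is an illustrative choice, not one from the text):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # any matrix with three rows

L = Matrix([[1, 0, 0], [0, 1, 0], [0, 3, 1]])   # replacement: 3*(row 2) added to row 3
S = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 2]])   # scaling: row 3 multiplied by 2
P = Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])   # interchange: rows 1 and 2 swapped

print(L * A)
print(S * A)
print(P * A)
```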
Example 3.1.10.
Suppose we have
For the forward substitution phase of Gaussian elimination, we perform a sequence of three replacement operations. The first replacement operation multiplies the first row by \(3\) and adds the result to the second row. We can perform this operation by multiplying \(A\) by the lower triangular matrix \(L_1\) where
The next two replacement operations are performed by the matrices
so that
Notice that the inverse of \(L_1\) has the simple form:
This says that if we want to undo the operation of multiplying the first row by \(3\) and adding to the second row, we should multiply the first row by \(-3\) and add it to the second row. That is the effect of \(L_1^{-1}\text{.}\)
Notice that we now have \(L_3L_2L_1A = U\text{,}\) which gives
where \(L\) is the lower triangular matrix
This way of writing \(A=LU\) as the product of a lower and an upper triangular matrix is known as an \(LU\) factorization of \(A\text{,}\) and its usefulness will be explored in Section 5.1.
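The whole computation can be sketched in SymPy. The matrix below is an illustrative choice (the example's own entries are not fully reproduced here); three replacement matrices reduce it to an upper triangular \(U\text{,}\) and undoing them yields the lower triangular \(L\text{:}\)

```python
from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 0, -2], [-1, 2, -1]])   # illustrative matrix

L1 = Matrix([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])   # -2*(row 1) added to row 2
L2 = Matrix([[1, 0, 0], [0, 1, 0], [1, 0, 1]])    # row 1 added to row 3
L3 = Matrix([[1, 0, 0], [0, 1, 0], [0, 1, 1]])    # row 2 added to row 3

U = L3 * L2 * L1 * A                 # upper triangular
L = L1.inv() * L2.inv() * L3.inv()   # (L3 L2 L1)^{-1}, lower triangular

print(U)
print(L)
print(A == L * U)   # True: an LU factorization of A
```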
Subsection 3.1.4 Summary
In this section, we found conditions guaranteeing that a matrix has an inverse. When these conditions hold, we also found an algorithm for finding the inverse.
A square matrix \(A\) is invertible if there is a matrix \(B\text{,}\) known as the inverse of \(A\text{,}\) such that \(AB = I\text{.}\) We usually write \(A^{-1} = B\text{.}\)
The \(n\times n\) matrix \(A\) is invertible if and only if it is row equivalent to \(I_n\text{,}\) the \(n\times n\) identity matrix.

If a matrix \(A\) is invertible, we can use Gaussian elimination to find its inverse:
\begin{equation*} \left[\begin{array}{rr} A \amp I \end{array}\right] \sim \left[\begin{array}{rr} I \amp A^{-1} \end{array}\right]\text{.} \end{equation*} If a matrix \(A\) is invertible, then the solution to the equation \(A\xvec = \bvec\) is \(\xvec = A^{-1}\bvec\text{.}\)
The row operations of replacement, scaling, and interchange can be performed by multiplying by elementary matrices.
Exercises 3.1.5 Exercises
1.
Consider the matrix
Explain why \(A\) has an inverse.
Find the inverse of \(A\) by augmenting by the identity \(I\) to form \(\left[\begin{array}{rr}A \amp I \end{array}\right]\text{.}\)
Use your inverse to solve the equation \(A\xvec = \fourvec{3}{2}{3}{1}\text{.}\)
2.
In this exercise, we will consider \(2\times 2\) matrices as defining matrix transformations.
Write the matrix \(A\) that performs a \(45^\circ\) rotation. What geometric operation undoes this rotation? Find the matrix that performs this operation and verify that it is \(A^{-1}\text{.}\)
Write the matrix \(A\) that performs a \(180^\circ\) rotation. Verify that \(A^2 = I\) so that \(A^{-1} = A\text{,}\) and explain geometrically why this is the case.
Find three more matrices \(A\) that satisfy \(A^2 = I\text{.}\)
3.
Inverses for certain types of matrices can be found in a relatively straightforward fashion.

The matrix \(D=\begin{bmatrix} 2 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 4 \\ \end{bmatrix}\) is called diagonal since the only nonzero entries are on the diagonal of the matrix.
Find \(D^{-1}\) by augmenting \(D\) by the identity and finding its reduced row echelon form.
Under what conditions is a diagonal matrix invertible?
Explain why the inverse of a diagonal matrix is also diagonal and explain the relationship between the diagonal entries in \(D\) and \(D^{-1}\text{.}\)

Consider the lower triangular matrix \(L = \begin{bmatrix} 1 \amp 0 \amp 0 \\ 2 \amp 1 \amp 0 \\ 3 \amp 4 \amp 1 \\ \end{bmatrix} \text{.}\)
Find \(L^{-1}\) by augmenting \(L\) by the identity and finding its reduced row echelon form.
Explain why the inverse of a lower triangular matrix is also lower triangular.
4.
Our definition of an invertible matrix requires that \(A\) be a square \(n\times n\) matrix. Let's examine what happens when \(A\) is not square. For instance, suppose that
Verify that \(BA = I_2\text{.}\) In this case, we say that \(B\) is a left inverse of \(A\text{.}\)

If \(A\) has a left inverse \(B\text{,}\) we can still use it to find solutions to linear equations. If we know there is a solution to the equation \(A\xvec = \bvec\text{,}\) we can multiply both sides of the equation by \(B\) to find \(\xvec = B\bvec\text{.}\)
Suppose you know there is a solution to the equation \(A\xvec = \threevec{1}{3}{6}\text{.}\) Use the left inverse \(B\) to find \(\xvec\) and verify that it is a solution.

Now consider the matrix
\begin{equation*} C = \left[\begin{array}{rrr} 1 \amp 1 \amp 0 \\ 2 \amp 1 \amp 0 \\ \end{array}\right] \end{equation*}and verify that \(C\) is also a left inverse of \(A\text{.}\) This shows that the matrix \(A\) may have more than one left inverse.
5.
If a matrix \(A\) is invertible, there is a sequence of row operations that transforms \(A\) into the identity matrix \(I\text{.}\) We have seen that every row operation can be performed by matrix multiplication. If the \(j^{th}\) step in the Gaussian elimination process is performed by multiplying by \(E_j\text{,}\) then we have
which means that
For each of the following matrices, find a sequence of row operations that transforms the matrix to the identity \(I\text{.}\) Write the matrices \(E_j\) that perform the steps and use them to find \(A^{-1}\text{.}\)
 \begin{equation*} A = \left[\begin{array}{rrr} 0 \amp 2 \amp 0 \\ 3 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \\ \end{array}\right]\text{.} \end{equation*}
 \begin{equation*} A = \left[\begin{array}{rrrr} 1 \amp 0 \amp 0 \amp 0 \\ 2 \amp 1 \amp 0 \amp 0 \\ 0 \amp 3 \amp 1 \amp 0 \\ 0 \amp 0 \amp 2 \amp 1 \\ \end{array}\right]\text{.} \end{equation*}
 \begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 1 \amp 1 \\ 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 2 \\ \end{array}\right]\text{.} \end{equation*}
6.
Suppose that \(A\) is an \(n\times n\) matrix.
Suppose that \(A^2 = AA\) is invertible with inverse \(B\text{.}\) This means that \(A^2B = AAB = I\text{.}\) Explain why \(A\) must be invertible with inverse \(AB\text{.}\)
Suppose that \(A^{100}\) is invertible with inverse \(B\text{.}\) Explain why \(A\) is invertible. What is \(A^{-1}\) in terms of \(A\) and \(B\text{?}\)
7.
Determine whether the following statements are true or false and explain your reasoning.
If \(A\) is invertible, then the columns of \(A\) are linearly independent.
If \(A\) is a square matrix whose diagonal entries are all nonzero, then \(A\) is invertible.
If \(A\) is an invertible \(n\times n\) matrix, then the span of the columns of \(A\) is \(\real^n\text{.}\)
If \(A\) is invertible, then there is a nonzero solution to the homogeneous equation \(A\xvec = \zerovec\text{.}\)
If \(A\) is an \(n\times n\) matrix and the equation \(A\xvec = \bvec\) has a solution for every vector \(\bvec\text{,}\) then \(A\) is invertible.
8.
Provide a justification for your response to the following questions.
Suppose that \(A\) is a square matrix with two identical columns. Can \(A\) be invertible?
Suppose that \(A\) is a square matrix with two identical rows. Can \(A\) be invertible?
Suppose that \(A\) is an invertible matrix and that \(AB = AC\text{.}\) Can you conclude that \(B = C\text{?}\)
Suppose that \(A\) is an invertible \(n\times n\) matrix. What can you say about the span of the columns of \(A^{-1}\text{?}\)
Suppose that \(A\) is an invertible matrix and that \(B\) is row equivalent to \(A\text{.}\) Can you guarantee that \(B\) is invertible?
9.
Suppose that we start with the \(3\times3\) matrix \(A\text{,}\) perform the following sequence of row operations:
Multiply row 1 by 2 and add to row 2.
Multiply row 1 by 4 and add to row 3.
Scale row 2 by \(1/2\text{.}\)
Multiply row 2 by 1 and add to row 3,
and arrive at the upper triangular matrix
Write the matrices \(E_1\text{,}\) \(E_2\text{,}\) \(E_3\text{,}\) and \(E_4\) that perform the four row operations.
Find the matrix \(E = E_4E_3E_2E_1\text{.}\)
We then have \(E_4E_3E_2E_1 A = EA = U\text{.}\) Now that we have the matrix \(E\text{,}\) find the original matrix \(A = E^{-1}U\text{.}\)
10.
We say that two square matrices \(A\) and \(B\) are similar if there is an invertible matrix \(P\) such that \(B = PAP^{-1}\text{.}\)
If \(A\) and \(B\) are similar, explain why \(A^2\) and \(B^2\) are similar as well. In particular, if \(B = PAP^{-1}\text{,}\) explain why \(B^2 = PA^2P^{-1}\text{.}\)
If \(A\) and \(B\) are similar and \(A\) is invertible, explain why \(B\) is also invertible.
If \(A\) and \(B\) are similar and both are invertible, explain why \(A^{-1}\) and \(B^{-1}\) are similar.
If \(A\) is similar to \(B\) and \(B\) is similar to \(C\text{,}\) explain why \(A\) is similar to \(C\text{.}\) To begin, you may wish to assume that \(B = PAP^{-1}\) and \(C = QBQ^{-1}\text{.}\)
11.
Suppose that \(A\) and \(B\) are two \(n\times n\) matrices and that \(AB\) is invertible. We would like to explain why both \(A\) and \(B\) are invertible.

We first explain why \(B\) is invertible.
Since \(AB\) is invertible, explain why any solution to the homogeneous equation \(AB\xvec = \zerovec\) is \(\xvec=\zerovec\text{.}\)
Use this fact to explain why any solution to \(B\xvec = \zerovec\) must be \(\xvec=\zerovec\text{.}\)
Explain why \(B\) must be invertible.

Now we explain why \(A\) is invertible.
Since \(AB\) is invertible, explain why the equation \(AB\xvec=\bvec\) is consistent for every vector \(\bvec\text{.}\)
Using the fact that \(AB\xvec = A(B\xvec) = \bvec\) is consistent for every \(\bvec\text{,}\) explain why every equation \(A\xvec = \bvec\) is consistent.
Explain why \(A\) must be invertible.
12.
We defined an \(n\times n\) matrix \(A\) to be invertible if there is a matrix \(B\) such that \(AB=I_n\text{.}\) In this exercise, we will explain why it is also true that \(BA = I\text{,}\) which is the statement of Proposition 3.1.6. This means that, if \(B=A^{-1}\text{,}\) then \(A = B^{-1}\text{.}\)
Suppose that \(\xvec\) is an \(n\)-dimensional vector. Since \(AB=I\text{,}\) explain why \(AB\xvec = \xvec\) and use this to explain why the only vector for which \(B\xvec = \zerovec\) is \(\xvec = \zerovec\text{.}\)
Explain why this implies that \(B\) must be invertible. We will call the inverse \(C\) so that \(BC = I\text{.}\)
Beginning with \(AB = I\text{,}\) explain why \(B(AB)C = BIC\) and why this tells us that \(BA = I\text{.}\)