Chapter 2 Standard forms

In our earlier course, we investigated diagonalizable matrices and used them to simplify a variety of problems. Recall that a matrix \(A\) is diagonalizable if there is a diagonal matrix \(D\) and an invertible matrix \(P\) such that \(A=PDP^{-1}\text{.}\) We would now say that \(A\) is similar to the diagonal matrix \(D\text{.}\)
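As a quick computational sketch, we can check a diagonalization numerically. The matrix below is chosen purely for illustration (it does not come from the text); NumPy's `eig` returns the eigenvalues and a matrix \(P\) whose columns are eigenvectors.

```python
import numpy as np

# An illustrative diagonalizable matrix (an assumption, not from the text).
# Its eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
evals, P = np.linalg.eig(A)
D = np.diag(evals)

# A has two independent eigenvectors, so P is invertible and A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```

This mirrors the definition above: \(P\) collects the eigenvectors, \(D\) the eigenvalues, and similarity means the product \(PDP^{-1}\) reproduces \(A\text{.}\)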
We also saw that not every matrix is diagonalizable. When \(A\) is symmetric, however, it is not only diagonalizable but orthogonally diagonalizable, meaning there is an orthogonal matrix \(Q\) such that \(A=QDQ^T\text{.}\) This important fact, which we called the Spectral Theorem, led to many interesting ideas.
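To see the orthogonal version at work, here is a small sketch with a symmetric matrix chosen for illustration (again, not taken from the text). NumPy's `eigh` is designed for symmetric matrices and returns real eigenvalues together with an orthogonal matrix of eigenvectors.

```python
import numpy as np

# An illustrative symmetric matrix (an assumption, not from the text).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh, intended for symmetric matrices, returns real eigenvalues
# and an orthogonal eigenvector matrix Q.
evals, Q = np.linalg.eigh(A)
D = np.diag(evals)

# Verify the orthogonal diagonalization A = Q D Q^T,
# and that Q really is orthogonal: Q^T Q = I.
print(np.allclose(A, Q @ D @ Q.T))       # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True
```

Note that no matrix inverse is needed here: orthogonality gives \(Q^{-1}=Q^T\text{,}\) which is one reason orthogonal diagonalization is so convenient.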
We can rephrase the Spectral Theorem now that we are investigating vector spaces and linear transformations. In particular, if \(T\) is an operator that is represented by a symmetric matrix \(A\) in some basis, then there is another basis for which the matrix representing \(T\) is diagonal.
This chapter presents a proof of the Spectral Theorem. We will then go further and study operators to which the Spectral Theorem does not apply. In that case, can we find a basis in which the matrix representing the transformation has a simple form? For example, if we cannot represent \(T\) with a diagonal matrix, perhaps we can represent it with an upper-triangular one.
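As a preview of this idea, the Schur decomposition writes any square matrix as \(A=ZTZ^T\) with \(Z\) orthogonal and \(T\) upper triangular, even when \(A\) is not diagonalizable. The matrix below is an illustrative example (not from the text): its only eigenvalue is \(1\text{,}\) with a one-dimensional eigenspace, so it is defective.

```python
import numpy as np
from scipy.linalg import schur

# An illustrative defective matrix (an assumption, not from the text):
# eigenvalue 1 is repeated but has only a one-dimensional eigenspace,
# so A is not diagonalizable.
A = np.array([[0.0, 1.0],
              [-1.0, 2.0]])

# Schur decomposition: A = Z T Z^T with Z orthogonal, T upper triangular.
T, Z = schur(A)

print(np.allclose(A, Z @ T @ Z.T))   # True
print(np.allclose(T, np.triu(T)))    # True: T is upper triangular
```

So even without a diagonal representative, every operator on a real vector space with real eigenvalues has an upper-triangular representative in a suitable orthonormal basis, a theme this chapter develops.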