ATTENTION / DISCLAIMER: This document was authored by Prof. Dr. Ali Özgür Kişisel (METU Mathematics Department), not by Selim Kaan Ozsoy. This .tex version was generated from the original PDF lecture notes using AI.

MATH 219 Spring 2025 Lecture 7 & 8 Lecture notes by Özgür Kişisel

Content: Systems of linear algebraic equations; Linear independence, eigenvalues, eigenvectors.

Suggested Problems: (Boyce, Di Prima, 10th edition)

  • §7.3: 3, 4, 8, 14, 15, 17, 22, 25, 26, 32

In these two lectures we will briefly go through some topics from linear algebra that we will need in the subsequent lectures about systems of first order linear differential equations.


1. Matrices of functions

Just like matrices whose entries are numbers, one can consider matrices $A(t)=(a_{ij}(t))$ whose entries are functions. It then makes sense to define differentiation and integration of matrices entry by entry. These are direct generalizations of differentiation and integration of vector functions:

$$ \frac{dA}{dt}=\left(\frac{da_{ij}}{dt}\right) \quad \text{and} \quad \int A(t)dt=\left(\int a_{ij}(t)dt\right) $$

Example 1.1

Say $$ A=\begin{bmatrix}3t & 1 \\ 0 & e^{t}\end{bmatrix} $$ Then, $$ \frac{dA}{dt}=\begin{bmatrix}3 & 0 \\ 0 & e^{t}\end{bmatrix} \quad \text{and} \quad \int A(t)dt=\begin{bmatrix}3t^{2}/2+c_{1} & t+c_{2} \\ c_{3} & e^{t}+c_{4}\end{bmatrix} $$
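Entrywise differentiation and integration are easy to check symbolically. The following small sketch assumes the SymPy library is available; it reproduces the computation of Example 1.1 (SymPy omits the constants of integration):

```python
import sympy as sp

t = sp.symbols('t')

# The matrix A(t) from Example 1.1; SymPy differentiates and integrates entrywise.
A = sp.Matrix([[3 * t, 1],
               [0, sp.exp(t)]])

dA = A.diff(t)           # entrywise derivative
intA = A.integrate(t)    # entrywise antiderivative (integration constants omitted)

print(dA)    # Matrix([[3, 0], [0, exp(t)]])
print(intA)  # Matrix([[3*t**2/2, t], [0, exp(t)]])
```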


2. Polynomials of matrices, matrix exponentials

We defined basic arithmetic operations on matrices in the previous lecture. Using these operations, it is easy to define polynomial functions of a matrix $A$. For example, if $p(x)=x^{3}-2x+5$ then $$ p(A)=A^{3}-2A+5I $$ where $I$ denotes the identity matrix. More complicated functions of $A$ can also be defined by using Taylor expansions. For example, recall from MATH 120 that for all $x\in\mathbb{R}$ the following equality holds: $$ e^{x}=\sum_{k=0}^{\infty}\frac{x^{k}}{k!}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+… $$ Let $A$ be an $n\times n$ matrix, and let $A^{k}=A\cdot A\cdot…\cdot A$ denote the product of $k$ copies of $A$. We define $$ e^{At}=\sum_{k=0}^{\infty}\frac{A^{k}t^{k}}{k!}=I+At+\frac{A^{2}}{2!}t^{2}+… $$ In order to make sure that the above formula makes sense, it is necessary to check that the infinite sum involved in the computation of each entry of the matrix above converges. This turns out to be true for any constant matrix $A$, although we will not prove this fact here. Therefore $e^{At}$ is defined for any constant matrix $A$, and it is itself a certain $n\times n$ matrix of functions. On the other hand, it is unclear at this point how one can actually compute $e^{At}$: computing all the matrices $A^{k}$ entry by entry, and then summing up each of the resulting infinite series, is a formidable task. There are better ways to do this and we will return to this problem later.
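Although summing the series by hand is impractical, a truncated partial sum converges quickly and can be compared against a library implementation. This is only a numerical sanity check, not the efficient method alluded to above; the sketch assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm  # reference implementation of the matrix exponential

def expm_series(A, t, terms=30):
    """Partial sum of e^{At} = sum_{k>=0} A^k t^k / k!."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)                # current term A^k t^k / k!, starting at k = 0
    for k in range(1, terms):
        term = term @ A * t / k     # build A^k t^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])         # e^{At} is a rotation matrix for this A
approx = expm_series(A, 1.0)
exact = expm(A * 1.0)
print(np.allclose(approx, exact))   # True
```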


3. Standard inner product of vectors

Suppose that $x$ and $y$ are two row vectors of length $n$ with complex entries. Then their (standard) inner product is defined to be $$ \langle x,y\rangle=xy^{*} $$ where $y^{*}=\overline{y}^{T}$ denotes the conjugate transpose of $y$. The matrix $x$ is $1\times n$ and $y^{*}$ is $n\times1$, therefore the result is $1\times1$, hence it is a single complex number. If $x=[x_{1},x_{2},…,x_{n}]$ and $y=[y_{1},y_{2},…,y_{n}]$ then $$ \langle x,y\rangle=\sum_{i=1}^{n}x_{i}\overline{y}_{i} $$

Proposition 3.1

The inner product defined above has the following properties:

  • $\langle x,y\rangle=\overline{\langle y,x\rangle}$,
  • $\langle x,y+z\rangle=\langle x,y\rangle+\langle x,z\rangle$ and $\langle x+y,z\rangle=\langle x,z\rangle+\langle y,z\rangle$,
  • $\langle cx,y\rangle=c\langle x,y\rangle$,
  • $\langle x,cy\rangle=\overline{c}\langle x,y\rangle$,
  • $\langle x,x\rangle\ge0$,
  • $\langle x,x\rangle=0\Leftrightarrow x=0$.

Proof: Exercise. $\square$

Definition 3.1

Two vectors $x$ and $y$ are said to be orthogonal (or perpendicular) if $\langle x,y\rangle=0$.

Example

Say $x=[1,1+i,-3]$ and $y=[a,-1,i]$. For which values of $a$ are $x$ and $y$ orthogonal?

Solution: $$ \langle x,y\rangle=(1)(\overline{a})+(1+i)(-1)+(-3)(-i) = \overline{a}-1-i+3i = \overline{a}-1+2i, $$ therefore the inner product is zero if and only if $\overline{a}=1-2i$, that is, $a=1+2i$. This is the only value of $a$ that makes the two vectors orthogonal. $\square$
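This inner product is easy to encode directly. One caveat, assuming NumPy: the built-in `np.vdot(a, b)` conjugates its *first* argument, while the convention above conjugates the second, so we write the sum out explicitly. The sketch below confirms the value $a=1+2i$ found in the example:

```python
import numpy as np

def inner(x, y):
    """Standard inner product <x, y> = sum_i x_i * conj(y_i)."""
    return np.sum(np.asarray(x) * np.conj(y))

x = np.array([1, 1 + 1j, -3])
a = 1 + 2j                      # the value found in the example
y = np.array([a, -1, 1j])

print(abs(inner(x, y)))         # 0.0: x and y are orthogonal
print(inner(x, x).real)         # 12.0: |1|^2 + |1+i|^2 + |-3|^2
```

Equivalently, `np.vdot(y, x)` computes the same number, since swapping the arguments of `vdot` moves the conjugation to the second vector.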


4. Systems of algebraic equations

Before dealing with linear systems of ODE’s, let us review systems of linear algebraic equations. Let $x_{1},x_{2},…,x_{n}$ be $n$ variables. A set of $m$ equations of the form

$$ \begin{aligned} a_{11}x_{1}+a_{12}x_{2}+…+a_{1n}x_{n} &= b_{1} \\ a_{21}x_{1}+a_{22}x_{2}+…+a_{2n}x_{n} &= b_{2} \\ &\vdots \\ a_{m1}x_{1}+a_{m2}x_{2}+…+a_{mn}x_{n} &= b_{m} \end{aligned} $$

is called an $m\times n$ system of linear algebraic equations. One would like to find or describe the set of all solutions to this system. It is convenient to write this system in matrix form

$$ Ax=b $$

where $A=(a_{ij})$ is the $m\times n$ matrix of coefficients, $x=(x_{1},x_{2},…,x_{n})^{T}$ is an $n\times1$ column vector and $b=(b_{1},b_{2},…,b_{m})^{T}$ is an $m\times1$ column vector. In this equation, $A$ and $b$ are given, $x$ is the unknown.

4.1 Elementary row operations, matrices in echelon form

We can solve a given linear system by eliminating variables until the system becomes simple enough so that we can easily read the solutions. A systematic way of doing this is Gaussian elimination. We do not need to rewrite the whole system at each step since all that we need is $A$ and $b$. We will write these matrices together in the so called “augmented form”:

$$ [A|b] $$

It will be this augmented matrix that we will apply the elimination steps to. There are three types of basic steps used in the process, called elementary row operations:

  • Type 1: Multiply a row by a nonzero constant (denoted by $cR_{i}\rightarrow R_{i}$, where $c\ne0$).
  • Type 2: Interchange two rows (denoted by $R_{i}\leftrightarrow R_{j}$, where $i\ne j$).
  • Type 3: Add a constant multiple of one row to another row (denoted by $aR_{i}+R_{j}\rightarrow R_{j}$, where $i\ne j$).

Elementary row operations have the important property that they are invertible. An important corollary of this fact is that elementary row operations do not change the solution set of a linear system. Indeed, let us see the inverse operations for each type:

  • The inverse operation of $cR_{i}\rightarrow R_{i}$ is $\frac{1}{c}R_{i}\rightarrow R_{i}$.
  • The inverse operation of $R_{i}\leftrightarrow R_{j}$ is $R_{i}\leftrightarrow R_{j}$ itself.
  • The inverse operation of $aR_{i}+R_{j}\rightarrow R_{j}$ is $-aR_{i}+R_{j}\rightarrow R_{j}$.

Using elementary row operations, we can eliminate some of the variables in some of the equations and simplify the system. How do we know when to stop? We should stop at a point where the solutions can be read off from the final state easily, without much extra work. These considerations motivate the following definition:

Definition 4.1

A matrix is said to be in row echelon form if it satisfies the following properties:

  1. The first non-zero entry of each row is 1 (such elements are called leading 1’s). A row of all 0’s is also allowed.
  2. All entries which are directly below a leading 1 are 0.
  3. If $i<j$ then the leading 1 on row $i$ must be to the left of the leading 1 on row $j$.
  4. If there are $k$ rows of 0’s in the matrix, then these must be the last $k$ rows.

Example 4.1

The matrices

$$ \begin{bmatrix}1 & 1 & 2 \\ 0 & 0 & 1\end{bmatrix} \quad \text{and} \quad \begin{bmatrix}1 & 0 \\ 0 & 0 \\ 0 & 0\end{bmatrix} $$

are in row echelon form. However, the matrices

$$ \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}, \quad \begin{bmatrix}0 & 1 \\ 0 & 1\end{bmatrix}, \quad \text{and} \quad \begin{bmatrix}5 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0\end{bmatrix} $$

are not in row echelon form.

Theorem 4.1

Any matrix can be row reduced by using a sequence of elementary row operations until a matrix in row echelon form is obtained.

Proof: Omitted (but it is not a difficult proof). $\square$

4.2 Gaussian elimination

Now, we are ready to explain the Gaussian elimination algorithm for solving a given linear system of algebraic equations:

  • Form the augmented matrix $[A|b]$.
  • Apply elementary row operations to $[A|b]$ until a matrix $[E|d]$ is obtained, where $E$ is in row echelon form.
  • The variables corresponding to columns of $E$ that do not contain a leading 1 are taken as free variables.
  • The variables corresponding to columns of $E$ containing leading 1’s are expressed in terms of the free variables, by back substitution starting from the last nonzero row.

We note that neither the echelon form nor the sequence of operations chosen to reach it is uniquely determined. The choice of free variables above is also a convention; it is not the only way to proceed. However, one gets the same solution set for the system at the end, no matter which choices are made.

Example 4.2

Solve the system of equations

$$ \begin{aligned} 2x_{1}-4x_{2}+3x_{3}-x_{4}+x_{5} &= 2 \\ x_{1}-2x_{2}+x_{4}+2x_{5} &= 4 \\ 4x_{1}-8x_{2}+3x_{3}+x_{4}+5x_{5} &= 10 \end{aligned} $$

Solution: First, convert the equation into matrix form $Ax=b.$ Form the augmented matrix $[A|b]$ and apply elementary row operations:

$$ \begin{bmatrix} 2 & -4 & 3 & -1 & 1 & | & 2 \\ 1 & -2 & 0 & 1 & 2 & | & 4 \\ 4 & -8 & 3 & 1 & 5 & | & 10 \end{bmatrix} \xrightarrow{R_{1}\leftrightarrow R_{2}} \begin{bmatrix} 1 & -2 & 0 & 1 & 2 & | & 4 \\ 2 & -4 & 3 & -1 & 1 & | & 2 \\ 4 & -8 & 3 & 1 & 5 & | & 10 \end{bmatrix} $$

$$ \xrightarrow[-4R_{1}+R_{3}\rightarrow R_{3}]{-2R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix} 1 & -2 & 0 & 1 & 2 & | & 4 \\ 0 & 0 & 3 & -3 & -3 & | & -6 \\ 0 & 0 & 3 & -3 & -3 & | & -6 \end{bmatrix} $$

$$ \xrightarrow{R_{2}/3\rightarrow R_{2}} \begin{bmatrix} 1 & -2 & 0 & 1 & 2 & | & 4 \\ 0 & 0 & 1 & -1 & -1 & | & -2 \\ 0 & 0 & 3 & -3 & -3 & | & -6 \end{bmatrix} $$

$$ \xrightarrow{-3R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix} 1 & -2 & 0 & 1 & 2 & | & 4 \\ 0 & 0 & 1 & -1 & -1 & | & -2 \\ 0 & 0 & 0 & 0 & 0 & | & 0 \end{bmatrix} $$

The final matrix is in row echelon form. Since columns 2, 4 and 5 do not contain leading 1’s, we set $x_{2}$, $x_{4}$ and $x_{5}$ to be free variables. Then, using the equation given by the second row,

$$ x_{3}-x_{4}-x_{5}=-2 \implies x_{3}=-2+x_{4}+x_{5}. $$

Finally, using the equation given by the first row,

$$ x_{1}-2x_{2}+x_{4}+2x_{5}=4 \implies x_{1}=4+2x_{2}-x_{4}-2x_{5} $$

Summarizing, we can write all solutions of the system in vector form as

$$ \begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\x_{4}\\x_{5}\end{bmatrix} = \begin{bmatrix}4+2x_{2}-x_{4}-2x_{5}\\x_{2}\\-2+x_{4}+x_{5}\\x_{4}\\x_{5}\end{bmatrix}, \quad x_{2},x_{4},x_{5}\in\mathbb{R} $$

An alternative way to write this solution is

$$ \begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\x_{4}\\x_{5}\end{bmatrix} = \begin{bmatrix}4\\0\\-2\\0\\0\end{bmatrix} + x_{2}\begin{bmatrix}2\\1\\0\\0\\0\end{bmatrix} + x_{4}\begin{bmatrix}-1\\0\\1\\1\\0\end{bmatrix} + x_{5}\begin{bmatrix}-2\\0\\1\\0\\1\end{bmatrix} $$

In geometric terms, this gives us parametric equations for a certain 3-dimensional (affine) linear space in $\mathbb{R}^{5}$ (please do not worry if this last sentence sounds obscure). $\square$
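A quick way to double-check the outcome of an elimination like this is to substitute the parametric solution back into $Ax=b$ for arbitrary values of the free variables. A sketch assuming NumPy, using the data of Example 4.2:

```python
import numpy as np

A = np.array([[2, -4, 3, -1, 1],
              [1, -2, 0,  1, 2],
              [4, -8, 3,  1, 5]], dtype=float)
b = np.array([2, 4, 10], dtype=float)

# Particular solution plus the span of three vectors, as found in Example 4.2.
p  = np.array([ 4, 0, -2, 0, 0], dtype=float)
u2 = np.array([ 2, 1,  0, 0, 0], dtype=float)
u4 = np.array([-1, 0,  1, 1, 0], dtype=float)
u5 = np.array([-2, 0,  1, 0, 1], dtype=float)

rng = np.random.default_rng(0)
for _ in range(5):
    c2, c4, c5 = rng.standard_normal(3)   # arbitrary values of the free variables
    x = p + c2 * u2 + c4 * u4 + c5 * u5
    print(np.allclose(A @ x, b))          # True every time
```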

4.3 Effect of elementary row operations on determinants

Suppose that $A$ is an $n\times n$-matrix. Elementary row operations affect the determinant of $A$ in a predictable way:

Theorem 4.2

Let $A$ be an $n\times n$ matrix. Then,

  1. A row operation $cR_{i}\rightarrow R_{i}$ (with $c\ne0$) of type 1 multiplies the determinant of $A$ by $c$.
  2. A row operation $R_{i}\leftrightarrow R_{j}$ (with $i\ne j$) of type 2 multiplies the determinant of $A$ by $-1.$
  3. A row operation $aR_{i}+R_{j}\rightarrow R_{j}$ (with $i\ne j$) of type 3 does not change the determinant of $A$.

Proof: Omitted. $\square$

Let us note that if $\det(A)\ne0$, then after a sequence of elementary row operations the determinant of the new matrix will still remain non-zero. Similarly if $\det(A)=0$, then after each elementary row operation, the determinant will still be 0. Therefore elementary row operations do not change the invertibility of a matrix $A$.
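Theorem 4.2 is easy to test numerically. The following sketch (assuming NumPy) applies one row operation of each type to a sample matrix and compares determinants:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [5., 6., 0.]])
d = np.linalg.det(A)

# Type 1: scale row 0 by c = 3  ->  det multiplied by 3
B = A.copy(); B[0] *= 3.0
print(np.isclose(np.linalg.det(B), 3.0 * d))   # True

# Type 2: swap rows 0 and 1  ->  det multiplied by -1
C = A.copy(); C[[0, 1]] = C[[1, 0]]
print(np.isclose(np.linalg.det(C), -d))        # True

# Type 3: add 2 * row 0 to row 2  ->  det unchanged
D = A.copy(); D[2] += 2.0 * D[0]
print(np.isclose(np.linalg.det(D), d))         # True
```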

4.4 Solutions of square systems

Suppose now that $A$ is an $n\times n$ matrix. We can reduce $A$ to row echelon form by applying the steps of Gaussian elimination. There are two alternatives:

  • After Gaussian elimination, some columns do not have leading 1’s; in this case there are free variables.
  • The row echelon form of $A$ has a leading 1 in each column, in which case $A$ can be reduced to the identity matrix $I$, possibly after some further elementary row operations. In this case there are no free variables.

We can therefore deduce the following theorem:

Theorem 4.3

A matrix $A$ is invertible if and only if it can be row reduced to the identity matrix.

Proof: If $A$ can be row reduced to $I$ then $\det(A)$ cannot be zero. Hence $A$ is invertible. If $A$ cannot be row reduced to $I$, then its row reduced echelon form must have a row of zeros. Hence its determinant must be zero, consequently $A$ is not invertible. $\square$

Suppose now that we have a linear system $Ax=b$. If $A$ is invertible, then $$ A^{-1}(Ax)=A^{-1}b \implies x=A^{-1}b $$ therefore the system has a unique solution, regardless of what $b$ is. On the other hand, if $A$ is not invertible, then the row echelon form of $A$ has some columns without leading 1’s. Then we have two possibilities:

  • There are no solutions because of a contradictory equation of the form $0=c$ with $c$ a nonzero number,
  • There are no contradictory equations, and there are free variables. Therefore, there are infinitely many solutions.

Summarizing, including the invertible case, we have a total of three possibilities overall. An important special case is when $b=0$. Then we say that the system is homogeneous. Since $A\cdot 0=0$, the vector $0$ is always a solution. Hence only two of the three possibilities above remain:

  • $A$ is invertible if and only if $Ax=0$ has a unique solution, namely $x=0$ (the trivial solution),
  • $A$ is not invertible if and only if $Ax=0$ has infinitely many solutions (the system is said to have non-trivial solutions).

4.5 Inverting an invertible matrix

Suppose that $A$ is an invertible $n\times n$ matrix. How can we compute its inverse in practice? By definition, we need to solve for $X$ in the equation $AX=I.$ Suppose that the columns of $X$ are $x_{1},…,x_{n}$ respectively. Then the equation $AX=I$ is equivalent to the $n$ equations

$$ Ax_{1}=\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}, \quad Ax_{2}=\begin{bmatrix}0\\1\\\vdots\\0\end{bmatrix}, \quad \dots, \quad Ax_{n}=\begin{bmatrix}0\\0\\\vdots\\1\end{bmatrix} $$

We can solve each of these linear systems by Gaussian elimination, and put together their results in order to get $X$. A way to do this is to augment $A$ by all columns of $I$ to get

$$ [A|I] $$

and apply elementary row operations until we obtain

$$ [I|A^{-1}]. $$

This is called the Gauss-Jordan method for finding $A^{-1}$. Incidentally, even if we do not know whether $A$ is invertible or not when we begin, the row echelon form of $A$ will reveal this information. Hence we can apply this algorithm without first checking invertibility.

Example

Find the inverse of the matrix $$ A=\begin{bmatrix}1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0\end{bmatrix} $$

Solution:

$$ [A|I]=\begin{bmatrix}1 & 2 & 3 & | & 1 & 0 & 0 \\ 0 & 1 & 4 & | & 0 & 1 & 0 \\ 5 & 6 & 0 & | & 0 & 0 & 1\end{bmatrix} \xrightarrow{-5R_{1}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & 2 & 3 & | & 1 & 0 & 0 \\ 0 & 1 & 4 & | & 0 & 1 & 0 \\ 0 & -4 & -15 & | & -5 & 0 & 1\end{bmatrix} $$

$$ \xrightarrow{4R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & 2 & 3 & | & 1 & 0 & 0 \\ 0 & 1 & 4 & | & 0 & 1 & 0 \\ 0 & 0 & 1 & | & -5 & 4 & 1\end{bmatrix} $$

$$ \xrightarrow[-4R_{3}+R_{2}\rightarrow R_{2}]{-2R_{2}+R_{1}\rightarrow R_{1}} \begin{bmatrix}1 & 0 & -5 & | & 1 & -2 & 0 \\ 0 & 1 & 0 & | & 20 & -15 & -4 \\ 0 & 0 & 1 & | & -5 & 4 & 1\end{bmatrix} $$

$$ \xrightarrow[5R_{3}+R_{1}\rightarrow R_{1}]{} \begin{bmatrix}1 & 0 & 0 & | & -24 & 18 & 5 \\ 0 & 1 & 0 & | & 20 & -15 & -4 \\ 0 & 0 & 1 & | & -5 & 4 & 1\end{bmatrix} $$

Therefore $A$ is invertible and

$$ A^{-1}=\begin{bmatrix}-24 & 18 & 5 \\ 20 & -15 & -4 \\ -5 & 4 & 1\end{bmatrix} $$ $\square$
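The result of a Gauss-Jordan computation can always be verified by checking $AA^{-1}=I$. A sketch assuming NumPy, using the matrix from this example:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [5., 6., 0.]])
A_inv = np.array([[-24.,  18.,  5.],
                  [ 20., -15., -4.],
                  [ -5.,   4.,  1.]])

# Both products should be the identity matrix.
print(np.allclose(A @ A_inv, np.eye(3)))      # True
print(np.allclose(A_inv @ A, np.eye(3)))      # True
print(np.allclose(np.linalg.inv(A), A_inv))   # True
```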


5. Eigenvalues and eigenvectors of a square matrix

Definition 5.1

Suppose that $A$ is an $n\times n$ matrix. A vector $v\ne0$ is called an eigenvector for $A$ if $$ Av=\lambda v $$ for some number $\lambda$. In this case, $\lambda$ is called the eigenvalue associated to $v$.

We can visualize multiplication by $A$ as a map taking each vector in $\mathbb{R}^{n}$ to some vector in $\mathbb{R}^{n}$. Then the eigenvectors of $A$ are those vectors whose direction is unchanged or reversed by $A$. The eigenvalue represents the scaling factor of the vector under such a map. For instance, if $0<\lambda<1$, then the vector is shrunk. If $\lambda<0,$ then its direction is reversed. If $\lambda=0$, then the vector is sent to 0 (“killed”). We do not consider 0 as an eigenvector itself since its direction is ambiguous to start with. However, the number $\lambda=0$ is a valid eigenvalue.

It is a rather special condition for a given vector to be an eigenvector of $A$ since multiplication by $A$ is likely to change directions of most vectors. The intuition behind studying eigenvalues and eigenvectors is the hope that these special vectors and scaling factors might contain key information about the matrix $A$. This intuition turns out to be correct.

Example

The vector $v=\begin{bmatrix}5\\5\end{bmatrix}$ is an eigenvector for the matrix $\begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix}$ since

$$ Av=\begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix}\begin{bmatrix}5 \\ 5\end{bmatrix}=\begin{bmatrix}15 \\ 15\end{bmatrix}=3\begin{bmatrix}5 \\ 5\end{bmatrix} $$

and the eigenvalue for $v$ is 3. The vector $w=\begin{bmatrix}1\\-1\end{bmatrix}$ is also an eigenvector for the same matrix $A$ since $Aw=w$. The eigenvalue for $w$ is then 1. However, the vector $u=\begin{bmatrix}1\\4\end{bmatrix}$ is not an eigenvector for $A$ since

$$ Au=\begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix}\begin{bmatrix}1 \\ 4\end{bmatrix}=\begin{bmatrix}6 \\ 9\end{bmatrix}\ne\lambda\begin{bmatrix}1 \\ 4\end{bmatrix} $$

for any value of $\lambda$. $\square$
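The defining condition $Av=\lambda v$ is straightforward to check numerically. A sketch assuming NumPy, for the three vectors of this example:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

v = np.array([5., 5.])
w = np.array([1., -1.])
u = np.array([1., 4.])

print(np.allclose(A @ v, 3 * v))   # True: v is an eigenvector, eigenvalue 3
print(np.allclose(A @ w, w))       # True: w is an eigenvector, eigenvalue 1

# u is not an eigenvector: A @ u = [6, 9] is not a scalar multiple of [1, 4].
Au = A @ u
print(np.allclose(Au, (Au[0] / u[0]) * u))  # False
```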

5.1 Finding eigenvalues and eigenvectors

How can we find eigenvalues and eigenvectors of a given matrix? Suppose that $v\ne0$ and $Av=\lambda v.$ Since both $\lambda$ and $v$ are unknown, we cannot solve this linear system directly. But we can rewrite the equation as follows:

$$ \begin{aligned} Av-\lambda v &= 0 \\ (A-\lambda I)v &= 0 \end{aligned} $$

Since the last equation is a homogenous square system and $v$ is a non-trivial solution, the matrix $A-\lambda I$ must be non-invertible. Therefore

$$ \det(A-\lambda I)=0. $$

This is a polynomial equation in $\lambda$ and its roots are the eigenvalues of $A$. The advantage of this equation over the previous ones is that only $\lambda$ is unknown and $v$ does not appear. After finding the eigenvalues, we can go back to the linear system $(A-\lambda I)v=0$ and solve it in order to find the eigenvectors.

Example

Find the eigenvalues and the eigenvectors of the matrix

$$ A=\begin{bmatrix}1 & -2 \\ 3 & -4\end{bmatrix}. $$

Solution:

$$ \begin{aligned} \det(A-\lambda I) &= \begin{vmatrix}1-\lambda & -2 \\ 3 & -4-\lambda\end{vmatrix}=0 \\ (1-\lambda)(-4-\lambda)+6 &= 0 \\ \lambda^{2}+3\lambda+2 &= 0 \\ (\lambda+2)(\lambda+1) &= 0 \end{aligned} $$

Therefore the eigenvalues of $A$ are $\lambda_{1}=-2$ and $\lambda_{2}=-1$.

In order to find the eigenvectors for $\lambda_{1}$, solve the linear system $(A-\lambda_{1}I)v=0$:

$$ \begin{bmatrix}3 & -2 & | & 0 \\ 3 & -2 & | & 0\end{bmatrix} \xrightarrow{-R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}3 & -2 & | & 0 \\ 0 & 0 & | & 0\end{bmatrix} \xrightarrow{R_{1}/3\rightarrow R_{1}} \begin{bmatrix}1 & -2/3 & | & 0 \\ 0 & 0 & | & 0\end{bmatrix} $$

Set $v_{2}$ to be a free variable. Then the eigenvectors are $v=\begin{bmatrix}2v_{2}/3\\v_{2}\end{bmatrix}$ where $v_{2}\ne0$ is a real number.

The eigenvectors for $\lambda_{2}$ can be found by solving $(A-\lambda_{2}I)v=0$:

$$ \begin{bmatrix}2 & -2 & | & 0 \\ 3 & -3 & | & 0\end{bmatrix} \xrightarrow{R_{1}/2\rightarrow R_{1}} \begin{bmatrix}1 & -1 & | & 0 \\ 3 & -3 & | & 0\end{bmatrix} \xrightarrow{-3R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & -1 & | & 0 \\ 0 & 0 & | & 0\end{bmatrix} $$

Again, set $v_{2}$ to be free. Then the eigenvectors for $\lambda_{2}=-1$ are $v=\begin{bmatrix}v_{2}\\v_{2}\end{bmatrix}$ where $v_{2}\ne0$. $\square$
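For comparison, `np.linalg.eig` computes all eigenpairs at once. Note that it returns eigenvectors normalized to unit length; these are scalar multiples of the ones found by hand, whose directions are $(2,3)$ for $\lambda_{1}=-2$ and $(1,1)$ for $\lambda_{2}=-1$. A sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1., -2.],
              [3., -4.]])

vals, vecs = np.linalg.eig(A)
print(np.allclose(sorted(vals), [-2.0, -1.0]))  # True

# Each column of vecs is an eigenvector; check A v = lambda v for every pair.
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))          # True for both pairs
```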

Example

Find the eigenvalues and eigenvectors of the matrix

$$ A=\begin{bmatrix}1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3\end{bmatrix}. $$

Solution:

$$ \begin{aligned} \det(A-\lambda I) &= \begin{vmatrix}1-\lambda & 1 & 2 \\ 0 & 2-\lambda & 2 \\ -1 & 1 & 3-\lambda\end{vmatrix}=0 \\ &= (1-\lambda)\begin{vmatrix}2-\lambda & 2 \\ 1 & 3-\lambda\end{vmatrix} - 1\begin{vmatrix}0 & 2 \\ -1 & 3-\lambda\end{vmatrix} + 2\begin{vmatrix}0 & 2-\lambda \\ -1 & 1\end{vmatrix}=0 \\ &= (1-\lambda)[(2-\lambda)(3-\lambda)-2]-1[0-(-2)]+2[0-(-(2-\lambda))]=0 \\ &= (1-\lambda)(\lambda^{2}-5\lambda+4)+(2-2\lambda)=0 \\ &= (1-\lambda)(\lambda^{2}-5\lambda+6)=0 \\ &= (1-\lambda)(\lambda-2)(\lambda-3)=0 \end{aligned} $$

Therefore $\lambda_{1}=1$, $\lambda_{2}=2$, $\lambda_{3}=3$.

Eigenvectors for $\lambda_{1}=1$ are solutions of $(A-I)v=0$:

$$ \begin{bmatrix}0 & 1 & 2 & | & 0 \\ 0 & 1 & 2 & | & 0 \\ -1 & 1 & 2 & | & 0\end{bmatrix} \xrightarrow{R_{1}\leftrightarrow R_{3}} \begin{bmatrix}-1 & 1 & 2 & | & 0 \\ 0 & 1 & 2 & | & 0 \\ 0 & 1 & 2 & | & 0\end{bmatrix} \xrightarrow{-R_{1}\rightarrow R_{1}} \begin{bmatrix}1 & -1 & -2 & | & 0 \\ 0 & 1 & 2 & | & 0 \\ 0 & 1 & 2 & | & 0\end{bmatrix} $$

$$ \xrightarrow{-R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & -1 & -2 & | & 0 \\ 0 & 1 & 2 & | & 0 \\ 0 & 0 & 0 & | & 0\end{bmatrix} \xrightarrow{R_{2}+R_{1}\rightarrow R_{1}} \begin{bmatrix}1 & 0 & 0 & | & 0 \\ 0 & 1 & 2 & | & 0 \\ 0 & 0 & 0 & | & 0\end{bmatrix} $$

(Although the last step is not necessary, it simplifies the set of equations.) Since column 3 does not have a leading 1, we set $v_{3}$ to be a free variable. Therefore, $v=\begin{bmatrix}0 & -2v_{3} & v_{3}\end{bmatrix}^{T}$ with $v_{3}\ne0$ are the eigenvectors for $\lambda_{1}=1$.

Eigenvectors for $\lambda_{2}=2$ are solutions of $(A-2I)v=0$:

$$ \begin{bmatrix}-1 & 1 & 2 & | & 0 \\ 0 & 0 & 2 & | & 0 \\ -1 & 1 & 1 & | & 0\end{bmatrix} \xrightarrow{-R_{1}\rightarrow R_{1}} \begin{bmatrix}1 & -1 & -2 & | & 0 \\ 0 & 0 & 2 & | & 0 \\ -1 & 1 & 1 & | & 0\end{bmatrix} \xrightarrow{R_{1}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & -1 & -2 & | & 0 \\ 0 & 0 & 2 & | & 0 \\ 0 & 0 & -1 & | & 0\end{bmatrix} $$

$$ \xrightarrow{R_{2}/2\rightarrow R_{2}} \begin{bmatrix}1 & -1 & -2 & | & 0 \\ 0 & 0 & 1 & | & 0 \\ 0 & 0 & -1 & | & 0\end{bmatrix} \xrightarrow{R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & -1 & -2 & | & 0 \\ 0 & 0 & 1 & | & 0 \\ 0 & 0 & 0 & | & 0\end{bmatrix} $$

This time, column 2 does not have a leading 1, therefore we set $v_{2}$ to be a free variable. The eigenvectors are $v=\begin{bmatrix}v_{2} & v_{2} & 0\end{bmatrix}^{T}$ with $v_{2}\ne0$.

Eigenvectors for $\lambda_{3}=3$ are solutions of $(A-3I)v=0$:

$$ \begin{bmatrix}-2 & 1 & 2 & | & 0 \\ 0 & -1 & 2 & | & 0 \\ -1 & 1 & 0 & | & 0\end{bmatrix} \xrightarrow{R_{1}\leftrightarrow R_{3}} \begin{bmatrix}-1 & 1 & 0 & | & 0 \\ 0 & -1 & 2 & | & 0 \\ -2 & 1 & 2 & | & 0\end{bmatrix} \xrightarrow{-R_{1}\rightarrow R_{1}} \begin{bmatrix}1 & -1 & 0 & | & 0 \\ 0 & -1 & 2 & | & 0 \\ -2 & 1 & 2 & | & 0\end{bmatrix} $$

$$ \xrightarrow{2R_{1}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & -1 & 0 & | & 0 \\ 0 & -1 & 2 & | & 0 \\ 0 & -1 & 2 & | & 0\end{bmatrix} \xrightarrow{-R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & -1 & 0 & | & 0 \\ 0 & 1 & -2 & | & 0 \\ 0 & -1 & 2 & | & 0\end{bmatrix} $$

$$ \xrightarrow{R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & -1 & 0 & | & 0 \\ 0 & 1 & -2 & | & 0 \\ 0 & 0 & 0 & | & 0\end{bmatrix} $$

The third column does not contain a leading 1, so we set $v_{3}$ to be a free variable. The eigenvectors for $\lambda_{3}=3$ are $v=\begin{bmatrix}2v_{3} & 2v_{3} & v_{3}\end{bmatrix}^{T}$ with $v_{3}\ne0$. $\square$

Example

Find all eigenvalues and eigenvectors of the matrix $\begin{bmatrix}1 & -3 \\ 5 & -3\end{bmatrix}.$

Solution:

$$ \begin{aligned} \det(A-\lambda I) &= \begin{vmatrix}1-\lambda & -3 \\ 5 & -3-\lambda\end{vmatrix}=0 \\ (1-\lambda)(-3-\lambda)+15 &= 0 \\ \lambda^{2}+2\lambda+12 &= 0 \\ (\lambda+1)^{2} &= -11 \\ \lambda_{1} &= -1+i\sqrt{11} \\ \lambda_{2} &= -1-i\sqrt{11} \end{aligned} $$

Notice that the eigenvalues are complex numbers, and that $\lambda_{2}=\overline{\lambda_{1}}$. For a real matrix $A$ we have $\overline{Av}=\overline{A}\,\overline{v}=A\overline{v}$, so complex eigenvalues must arise in conjugate pairs since

$$ Av=\lambda v \implies \overline{Av}=\overline{\lambda v} \implies A\overline{v}=\overline{\lambda}\overline{v} $$

Additionally, we obtain that the eigenvectors for $\overline{\lambda}$ are complex conjugates of the eigenvectors for $\lambda$. So it is enough to compute the eigenvectors of $\lambda_{1}=-1+i\sqrt{11}:$

$$ \begin{bmatrix}2-i\sqrt{11} & -3 & | & 0 \\ 5 & -2-i\sqrt{11} & | & 0\end{bmatrix} \xrightarrow{R_{1}/(2-i\sqrt{11})\rightarrow R_{1}} \begin{bmatrix}1 & -2/5-i\sqrt{11}/5 & | & 0 \\ 5 & -2-i\sqrt{11} & | & 0\end{bmatrix} $$

$$ \xrightarrow{-5R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & -2/5-i\sqrt{11}/5 & | & 0 \\ 0 & 0 & | & 0\end{bmatrix} $$

So we can set $v_{2}$ to be a free variable and the eigenvectors are $v=\begin{bmatrix}(2/5+i\sqrt{11}/5)v_{2}\\v_{2}\end{bmatrix}$ where $v_{2}\ne0$ is a complex number. There are no real eigenvectors in this case; any eigenvector must have a nonzero imaginary part. $\square$
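The conjugate-pair structure can also be observed numerically. In this sketch (assuming NumPy), the two computed eigenvalues are $-1\pm i\sqrt{11}$ and the two eigenvectors are complex conjugates of each other up to a scalar factor:

```python
import numpy as np

A = np.array([[1., -3.],
              [5., -3.]])

vals, vecs = np.linalg.eig(A)

# Eigenvalues form the conjugate pair -1 +/- i*sqrt(11).
expected = [-1 - 1j * np.sqrt(11), -1 + 1j * np.sqrt(11)]
print(np.allclose(sorted(vals, key=lambda z: z.imag), expected))  # True

# The eigenvectors are conjugates of each other up to scaling: the
# elementwise ratio conj(v1)/v2 is a constant vector.
v1, v2 = vecs[:, 0], vecs[:, 1]
ratio = np.conj(v1) / v2
print(np.allclose(ratio, ratio[0]))  # True: proportional, same direction
```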