ATTENTION / DISCLAIMER: This document was authored by Prof. Dr. Ali Özgür Kişisel (METU Mathematics Department), not by Selim Kaan Ozsoy. This .tex version was generated from the original PDF lecture notes using AI.

MATH 219 Spring 2025 Lecture 11 Lecture notes by Özgür Kişisel

Content: Fundamental Matrices. Repeated eigenvalues.

Suggested Problems: (Boyce, Di Prima, 10th edition)

  • §7.7: 5, 7, 10, 11
  • §7.8: 3c, 4c, 6, 9, 18, 19, 20

1. Matrix exponentials, fundamental matrices

Let $A$ be a constant $n\times n$ matrix. Using the Taylor expansion of $e^{x}$ one can define:

$$ e^{At}=\sum_{k=0}^{\infty}\frac{(At)^{k}}{k!}. $$

It is a fact that the infinite series obtained for each matrix entry on the right hand side is convergent for any choice of constant real matrix $A$. Furthermore, one can apply term by term differentiation and get

$$ \frac{d}{dt}(e^{At})=\sum_{k=0}^{\infty}\frac{d}{dt}\left(\frac{(At)^{k}}{k!}\right)=\sum_{k=1}^{\infty}A\frac{(At)^{k-1}}{(k-1)!}=Ae^{At}. $$

Therefore each column vector of $e^{At}$ satisfies the homogeneous linear system $\frac{dx}{dt}=Ax$ that we are trying to solve.
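As a quick numerical sanity check (not part of the original notes; it assumes NumPy and SciPy are available, and the matrix is illustrative), the partial sums of the defining Taylor series indeed converge to the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Sanity check: the truncated Taylor series sum_{k=0}^{N} (At)^k / k!
# approaches e^{At} as N grows.
A = np.array([[4.0, 1.0], [1.0, 4.0]])   # illustrative constant matrix
t = 0.5

partial = np.zeros((2, 2))
term = np.eye(2)                          # (At)^0 / 0!
for k in range(30):
    partial = partial + term
    term = term @ (A * t) / (k + 1)       # next term from the previous one

exact = expm(A * t)                       # SciPy's matrix exponential
assert np.allclose(partial, exact)
```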

Lemma 1.1

The column vectors of the matrix $e^{At}$ are linearly independent. (Equivalently, $e^{At}$ is an invertible matrix.)

Proof: Say $x^{(1)},…,x^{(n)}$ are the column vectors of $e^{At}$. Suppose that $c_{1}x^{(1)}+…+c_{n}x^{(n)}=0$. Note that this equation is equivalent to the matrix equation

$$ e^{At}\begin{bmatrix}c_{1}\\ c_{2}\\ \vdots\\ c_{n}\end{bmatrix}=0. $$

Put $t=0$. Then $e^{A\cdot 0}=I$, therefore $c_{1}=…=c_{n}=0$. Hence the columns of $e^{At}$ are linearly independent. $\square$

Theorem 1.1

Let $A$ be a constant $n\times n$ matrix. The columns of $e^{At}$ give us a basis for the solution space of

$$ \frac{dx}{dt}=Ax. $$

Proof: This is a direct consequence of the previous lemma and the basic theory discussed in lecture 9. $\square$

Caution: It is essential that $A$ is a constant matrix in order for the equation $\frac{d}{dt}(e^{At})=Ae^{At}$ to be correct. Exactly for this reason, if $A$ is not a constant matrix, then the exponential matrix $e^{At}$ becomes practically useless for our purposes.

Definition 1.1

Consider the homogeneous linear system $\frac{dx}{dt}=A(t)x$ (where $A$ may or may not be a constant matrix this time). An $n\times n$ matrix $\Psi(t)$ is called a fundamental matrix for this system if

  1. $\frac{d\Psi}{dt}=A\Psi$
  2. $\Psi$ is invertible.

If $A$ is a constant matrix then the above discussion shows us that $\Phi(t)=e^{At}$ is a fundamental matrix for the system, with the additional property $\Phi(0)=I.$
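The two defining properties of a fundamental matrix can be checked numerically for $\Phi(t)=e^{At}$; the sketch below (not part of the notes; the matrix is arbitrary and SciPy is assumed) verifies $\Phi(0)=I$ and $\frac{d\Phi}{dt}=A\Phi$ with a central finite difference:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check: Phi(t) = e^{At} satisfies Phi(0) = I
# and Phi'(t) = A Phi(t).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Phi = lambda s: expm(A * s)

assert np.allclose(Phi(0.0), np.eye(2))      # Phi(0) = I

t, h = 1.0, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)   # central finite difference
assert np.allclose(dPhi, A @ Phi(t), atol=1e-6)
```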


2. Eigenvectors and $e^{At}$

If $A$ is a constant matrix then the discussion above showed us that finding $e^{At}$ will give us all solutions of the system $\frac{dx}{dt}=Ax.$ This is very enticing. But how can we compute this exponential matrix in practice? We should not exponentiate the entries one by one, since that will not give us the same result as the Taylor series, which was the actual definition. Computing the Taylor series directly is very cumbersome if the matrix has many non-zero entries. We need another systematic way to compute the exponential. We start from the easiest possible cases and progress towards the general case:

1. Diagonal case: Suppose that $A$ is a diagonal matrix

$$ A=\begin{bmatrix}d_{1} & 0 & … & 0 \\ 0 & d_{2} & … & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & … & d_{n}\end{bmatrix} $$

In this case the computation of $e^{At}$ is very easy, since multiplying diagonal matrices with each other is very easy:

$$ \begin{aligned} e^{At} &= \sum_{k=0}^{\infty}\frac{(At)^{k}}{k!} \\ &= \sum_{k=0}^{\infty}\frac{1}{k!}\begin{bmatrix}(d_{1}t)^{k} & 0 & … & 0 \\ 0 & (d_{2}t)^{k} & … & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & … & (d_{n}t)^{k}\end{bmatrix} \\ &= \begin{bmatrix}e^{d_{1}t} & 0 & … & 0 \\ 0 & e^{d_{2}t} & … & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & … & e^{d_{n}t}\end{bmatrix} \end{aligned} $$
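This diagonal computation is easy to confirm numerically; the snippet below (not from the notes; the diagonal entries are illustrative and SciPy is assumed) compares entrywise exponentiation of the diagonal with the true matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# For diagonal A, e^{At} just exponentiates the diagonal entries.
d = np.array([1.0, -2.0, 3.0])   # illustrative diagonal entries d_1, d_2, d_3
t = 0.7

assert np.allclose(expm(np.diag(d) * t), np.diag(np.exp(d * t)))
```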

Notice that it is hopeless to generalize this computation to non-diagonal matrices in the same way, since matrix multiplications at each step will become much more complicated in general.

2. Diagonalizable case: Next, we assume that the $n\times n$ matrix $A$ has $n$ linearly independent eigenvectors $v^{(1)},…,v^{(n)}$ with eigenvalues $\lambda_{1},…,\lambda_{n}$ respectively. Some of these eigenvalues may be equal. Then we have the following nice matrix equation:

$$ \begin{aligned} A\begin{bmatrix}v^{(1)} & \vert & … & \vert & v^{(n)}\end{bmatrix} &= \begin{bmatrix}\lambda_{1}v^{(1)} & \vert & … & \vert & \lambda_{n}v^{(n)}\end{bmatrix} \\ &= \begin{bmatrix}v^{(1)} & \vert & … & \vert & v^{(n)}\end{bmatrix}\begin{bmatrix}\lambda_{1} & 0 & … & 0 \\ 0 & \lambda_{2} & … & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & … & \lambda_{n}\end{bmatrix} \end{aligned} $$

So, if we say $P=\begin{bmatrix}v^{(1)} & \vert & … & \vert & v^{(n)}\end{bmatrix}$ and $D=\begin{bmatrix}\lambda_{1} & 0 & … & 0 \\ 0 & \lambda_{2} & … & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & … & \lambda_{n}\end{bmatrix}$, then this equation can be written in the form $AP=PD$. Since the eigenvectors are assumed to be linearly independent, the matrix $P$ is invertible and we have

$$ A=PDP^{-1} $$

In this case, we say that $A$ is diagonalizable. Using the last equation, we can compute $e^{At}$ by using some beautiful matrix algebra:

$$ \begin{aligned} A^{k} &= (PDP^{-1})(PDP^{-1})…(PDP^{-1}) \\ &= PDP^{-1}PDP^{-1}P…P^{-1}PDP^{-1} \\ &= PD^{k}P^{-1} \end{aligned} $$

Here, the associative property of matrix multiplication was used.

$$ \begin{aligned} e^{At} &= \sum_{k=0}^{\infty}\frac{(At)^{k}}{k!} \\ &= \sum_{k=0}^{\infty}\frac{PD^{k}P^{-1}}{k!}t^{k} \\ &= P\left(\sum_{k=0}^{\infty}\frac{D^{k}}{k!}t^{k}\right)P^{-1} \\ &= Pe^{Dt}P^{-1} \end{aligned} $$

Since $D$ is a diagonal matrix, the middle term in this product is easy to compute as in case 1. So this gives us a very manageable way to compute the matrix exponential.

Example

Let $A=\begin{bmatrix}4 & 1 \\ 1 & 4\end{bmatrix}$. Find $e^{At}$ and consequently find all solutions of the homogeneous system $\frac{dx}{dt}=Ax.$

Solution: $$ \det(A-\lambda I)=\begin{vmatrix}4-\lambda & 1 \\ 1 & 4-\lambda\end{vmatrix}=(5-\lambda)(3-\lambda) $$ therefore the eigenvalues are $\lambda_{1}=3$ and $\lambda_{2}=5$.

Let us find the eigenvectors for $\lambda_{1}=3$: $$ \begin{bmatrix}1 & 1 & \vert & 0 \\ 1 & 1 & \vert & 0\end{bmatrix} \xrightarrow{-R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & 1 & \vert & 0 \\ 0 & 0 & \vert & 0\end{bmatrix} $$ so the eigenvectors for $\lambda_{1}=3$ are vectors of the form $k\begin{bmatrix}-1 \\ 1\end{bmatrix}$ with $k\ne 0.$

Next, let us find the eigenvectors for $\lambda_{2}=5$: $$ \begin{bmatrix}-1 & 1 & \vert & 0 \\ 1 & -1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}\leftrightarrow R_{2}} \begin{bmatrix}1 & -1 & \vert & 0 \\ -1 & 1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & -1 & \vert & 0 \\ 0 & 0 & \vert & 0\end{bmatrix} $$ so the eigenvectors for $\lambda_{2}=5$ are vectors of the form $k\begin{bmatrix}1 \\ 1\end{bmatrix}$ with $k\ne 0$.

Let us pick $v^{(1)}=\begin{bmatrix}-1 \\ 1\end{bmatrix}$ and $v^{(2)}=\begin{bmatrix}1 \\ 1\end{bmatrix}$. Then $$ P=\begin{bmatrix}-1 & 1 \\ 1 & 1\end{bmatrix}, \quad D=\begin{bmatrix}3 & 0 \\ 0 & 5\end{bmatrix} $$ We can compute $P^{-1}$ to be $$ P^{-1}=\frac{1}{2}\begin{bmatrix}-1 & 1 \\ 1 & 1\end{bmatrix} $$ Hence, $$ \begin{aligned} e^{At} &= Pe^{Dt}P^{-1} \\ &= \begin{bmatrix}-1 & 1 \\ 1 & 1\end{bmatrix}\begin{bmatrix}e^{3t} & 0 \\ 0 & e^{5t}\end{bmatrix}\frac{1}{2}\begin{bmatrix}-1 & 1 \\ 1 & 1\end{bmatrix} \\ &= \frac{1}{2}\begin{bmatrix}e^{3t}+e^{5t} & -e^{3t}+e^{5t} \\ -e^{3t}+e^{5t} & e^{3t}+e^{5t}\end{bmatrix} \end{aligned} $$ The columns of $e^{At}$ are linearly independent solutions of the system. We deduce that all solutions of the system are: $$ x=c_{1}\begin{bmatrix}e^{3t}+e^{5t} \\ -e^{3t}+e^{5t}\end{bmatrix}+c_{2}\begin{bmatrix}-e^{3t}+e^{5t} \\ e^{3t}+e^{5t}\end{bmatrix} $$ (Forgetting the $1/2$ would not be a big sin, since it can be absorbed into the constants $c_{1}$ and $c_{2}$.) $\square$
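The closed form found in this example can be verified numerically (a check that is not in the notes; it assumes NumPy and SciPy, with an arbitrary value of $t$):

```python
import numpy as np
from scipy.linalg import expm

# Check of the example: the closed form for e^{At} agrees with
# P e^{Dt} P^{-1} and with SciPy's expm.
A = np.array([[4.0, 1.0], [1.0, 4.0]])
P = np.array([[-1.0, 1.0], [1.0, 1.0]])
t = 0.3

rhs = P @ np.diag([np.exp(3 * t), np.exp(5 * t)]) @ np.linalg.inv(P)
closed = 0.5 * np.array(
    [[np.exp(3 * t) + np.exp(5 * t), -np.exp(3 * t) + np.exp(5 * t)],
     [-np.exp(3 * t) + np.exp(5 * t), np.exp(3 * t) + np.exp(5 * t)]])

assert np.allclose(expm(A * t), rhs)
assert np.allclose(expm(A * t), closed)
```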

3. Jordan form: Suppose now that the $n\times n$ matrix $A$ does not have $n$ linearly independent eigenvectors. Therefore this case will cover all the remaining possibilities. We then say that the matrix $A$ is not diagonalizable, since it will be impossible to write $A$ in the form $PDP^{-1}$ for a diagonal matrix $D$. It is natural to look for the “next best” alternative to being diagonalizable.

Definition 2.1

A matrix $J_{i}$ is said to be a Jordan block if it has the following form

$$ J_{i}=\begin{bmatrix}\lambda_{i} & 1 & 0 & … & 0 \\ 0 & \lambda_{i} & 1 & … & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & … & 0 & \lambda_{i} & 1 \\ 0 & … & 0 & 0 & \lambda_{i}\end{bmatrix} $$

for some real or complex value of $\lambda_{i}$ (here, the diagonal entries are all equal to the same number $\lambda_{i}$, entries immediately above the diagonal are 1, all other entries are 0).

A matrix $J$ is said to be in Jordan form if it can be written in the block form

$$ J=\begin{bmatrix}J_{1} & 0 & … & 0 \\ 0 & J_{2} & … & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & … & J_{k}\end{bmatrix} $$

where $J_{1},…,J_{k}$ are Jordan blocks of various sizes and $0$’s denote zero matrices of appropriate sizes.

The following theorem is a classical result in linear algebra whose proof is beyond the scope of this course:

Theorem 2.1

For every $n\times n$ real (or complex) matrix $A$ there exists an invertible matrix $P$ (possibly with complex entries, even when $A$ is real) such that $P^{-1}AP$ is in Jordan form.

Assuming the existence of the Jordan form guaranteed by the theorem above, we will concentrate on how one can find $J$ and $P$ for a given matrix $A$. Let us first think about what a single Jordan block says about the column vectors of $P$. Suppose that

$$ J=\begin{bmatrix}\lambda & 1 & 0 & … & 0 \\ 0 & \lambda & 1 & … & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & … & 0 & \lambda & 1 \\ 0 & … & 0 & 0 & \lambda\end{bmatrix} $$

is a Jordan block and $P=\begin{bmatrix}v^{(1)} & \vert & v^{(2)} & \vert & … & \vert & v^{(n)}\end{bmatrix}$. Then the equation $AP=PJ$ can also be written as follows:

$$ \begin{aligned} A\begin{bmatrix}v^{(1)} & \vert & … & \vert & v^{(n)}\end{bmatrix} &= \begin{bmatrix}v^{(1)} & \vert & … & \vert & v^{(n)}\end{bmatrix}J \\ &= \begin{bmatrix}\lambda v^{(1)} & \vert & v^{(1)}+\lambda v^{(2)} & \vert & v^{(2)}+\lambda v^{(3)} & \vert & … & \vert & v^{(n-1)}+\lambda v^{(n)}\end{bmatrix} \end{aligned} $$

Therefore the column vectors of $P$ must obey the equations

$$ \begin{aligned} Av^{(1)} &= \lambda v^{(1)} \\ Av^{(2)} &= v^{(1)}+\lambda v^{(2)} \\ &\vdots \\ Av^{(n)} &= v^{(n-1)}+\lambda v^{(n)} \end{aligned} $$

or equivalently

$$ \begin{aligned} (A-\lambda I)v^{(1)} &= 0 \\ (A-\lambda I)v^{(2)} &= v^{(1)} \\ (A-\lambda I)v^{(3)} &= v^{(2)} \\ &\vdots \\ (A-\lambda I)v^{(n)} &= v^{(n-1)} \end{aligned} $$

This tells us that $v^{(1)}$ is an honest eigenvector but the other columns are not eigenvectors. They satisfy equations similar to the eigenvector equation. They are sometimes called generalized eigenvectors. Also note that $(A-\lambda I)^{k}v^{(k)}=0$, therefore these vectors are “killed” by higher and higher powers of the matrix $A-\lambda I.$

Therefore, if we somehow knew that the Jordan form of $A$ has a single Jordan block, the strategy to find it would be clear: Find $v^{(1)}$ using the eigenvector equation. Then subsequently solve for $v^{(2)}, v^{(3)},…,v^{(n)}$ using the remaining equations.

Example

Find the Jordan form of the matrix $A=\begin{bmatrix}1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4\end{bmatrix}$. Write $A=PJP^{-1}$.

Solution: $$ \begin{aligned} \det(A-\lambda I) &= \begin{vmatrix}1-\lambda & 1 & 1 \\ 2 & 1-\lambda & -1 \\ -3 & 2 & 4-\lambda\end{vmatrix} \\ &= (1-\lambda)\begin{vmatrix}1-\lambda & -1 \\ 2 & 4-\lambda\end{vmatrix}-1\begin{vmatrix}2 & -1 \\ -3 & 4-\lambda\end{vmatrix}+1\begin{vmatrix}2 & 1-\lambda \\ -3 & 2\end{vmatrix} \\ &= (1-\lambda)(\lambda^{2}-5\lambda+6)-(5-2\lambda)+(7-3\lambda) \\ &= -\lambda^{3}+6\lambda^{2}-12\lambda+8 \\ &= (2-\lambda)^{3} \end{aligned} $$ therefore the only eigenvalue is $\lambda=2$. Let us find the eigenvectors: $$ \begin{bmatrix}-1 & 1 & 1 & \vert & 0 \\ 2 & -1 & -1 & \vert & 0 \\ -3 & 2 & 2 & \vert & 0\end{bmatrix} \xrightarrow[{-3R_{1}+R_{3}\rightarrow R_{3}}]{2R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}-1 & 1 & 1 & \vert & 0 \\ 0 & 1 & 1 & \vert & 0 \\ 0 & -1 & -1 & \vert & 0\end{bmatrix} $$ $$ \xrightarrow{R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}-1 & 1 & 1 & \vert & 0 \\ 0 & 1 & 1 & \vert & 0 \\ 0 & 0 & 0 & \vert & 0\end{bmatrix} $$ Therefore the eigenvectors are $v=k\begin{bmatrix}0 \\ -1 \\ 1\end{bmatrix}$. We can only pick one independent eigenvector. Each Jordan block in the Jordan form would give us one independent eigenvector, therefore there must be only one Jordan block in this example.

Therefore $$ J=\begin{bmatrix}2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2\end{bmatrix} $$ Set $v^{(1)}=\begin{bmatrix}0 \\ -1 \\ 1\end{bmatrix}.$ Solve for $v^{(2)}$ using $(A-2I)v^{(2)}=v^{(1)}$: $$ \begin{bmatrix}-1 & 1 & 1 & \vert & 0 \\ 2 & -1 & -1 & \vert & -1 \\ -3 & 2 & 2 & \vert & 1\end{bmatrix} \xrightarrow[{-3R_{1}+R_{3}\rightarrow R_{3}}]{2R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}-1 & 1 & 1 & \vert & 0 \\ 0 & 1 & 1 & \vert & -1 \\ 0 & -1 & -1 & \vert & 1\end{bmatrix} $$ $$ \xrightarrow{R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}-1 & 1 & 1 & \vert & 0 \\ 0 & 1 & 1 & \vert & -1 \\ 0 & 0 & 0 & \vert & 0\end{bmatrix} $$ Therefore the solutions are of the form $\begin{bmatrix}-1 \\ -1-k \\ k\end{bmatrix}$. We are free to choose $v^{(2)}$ as any member of this set. For instance, choose $v^{(2)}=\begin{bmatrix}-1 \\ -1 \\ 0\end{bmatrix}.$

Next, solve for $v^{(3)}$ using $(A-2I)v^{(3)}=v^{(2)}$: $$ \begin{bmatrix}-1 & 1 & 1 & \vert & -1 \\ 2 & -1 & -1 & \vert & -1 \\ -3 & 2 & 2 & \vert & 0\end{bmatrix} \xrightarrow[{-3R_{1}+R_{3}\rightarrow R_{3}}]{2R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}-1 & 1 & 1 & \vert & -1 \\ 0 & 1 & 1 & \vert & -3 \\ 0 & -1 & -1 & \vert & 3\end{bmatrix} $$ $$ \xrightarrow{R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}-1 & 1 & 1 & \vert & -1 \\ 0 & 1 & 1 & \vert & -3 \\ 0 & 0 & 0 & \vert & 0\end{bmatrix} $$ The solutions are $\begin{bmatrix}-2 \\ -3-k \\ k\end{bmatrix}$. Again, we are free to make a choice here, pick for instance $v^{(3)}=\begin{bmatrix}-2 \\ -3 \\ 0\end{bmatrix}$.

Therefore for the matrix $P=\begin{bmatrix}0 & -1 & -2 \\ -1 & -1 & -3 \\ 1 & 0 & 0\end{bmatrix}$ and for the Jordan matrix $J$ above we will have $A=PJP^{-1}$. $\square$
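The decomposition found in this example is easy to double-check numerically (a verification that is not in the notes; it assumes NumPy):

```python
import numpy as np

# Check of the example: the eigenvector and generalized eigenvectors
# assembled into P satisfy A = P J P^{-1}.
A = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, -1.0], [-3.0, 2.0, 4.0]])
J = np.array([[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]])
P = np.array([[0.0, -1.0, -2.0], [-1.0, -1.0, -3.0], [1.0, 0.0, 0.0]])

assert np.allclose(A @ P, P @ J)                  # AP = PJ
assert np.allclose(A, P @ J @ np.linalg.inv(P))   # hence A = P J P^{-1}
```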


3. Exponentiating a matrix in Jordan form

Suppose now that $A=PJP^{-1}$ where $J$ is a matrix in Jordan form. Just like in the diagonalizable case, we can use this equality in order to compute $e^{At}.$ First of all, we have:

$$ \begin{aligned} e^{At} &= \sum_{k=0}^{\infty}\frac{(At)^{k}}{k!} \\ &= \sum_{k=0}^{\infty}\frac{(PJP^{-1}t)^{k}}{k!} \\ &= \sum_{k=0}^{\infty}\frac{P(Jt)^{k}P^{-1}}{k!} \\ &= Pe^{Jt}P^{-1}. \end{aligned} $$

So, it will be enough to determine $e^{Jt}$. It is enough to consider the case where $J$ is a single Jordan block, since different Jordan blocks will not interact with each other in matrix exponentiation.

Theorem 3.1

If $J=\begin{bmatrix}\lambda & 1 & 0 & … & 0 \\ 0 & \lambda & 1 & … & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & … & 0 & \lambda & 1 \\ 0 & … & 0 & 0 & \lambda\end{bmatrix}$ is an $n\times n$ Jordan block then

$$ e^{Jt}=e^{\lambda t}\begin{bmatrix}1 & t & \frac{t^{2}}{2!} & … & \frac{t^{n-1}}{(n-1)!} \\ 0 & 1 & t & … & \frac{t^{n-2}}{(n-2)!} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & … & … & 1 & t \\ 0 & … & … & 0 & 1\end{bmatrix} $$

Proof: (This proof could be skipped at the first reading. Using the statement correctly will be essential for what follows, however.) Let us first compute powers of $J$. We assert that

$$ J^{k}=\begin{bmatrix}\lambda^{k} & \binom{k}{1}\lambda^{k-1} & \binom{k}{2}\lambda^{k-2} & … & \binom{k}{n-1}\lambda^{k-n+1} \\ 0 & \lambda^{k} & \binom{k}{1}\lambda^{k-1} & … & \binom{k}{n-2}\lambda^{k-n+2} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & … & 0 & \lambda^{k} & \binom{k}{1}\lambda^{k-1} \\ 0 & … & 0 & 0 & \lambda^{k}\end{bmatrix} $$

where $\binom{k}{i}=\frac{k!}{(k-i)!i!}$ is the binomial coefficient. This can be proven by induction using the identity $\binom{k}{i-1}+\binom{k}{i}=\binom{k+1}{i}$. The $ij$-entry of such a matrix depends only on $j-i$, hence it is enough to look at the first row.

The $1j$-entry of $e^{Jt}=\sum_{k=0}^{\infty}\frac{(Jt)^{k}}{k!}$ is

$$ \sum_{k=0}^{\infty}\binom{k}{j-1}\frac{\lambda^{k+1-j}t^{k}}{k!}=\frac{t^{j-1}}{(j-1)!}\sum_{k=0}^{\infty}\frac{\lambda^{k+1-j}t^{k+1-j}}{(k+1-j)!}=\frac{t^{j-1}}{(j-1)!}e^{\lambda t} $$

and this proves the claim. (Note: we set $\binom{k}{i}=0$ whenever $i>k$, so the series above effectively starts at $k=j-1$.) $\square$
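The statement of the theorem can be tested numerically on a single block; the sketch below (not part of the notes; $\lambda$, $t$, and the block size are arbitrary illustrative values, and SciPy is assumed) builds the claimed matrix entrywise and compares it with the true exponential:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Check of Theorem 3.1 for a single 4x4 Jordan block: the (i,j) entry
# of e^{Jt} should be e^{lam*t} * t^(j-i) / (j-i)! for j >= i.
lam, n, t = -1.5, 4, 0.8
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)   # Jordan block

formula = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        formula[i, j] = np.exp(lam * t) * t ** (j - i) / factorial(j - i)

assert np.allclose(expm(J * t), formula)
```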

Summarizing, provided that we can find the Jordan form of the matrix $A$, we can use the formulas above in order to compute $e^{At}=Pe^{Jt}P^{-1}.$ In particular, the columns of $e^{At}$ will give us $n$ linearly independent solutions of $x^{\prime}=Ax$. Linear combinations of these columns can be written as $e^{At}c$ where $c$ is a column vector of constants.

There is a slightly more economical way to find the solutions of the system of differential equations: $e^{At}c=Pe^{Jt}P^{-1}c.$ Since $P^{-1}c$ is again a constant vector, we can rename it as $c$. Then the solution takes the form $\Psi(t)c$ where $\Psi(t)=Pe^{Jt}$, and we do not have to go through the extra computation of finding $P^{-1}$.

Example

Solve the homogeneous system

$$ x^{\prime}=\begin{bmatrix}1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4\end{bmatrix}x $$

Solution: The coefficient matrix $A$ above is the same as the one in the Jordan form example above. There we found $A=PJP^{-1}$, namely

$$ A=P\begin{bmatrix}2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2\end{bmatrix}P^{-1} $$

where $P=\begin{bmatrix}0 & -1 & -2 \\ -1 & -1 & -3 \\ 1 & 0 & 0\end{bmatrix}$. Therefore,

$$ \begin{aligned} x &= \Psi(t)c=Pe^{Jt}c \\ &= \begin{bmatrix}0 & -1 & -2 \\ -1 & -1 & -3 \\ 1 & 0 & 0\end{bmatrix}e^{2t}\begin{bmatrix}1 & t & t^{2}/2 \\ 0 & 1 & t \\ 0 & 0 & 1\end{bmatrix}c \\ &= e^{2t}\begin{bmatrix}0 & -1 & -t-2 \\ -1 & -t-1 & -t^{2}/2-t-3 \\ 1 & t & t^{2}/2\end{bmatrix}\begin{bmatrix}c_{1} \\ c_{2} \\ c_{3}\end{bmatrix} \\ &= c_{1}\begin{bmatrix}0 \\ -e^{2t} \\ e^{2t}\end{bmatrix}+c_{2}\begin{bmatrix}-e^{2t} \\ (-t-1)e^{2t} \\ te^{2t}\end{bmatrix}+c_{3}\begin{bmatrix}(-t-2)e^{2t} \\ (-t^{2}/2-t-3)e^{2t} \\ t^{2}e^{2t}/2\end{bmatrix} \end{aligned} $$

where $c_{1}, c_{2}, c_{3}$ are arbitrary real numbers. $\square$
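That $\Psi(t)=Pe^{Jt}$ really solves the system can be confirmed with a finite-difference check (not part of the notes; it assumes NumPy, and the value of $t$ is arbitrary):

```python
import numpy as np

# Finite-difference check that Psi(t) = P e^{Jt} computed above
# satisfies Psi'(t) = A Psi(t), so its columns solve the system.
A = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, -1.0], [-3.0, 2.0, 4.0]])

def Psi(t):
    return np.exp(2 * t) * np.array(
        [[0.0, -1.0, -t - 2.0],
         [-1.0, -t - 1.0, -t**2 / 2 - t - 3.0],
         [1.0, t, t**2 / 2]])

t, h = 0.4, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)   # central finite difference
assert np.allclose(dPsi, A @ Psi(t), atol=1e-5)
```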

Example

Solve the system $x^{\prime}=\begin{bmatrix}7 & 0 & 1 \\ 0 & 6 & 0 \\ -1 & 0 & 5\end{bmatrix}x$

Solution: $$ \det(A-\lambda I)=\begin{vmatrix}7-\lambda & 0 & 1 \\ 0 & 6-\lambda & 0 \\ -1 & 0 & 5-\lambda\end{vmatrix}=(6-\lambda)^{3} $$

Therefore the only eigenvalue is $\lambda=6$. Let us find the eigenvectors.

$$ \begin{bmatrix}1 & 0 & 1 & \vert & 0 \\ 0 & 0 & 0 & \vert & 0 \\ -1 & 0 & -1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & 0 & 1 & \vert & 0 \\ 0 & 0 & 0 & \vert & 0 \\ 0 & 0 & 0 & \vert & 0\end{bmatrix} $$

There is only a single leading 1, hence there are two free variables. This implies that we can choose two independent eigenvectors $v^{(1)}, v^{(2)}.$ Hence there will be one generalized eigenvector $v^{(3)}$ with $(A-6I)v^{(3)}=v^{(2)}$ and the Jordan form has two blocks:

$$ J=\begin{bmatrix}6 & 0 & 0 \\ 0 & 6 & 1 \\ 0 & 0 & 6\end{bmatrix} $$

How can we find $v^{(1)}, v^{(2)}, v^{(3)}?$ It might be tricky to start from the eigenvectors, since it is unclear which eigenvector $v^{(2)}$ will lead to a generalized eigenvector $v^{(3)}$. Instead, start by choosing $v^{(3)}$. It can be taken to be any vector such that

$$ \begin{aligned} (A-6I)^{2}v^{(3)} &= 0 \\ (A-6I)v^{(3)} &\ne 0 \end{aligned} $$

(the second inequality guarantees that $v^{(2)}$ is nontrivial). In this example $(A-6I)^{2}=0$, therefore the first condition is void. We can just choose any vector which is not an eigenvector. For instance, let us choose $v^{(3)}=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$. Then

$$ v^{(2)}=(A-6I)v^{(3)}=\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix}. $$

Finally, $v^{(1)}$ should be any eigenvector which is independent of $v^{(2)}$, for instance $v^{(1)}=\begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}.$ We found

$$ P=\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & -1 & 1\end{bmatrix} $$

Now, we can find the solutions of the system:

$$ \begin{aligned} x &= \Psi(t)c=Pe^{Jt}c \\ &= e^{6t}\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & -1 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & t \\ 0 & 0 & 1\end{bmatrix}c \\ &= e^{6t}\begin{bmatrix}0 & 1 & t \\ 1 & 0 & 0 \\ 0 & -1 & -t+1\end{bmatrix}\begin{bmatrix}c_{1} \\ c_{2} \\ c_{3}\end{bmatrix} \\ &= c_{1}\begin{bmatrix}0 \\ e^{6t} \\ 0\end{bmatrix}+c_{2}\begin{bmatrix}e^{6t} \\ 0 \\ -e^{6t}\end{bmatrix}+c_{3}\begin{bmatrix}te^{6t} \\ 0 \\ (-t+1)e^{6t}\end{bmatrix} \end{aligned} $$

where $c_{1}, c_{2}, c_{3}$ are arbitrary real numbers. $\square$
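The chain-construction used in this last example can also be verified numerically (a check that is not in the notes; it assumes NumPy and SciPy, with an arbitrary value of $t$):

```python
import numpy as np
from scipy.linalg import expm

# Check of the example: building v2 from the chosen v3 via
# v2 = (A - 6I) v3 gives a P with A = P J P^{-1}, and then
# Psi(t) = P e^{Jt} matches e^{At} P.
A = np.array([[7.0, 0.0, 1.0], [0.0, 6.0, 0.0], [-1.0, 0.0, 5.0]])
J = np.array([[6.0, 0.0, 0.0], [0.0, 6.0, 1.0], [0.0, 0.0, 6.0]])

v3 = np.array([0.0, 0.0, 1.0])
v2 = (A - 6 * np.eye(3)) @ v3          # generalized eigenvector chain
v1 = np.array([0.0, 1.0, 0.0])
P = np.column_stack([v1, v2, v3])

assert np.allclose(A, P @ J @ np.linalg.inv(P))
assert np.allclose(P @ expm(J * 0.5), expm(A * 0.5) @ P)
```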