ATTENTION / DISCLAIMER: This document was authored by Prof. Dr. Ali Özgür Kişisel (METU Mathematics Department), not by Selim Kaan Ozsoy. This .tex version was generated from the original PDF lecture notes using AI.

MATH 219 Spring 2025 Lecture 9 Lecture notes by Özgür Kişisel

Content: Basic theory of systems of first order linear equations. Homogeneous linear systems with constant coefficients.

Suggested Problems: (Boyce, Di Prima, 10th edition)

  • §7.4: 3, 7
  • §7.5: 9, 10, 14, 17, 31

After a quick discussion of linear independence, which is a linear algebra concept, in this lecture we will begin our investigation of systems of first order linear differential equations.


1. Linear independence

Definition 1.1

Suppose that $v_{1},…,v_{n}$ are vectors. A vector of the form $$ c_{1}v_{1}+c_{2}v_{2}+…+c_{n}v_{n} $$ where $c_{1},…,c_{n}$ are constants, is called a linear combination of $v_{1},…,v_{n}$.

Definition 1.2

A set of vectors $\{v_{1},…,v_{n}\}$ is said to be linearly independent if $$ c_{1}v_{1}+c_{2}v_{2}+…+c_{n}v_{n}=0 \implies c_{1}=c_{2}=…=c_{n}=0. $$ A set of vectors which is not linearly independent is called linearly dependent.

There is an alternative formulation of linear independence. Suppose that the set is linearly dependent. Then there exist constants $c_{1},…,c_{n}$, at least one of which is nonzero, such that $c_{1}v_{1}+c_{2}v_{2}+…+c_{n}v_{n}=0.$ Suppose that $c_{i}\ne0$. Then $$ \begin{aligned} c_{i}v_{i} &= -c_{1}v_{1}-…-c_{i-1}v_{i-1}-c_{i+1}v_{i+1}-…-c_{n}v_{n} \\ v_{i} &= -\frac{c_{1}}{c_{i}}v_{1}-…-\frac{c_{i-1}}{c_{i}}v_{i-1}-\frac{c_{i+1}}{c_{i}}v_{i+1}-…-\frac{c_{n}}{c_{i}}v_{n} \end{aligned} $$ Therefore, a set of vectors is linearly independent if and only if none of the vectors in the set can be expressed as a linear combination of the remaining vectors.

Example

Determine whether the set of vectors $\{v_{1},v_{2},v_{3}\}$ below is linearly independent or not: $$ v_{1}=\begin{bmatrix}1\\ 3\\ 6\end{bmatrix}, \quad v_{2}=\begin{bmatrix}3\\ 4\\ 5\end{bmatrix}, \quad v_{3}=\begin{bmatrix}5\\ 0\\ -9\end{bmatrix} $$

Solution: Suppose that $c_{1}v_{1}+c_{2}v_{2}+c_{3}v_{3}=0$. Then $$ \begin{aligned} c_{1}+3c_{2}+5c_{3} &= 0 \\ 3c_{1}+4c_{2} &= 0 \\ 6c_{1}+5c_{2}-9c_{3} &= 0 \end{aligned} $$ This is a linear system. Convert to matrix form and row reduce: $$ \begin{bmatrix}1 & 3 & 5 & \vert & 0 \\ 3 & 4 & 0 & \vert & 0 \\ 6 & 5 & -9 & \vert & 0\end{bmatrix} \xrightarrow[-6R_{1}+R_{3}\rightarrow R_{3}]{-3R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & 3 & 5 & \vert & 0 \\ 0 & -5 & -15 & \vert & 0 \\ 0 & -13 & -39 & \vert & 0\end{bmatrix} $$

$$ \xrightarrow{-R_{2}/5\rightarrow R_{2}} \begin{bmatrix}1 & 3 & 5 & \vert & 0 \\ 0 & 1 & 3 & \vert & 0 \\ 0 & -13 & -39 & \vert & 0\end{bmatrix} \xrightarrow{13R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & 3 & 5 & \vert & 0 \\ 0 & 1 & 3 & \vert & 0 \\ 0 & 0 & 0 & \vert & 0\end{bmatrix} $$ Hence $c_{3}$ is free. In particular there are solutions of the equation other than $c_{1}=c_{2}=c_{3}=0$. This means that $\{v_{1},v_{2},v_{3}\}$ is linearly dependent. $\square$
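The row reduction above can be spot-checked numerically; here is a minimal sketch, assuming numpy is available, with $v_{1}, v_{2}, v_{3}$ placed as the columns of a matrix:

```python
import numpy as np

# A minimal numerical sketch of the example above, assuming numpy.  Put
# v1, v2, v3 as the columns of a matrix; the set is linearly independent
# exactly when that matrix has full column rank.
V = np.array([[1.0, 3.0, 5.0],
              [3.0, 4.0, 0.0],
              [6.0, 5.0, -9.0]])

print(np.linalg.matrix_rank(V))   # 2 < 3: the set is linearly dependent

# Back-substituting with the free variable c3 = 1 gives c2 = -3, c1 = 4,
# i.e. the explicit dependence 4 v1 - 3 v2 + v3 = 0.
c = np.array([4.0, -3.0, 1.0])
print(np.allclose(V @ c, 0))      # True
```

The rank computation reproduces the conclusion of the row reduction, and the vector $c$ exhibits one concrete nontrivial dependence relation.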

Definition 1.3

We say that a set of functions $\{f_{1}(t),…,f_{n}(t)\}$ is linearly independent if the equation $c_{1}f_{1}(t)+…+c_{n}f_{n}(t)=0$, holding for all $t$, implies $c_{1}=c_{2}=…=c_{n}=0$.

Example

Show that $\{e^{t},e^{-t},1\}$ is a linearly independent set of functions.

Solution: Suppose that $c_{1}e^{t}+c_{2}e^{-t}+c_{3}\cdot 1=0$ for all $t$. Choose some specific values of $t$ in order to get a $3\times 3$ linear system: $$ \begin{aligned} t=0 &\implies c_{1}+c_{2}+c_{3}=0 \\ t=1 &\implies c_{1}e+\frac{c_{2}}{e}+c_{3}=0 \\ t=-1 &\implies \frac{c_{1}}{e}+c_{2}e+c_{3}=0 \end{aligned} $$ Now let us write these equations in matrix form and row reduce. $$ \begin{bmatrix}1 & 1 & 1 & \vert & 0 \\ e & 1/e & 1 & \vert & 0 \\ 1/e & e & 1 & \vert & 0\end{bmatrix} \xrightarrow[-1/e R_{1}+R_{3}\rightarrow R_{3}]{-eR_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & 1 & 1 & \vert & 0 \\ 0 & 1/e-e & 1-e & \vert & 0 \\ 0 & e-1/e & 1-1/e & \vert & 0\end{bmatrix} $$

$$ \xrightarrow{R_{2}+R_{3}\rightarrow R_{3}} \begin{bmatrix}1 & 1 & 1 & \vert & 0 \\ 0 & 1/e-e & 1-e & \vert & 0 \\ 0 & 0 & 2-e-1/e & \vert & 0\end{bmatrix} $$ We could row reduce further, but it is already clear that every column will contain a pivot, since $1/e-e\ne0$ and $2-e-1/e\ne0$ (indeed $e+1/e>2$). Therefore there are no free variables, and the only solution is $c_{1}=c_{2}=c_{3}=0$. Hence the given set is linearly independent. $\square$
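The sampling idea used in this solution is easy to automate; a minimal sketch, assuming numpy:

```python
import numpy as np

# A minimal numerical sketch of the sampling idea above, assuming numpy.
# Evaluate the candidate functions at the sample points t = 0, 1, -1; if
# the resulting 3x3 matrix has full rank, only the trivial combination of
# e^t, e^{-t}, 1 can vanish at all three points, hence at all t.
ts = np.array([0.0, 1.0, -1.0])
M = np.column_stack([np.exp(ts), np.exp(-ts), np.ones_like(ts)])

print(np.linalg.matrix_rank(M))   # 3: the set is linearly independent
```

Note that full rank at some choice of sample points already forces $c_{1}=c_{2}=c_{3}=0$; a rank-deficient sample, by contrast, would be inconclusive on its own.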

There is another elegant way to test a set of functions for independence if the functions have derivatives up to a certain order. Suppose that $f_{1},…,f_{n}$ are functions of $t$ which are differentiable $n-1$ times. In order to test them for linear independence, we start from the equation $$ c_{1}f_{1}(t)+c_{2}f_{2}(t)+…+c_{n}f_{n}(t)=0 $$ which is assumed to hold for all $t$. We differentiate this equation with respect to $t$, $n-1$ times in order to get a linear system: $$ \begin{aligned} c_{1}f_{1}+c_{2}f_{2}+…+c_{n}f_{n} &= 0 \\ c_{1}f_{1}^{\prime}+c_{2}f_{2}^{\prime}+…+c_{n}f_{n}^{\prime} &= 0 \\ &\vdots \\ c_{1}f_{1}^{(n-1)}+c_{2}f_{2}^{(n-1)}+…+c_{n}f_{n}^{(n-1)} &= 0 \end{aligned} $$ We can write this linear system in matrix form $$ \begin{bmatrix}f_{1} & f_{2} & … & f_{n} \\ f_{1}^{\prime} & f_{2}^{\prime} & … & f_{n}^{\prime} \\ \vdots & \vdots & \ddots & \vdots \\ f_{1}^{(n-1)} & f_{2}^{(n-1)} & … & f_{n}^{(n-1)}\end{bmatrix} \begin{bmatrix}c_{1} \\ c_{2} \\ \vdots \\ c_{n}\end{bmatrix} = \begin{bmatrix}0 \\ 0 \\ \vdots \\ 0\end{bmatrix} $$ If the $n\times n$ matrix above is invertible then the only solution is $c_{1}=c_{2}=…=c_{n}=0$. Invertibility of this matrix can be tested by looking at its determinant. Therefore let us define $$ W(f_{1},f_{2},…,f_{n})=\begin{vmatrix}f_{1} & f_{2} & … & f_{n} \\ f_{1}^{\prime} & f_{2}^{\prime} & … & f_{n}^{\prime} \\ \vdots & \vdots & \ddots & \vdots \\ f_{1}^{(n-1)} & f_{2}^{(n-1)} & … & f_{n}^{(n-1)}\end{vmatrix} $$ This determinant is called the Wronskian of $f_{1},f_{2},…,f_{n}$ (as an ordered $n$-tuple). Notice that the Wronskian is a function of $t$ itself. The discussion above implies:

Theorem 1.1

If the Wronskian of a set of functions is nonzero for at least one value of $t$, then the set of functions is linearly independent.

Example

Show that $\{e^{r_{1}t},e^{r_{2}t},e^{r_{3}t}\}$ is linearly independent if $r_{1}$, $r_{2}$, $r_{3}$ are distinct.

Solution: $$ \begin{aligned} W(e^{r_{1}t},e^{r_{2}t},e^{r_{3}t}) &= \begin{vmatrix}e^{r_{1}t} & e^{r_{2}t} & e^{r_{3}t} \\ r_{1}e^{r_{1}t} & r_{2}e^{r_{2}t} & r_{3}e^{r_{3}t} \\ r_{1}^{2}e^{r_{1}t} & r_{2}^{2}e^{r_{2}t} & r_{3}^{2}e^{r_{3}t}\end{vmatrix} \\ &= e^{r_{1}t}e^{r_{2}t}e^{r_{3}t}\begin{vmatrix}1 & 1 & 1 \\ r_{1} & r_{2} & r_{3} \\ r_{1}^{2} & r_{2}^{2} & r_{3}^{2}\end{vmatrix} \\ &= e^{(r_{1}+r_{2}+r_{3})t}\begin{vmatrix}1 & 0 & 0 \\ r_{1} & r_{2}-r_{1} & r_{3}-r_{1} \\ r_{1}^{2} & r_{2}^{2}-r_{1}^{2} & r_{3}^{2}-r_{1}^{2}\end{vmatrix} \\ &= e^{(r_{1}+r_{2}+r_{3})t}(r_{2}-r_{1})(r_{3}-r_{1})\begin{vmatrix}1 & 0 & 0 \\ r_{1} & 1 & 1 \\ r_{1}^{2} & r_{2}+r_{1} & r_{3}+r_{1}\end{vmatrix} \\ &= e^{(r_{1}+r_{2}+r_{3})t}(r_{2}-r_{1})(r_{3}-r_{1})\begin{vmatrix}1 & 0 & 0 \\ r_{1} & 1 & 0 \\ r_{1}^{2} & r_{2}+r_{1} & r_{3}-r_{2}\end{vmatrix} \\ &= e^{(r_{1}+r_{2}+r_{3})t}(r_{2}-r_{1})(r_{3}-r_{1})(r_{3}-r_{2})\begin{vmatrix}1 & 0 & 0 \\ r_{1} & 1 & 0 \\ r_{1}^{2} & r_{2}+r_{1} & 1\end{vmatrix} \\ &= e^{(r_{1}+r_{2}+r_{3})t}(r_{2}-r_{1})(r_{3}-r_{1})(r_{3}-r_{2}) \end{aligned} $$ Since $r_{1}$, $r_{2}$, $r_{3}$ are distinct, the Wronskian is never zero. Therefore $\{e^{r_{1}t},e^{r_{2}t},e^{r_{3}t}\}$ is linearly independent. $\square$
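This Vandermonde-style computation can be spot-checked symbolically; a hedged sketch, assuming sympy is available:

```python
import sympy as sp

# A hedged spot-check of the Wronskian formula above, assuming sympy:
# build the Wronskian matrix, take its determinant, and compare with the
# product formula at the sample values r1 = 1, r2 = 2, r3 = 5, t = 1/7.
t, r1, r2, r3 = sp.symbols('t r1 r2 r3')
f = [sp.exp(r1*t), sp.exp(r2*t), sp.exp(r3*t)]

# row i holds the i-th derivatives of the three functions
W = sp.Matrix([[sp.diff(fj, t, i) for fj in f] for i in range(3)]).det()
expected = sp.exp((r1 + r2 + r3)*t) * (r2 - r1) * (r3 - r1) * (r3 - r2)

vals = {r1: 1, r2: 2, r3: 5, t: sp.Rational(1, 7)}
print(sp.simplify((W - expected).subs(vals)))   # 0
```

The sample values are arbitrary; any choice of distinct $r_{i}$ gives the same cancellation.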

Exercise: By suitably generalizing the method of the previous example, show that $\{e^{r_{1}t},…,e^{r_{n}t}\}$ is linearly independent if $r_{1},r_{2},…,r_{n}$ are distinct real or complex numbers.


2. Basic theory of first order linear systems

Let us first consider homogeneous systems of first order linear ODEs, namely, systems of the form: $$ \frac{dx}{dt}=A(t)x \qquad (1) $$

Theorem 2.1 (Principle of superposition)

Suppose that $x^{(1)},x^{(2)},…,x^{(n)}$ are solutions of (1). Then any linear combination $c_{1}x^{(1)}+c_{2}x^{(2)}+…+c_{n}x^{(n)}$ is also a solution where $c_{1},c_{2},…,c_{n}$ are constants.

Proof: Put $c_{1}x^{(1)}+…+c_{n}x^{(n)}$ in (1) and see if it works: $$ \begin{aligned} \frac{d}{dt}(c_{1}x^{(1)}+…+c_{n}x^{(n)}) &= c_{1}\frac{dx^{(1)}}{dt}+…+c_{n}\frac{dx^{(n)}}{dt} \\ &= c_{1}A(t)x^{(1)}+…+c_{n}A(t)x^{(n)} \\ &= A(t)(c_{1}x^{(1)}+…+c_{n}x^{(n)}) \end{aligned} $$ Therefore $c_{1}x^{(1)}+…+c_{n}x^{(n)}$ is a solution of (1). $\square$

We now want to construct a set of solutions of (1) so that any other solution is a linear combination of them. The solutions in this set will in a sense be basic building blocks for the space of all solutions. We want to do this in a way that we have just the necessary number of building blocks and no redundant ones. In order to construct these solutions we appeal to the existence-uniqueness theorem; since at this point the system is very general, it is impossible to write down an explicit solution, so we really need this theoretical tool.

Let $t_{0}$ be any point in the intersection of the domains of continuity of the entries of $A(t)$. Consider the initial value problem: $$ \frac{dx}{dt}=A(t)x, \quad x(t_{0}) = \begin{bmatrix}0 \\ \vdots \\ 1 \\ \vdots \\ 0\end{bmatrix} $$ where the only 1 in the vector above is at the $i$-th position. By the existence-uniqueness theorem, this initial value problem has a unique solution $x^{(i)}$.
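This construction can be realized numerically: solve the $n$ initial value problems with the standard basis vectors as initial values and collect the solutions as columns. A minimal sketch, assuming numpy, a constant example matrix $A$, and our own helper names `rk4_step` and `fundamental_matrix` (not from the text):

```python
import numpy as np

# One classical Runge-Kutta (RK4) step for x' = f(t, x); our own helper.
def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def fundamental_matrix(A, t0, t1, steps=2000):
    # Integrate X' = A X with X(t0) = I; column i is the solution x^(i)
    # with initial value e_i, exactly the construction in the text.
    n = A.shape[0]
    X = np.eye(n)
    h = (t1 - t0) / steps
    f = lambda t, X: A @ X
    s = t0
    for _ in range(steps):
        X = rk4_step(f, s, X, h)
        s += h
    return X

A = np.array([[2.0, 1.0], [1.0, 2.0]])
Phi = fundamental_matrix(A, 0.0, 1.0)
print(Phi)   # numerically close to the matrix exponential e^A
```

For this symmetric example the exact answer is known from the eigenvalues $3$ and $1$, so the numerical columns can be compared against $\tfrac{1}{2}(e^{3t}\pm e^{t})$ combinations.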

Theorem 2.2

The set of vector functions $\{x^{(1)},x^{(2)},…,x^{(n)}\}$ is linearly independent.

Proof: Suppose that $c_{1}x^{(1)}+c_{2}x^{(2)}+…+c_{n}x^{(n)}=0$. Evaluate both sides at $t_{0}$. Since $x^{(i)}(t_{0})$ is the $i$-th standard basis vector, this immediately gives us $$ \begin{bmatrix}c_{1}\\ c_{2}\\ \vdots\\ c_{n}\end{bmatrix}=0 $$ So each $c_{i}$ is 0 and therefore, by definition, $\{x^{(1)},x^{(2)},…,x^{(n)}\}$ is linearly independent. $\square$

Theorem 2.3

Every solution of the system (1) can be written as a linear combination of $x^{(1)},x^{(2)},…,x^{(n)}$ (in a unique way).

Proof: Suppose that $x$ is an arbitrary solution of (1). Say $$ x(t_{0})=\begin{bmatrix}k_{1}\\ \vdots\\ k_{n}\end{bmatrix} $$ Then $k_{1}x^{(1)}+k_{2}x^{(2)}+…+k_{n}x^{(n)}$ and $x$ have the same value at $t_{0}$ and they are both solutions of (1). Therefore, by the existence-uniqueness theorem $$ x(t)=(k_{1}x^{(1)}+k_{2}x^{(2)}+…+k_{n}x^{(n)})(t) $$ for all $t$. The uniqueness of this representation follows from the linear independence of $\{x^{(1)},x^{(2)},…,x^{(n)}\}$: Indeed, suppose that $x$ can be written as a linear combination in two ways $$ x=k_{1}x^{(1)}+k_{2}x^{(2)}+…+k_{n}x^{(n)}=l_{1}x^{(1)}+l_{2}x^{(2)}+…+l_{n}x^{(n)} $$ Subtracting the two expressions on the right from one another, we get $$ (k_{1}-l_{1})x^{(1)}+(k_{2}-l_{2})x^{(2)}+…+(k_{n}-l_{n})x^{(n)}=0 $$ Since $\{x^{(1)},x^{(2)},…,x^{(n)}\}$ is linearly independent, this relation implies that $k_{i}-l_{i}=0$ for each $i$, therefore $k_{i}=l_{i}$. Hence these coefficients are uniquely determined. (An alternative way for this last step would be to look at the initial values at $t_{0}$ once again.) $\square$

Definition 2.1

A linearly independent set $\mathcal{B}$ of solutions of (1) such that every solution of (1) is expressible as a linear combination of elements of $\mathcal{B}$ is said to be a basis (or fundamental set) for the space of solutions.

The results above say that $\{x^{(1)},x^{(2)},…,x^{(n)}\}$ constructed in the way above is a basis for the space of solutions. This basis is not unique; solutions constructed in totally different ways could satisfy the conditions of being a fundamental set. We can use the following results from linear algebra to test whether or not a given set of solutions is a basis:

Theorem 2.4

  1. Any two bases for the same solution space have the same number of elements. In particular, if $A$ is an $n\times n$ matrix, then any basis for the solution space has $n$ elements.
  2. Any linearly independent set containing $n$ solutions is a basis.

These facts imply that for an $n\times n$ linear, homogeneous system, it suffices to find $n$ linearly independent solutions. Then every other solution is a linear combination of these. In linear algebra jargon, one would summarize the results found above by saying that the solution set of an $n\times n$ first order linear homogeneous system of differential equations is a vector space of dimension $n$.


3. Constant coefficient systems

Consider the system $x^{\prime}=Ax$ where $A$ is a constant matrix. Let us look for solutions of this system of the form $x(t)=ve^{\lambda t}$ where $v$ is a constant vector. In order for this vector function to be a solution of the system, we must have $$ \begin{aligned} \frac{dx}{dt} &= \frac{d(ve^{\lambda t})}{dt} \\ &= \lambda ve^{\lambda t} \end{aligned} $$ On the other hand $x^{\prime}=Ax=Ave^{\lambda t}$. So, equivalent ways of writing the condition for $x$ to be a solution are $$ \begin{aligned} Ave^{\lambda t} &= \lambda ve^{\lambda t} \\ Av &= \lambda v \end{aligned} $$ This last equation holds if and only if $v$ is an eigenvector of $A$ with eigenvalue $\lambda$. This is good news, because we now have a mechanism for producing some solutions of $x^{\prime}=Ax$: Compute eigenvalues and eigenvectors of $A$. For each eigenvector-eigenvalue pair $(v, \lambda)$ we have a solution $x(t)=ve^{\lambda t}$.
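The mechanism just described is easy to sketch numerically. Here is a hedged sketch assuming numpy; the function name `eigen_solutions` is ours, not from the text:

```python
import numpy as np

# For each eigenvalue/eigenvector pair (lam, v) of A, the vector function
# x(t) = v e^{lam t} solves x' = A x.  numpy's eig returns the eigenvalues
# and a matrix whose columns are the corresponding eigenvectors.
def eigen_solutions(A):
    lams, V = np.linalg.eig(A)
    return [(lams[i], V[:, i],
             lambda t, lam=lams[i], v=V[:, i]: v * np.exp(lam * t))
            for i in range(len(lams))]

A = np.array([[2.0, 1.0], [1.0, 2.0]])
sols = eigen_solutions(A)
for lam, v, x in sols:
    # spot-check x'(t) = A x(t) at t = 0.7 with a central finite difference
    t, h = 0.7, 1e-6
    print(lam, np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4))
```

Each pair yields one special solution; whether these special solutions suffice to describe all solutions is exactly the question taken up next.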

But please beware that this analysis definitely doesn’t tell us that all solutions of the system are of this form. Instead, we will try to use the basic theory discussed in the last section in order to find all solutions of the system, by using these special solutions as building blocks.

Example

Solve the system $$ \begin{aligned} x_{1}^{\prime} &= 2x_{1}+x_{2} \\ x_{2}^{\prime} &= x_{1}+2x_{2} \end{aligned} $$

Solution: First, write the system in matrix form $$ \frac{d}{dt}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix} = \begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix} \implies A=\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix} $$ We start by finding the eigenvalues and eigenvectors of $A$: $$ \begin{aligned} \det(A-\lambda I) &= \begin{vmatrix}2-\lambda & 1\\ 1 & 2-\lambda\end{vmatrix} \\ &= (2-\lambda)^{2}-1 \\ &= (3-\lambda)(1-\lambda) \end{aligned} $$ Therefore the eigenvalues are $\lambda_{1}=3$ and $\lambda_{2}=1$.

Eigenvectors for $\lambda_{1}=3$ are solutions of the system $$ \begin{bmatrix}-1 & 1 & \vert & 0\\ 1 & -1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}\leftrightarrow R_{2}} \begin{bmatrix}1 & -1 & \vert & 0\\ -1 & 1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & -1 & \vert & 0\\ 0 & 0 & \vert & 0\end{bmatrix} $$ The eigenvectors for $\lambda_{1}=3$ are then $v=k\begin{bmatrix}1\\ 1\end{bmatrix}$ with $k\ne0$. For this pair, we can write the solution $x^{(1)}(t)=\begin{bmatrix}1\\ 1\end{bmatrix}e^{3t}.$

Next, let us look at eigenvectors for $\lambda_{2}=1$: $$ \begin{bmatrix}1 & 1 & \vert & 0\\ 1 & 1 & \vert & 0\end{bmatrix} \xrightarrow{-R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}1 & 1 & \vert & 0\\ 0 & 0 & \vert & 0\end{bmatrix} $$ Therefore the eigenvectors for $\lambda_{2}=1$ are $v=k\begin{bmatrix}-1\\ 1\end{bmatrix}$ with $k\ne0$. For this pair, we can write the solution $x^{(2)}(t)=\begin{bmatrix}-1\\ 1\end{bmatrix}e^{t}.$

Is the set $\{x^{(1)},x^{(2)}\}$ linearly independent? We can look at the Wronskian of these two functions: $$ W(x^{(1)},x^{(2)})=\begin{vmatrix}e^{3t} & -e^{t}\\ e^{3t} & e^{t}\end{vmatrix}=2e^{4t}\ne0 $$ Therefore the set is linearly independent. Now, by the basic theory discussed in the previous section, the system must have a fundamental set consisting of two linearly independent solutions. And we have already found two linearly independent solutions. This implies that all solutions of the system are $$ x=c_{1}\begin{bmatrix}e^{3t}\\ e^{3t}\end{bmatrix}+c_{2}\begin{bmatrix}-e^{t}\\ e^{t}\end{bmatrix} $$ with $c_{1}, c_{2}\in\mathbb{R}.$ $\square$
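The general solution found above can be verified symbolically; a hedged sketch, assuming sympy:

```python
import sympy as sp

# Substitute the general solution x = c1 [1,1] e^{3t} + c2 [-1,1] e^{t}
# into x' = A x and confirm that the residual vanishes identically.
t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[2, 1], [1, 2]])
x = c1 * sp.Matrix([1, 1]) * sp.exp(3*t) + c2 * sp.Matrix([-1, 1]) * sp.exp(t)

residual = sp.simplify(x.diff(t) - A * x)
print(residual)    # Matrix([[0], [0]])
```

Since the residual is the zero vector for arbitrary symbolic $c_{1}, c_{2}$, every member of the two-parameter family is indeed a solution.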

Example

Solve the system $$ x^{\prime}=\begin{bmatrix}1 & 1 & 2\\ 0 & 2 & 2\\ -1 & 1 & 3\end{bmatrix}x $$

Solution: The eigenvalues and eigenvectors of the coefficient matrix were computed in Lecture 7. They were: $$ \lambda_{1}=1, \quad v=k\begin{bmatrix}0\\ -2\\ 1\end{bmatrix}; \quad \lambda_{2}=2, \quad v=k\begin{bmatrix}1\\ 1\\ 0\end{bmatrix}; \quad \lambda_{3}=3, \quad v=k\begin{bmatrix}2\\ 2\\ 1\end{bmatrix} $$ We can write three solutions associated to this data, by choosing one specific eigenvector for each of these three eigenvalues: $$ x^{(1)}=\begin{bmatrix}0\\ -2\\ 1\end{bmatrix}e^{t}, \quad x^{(2)}=\begin{bmatrix}1\\ 1\\ 0\end{bmatrix}e^{2t}, \quad x^{(3)}=\begin{bmatrix}2\\ 2\\ 1\end{bmatrix}e^{3t} $$ Let us check the independence of these solutions: $$ \begin{aligned} W(x^{(1)},x^{(2)},x^{(3)}) &= \begin{vmatrix}0 & e^{2t} & 2e^{3t} \\ -2e^{t} & e^{2t} & 2e^{3t} \\ e^{t} & 0 & e^{3t}\end{vmatrix} \\ &= e^{t}e^{2t}e^{3t}\begin{vmatrix}0 & 1 & 2 \\ -2 & 1 & 2 \\ 1 & 0 & 1\end{vmatrix} \\ &= 2e^{6t}\ne0 \end{aligned} $$ Since we have a set of 3 linearly independent solutions for a $3\times3$ system, they must form a basis, again by the basic theory. Therefore all solutions of the system are $$ x=c_{1}\begin{bmatrix}0\\ -2e^{t}\\ e^{t}\end{bmatrix}+c_{2}\begin{bmatrix}e^{2t}\\ e^{2t}\\ 0\end{bmatrix}+c_{3}\begin{bmatrix}2e^{3t}\\ 2e^{3t}\\ e^{3t}\end{bmatrix} $$ where $c_{1}, c_{2}, c_{3}\in\mathbb{R}$. $\square$
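The eigen-data quoted from Lecture 7 and the constant factor in the Wronskian can be double-checked numerically; a minimal sketch, assuming numpy:

```python
import numpy as np

# Check each quoted eigenvalue/eigenvector pair, i.e. A v = lam v.
A = np.array([[1.0, 1.0, 2.0],
              [0.0, 2.0, 2.0],
              [-1.0, 1.0, 3.0]])
pairs = [(1.0, np.array([0.0, -2.0, 1.0])),
         (2.0, np.array([1.0, 1.0, 0.0])),
         (3.0, np.array([2.0, 2.0, 1.0]))]
for lam, v in pairs:
    print(lam, np.allclose(A @ v, lam * v))   # True for each pair

# The constant factor in W = 2 e^{6t} is the determinant of the matrix
# whose columns are the three eigenvectors.
V = np.column_stack([v for _, v in pairs])
print(np.linalg.det(V))                       # 2.0
```

Both checks reproduce the hand computation: each pair satisfies $Av=\lambda v$, and the eigenvector determinant gives the factor 2.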

Let us now consider the most general constant coefficient homogeneous system $x^{\prime}=Ax$, where $A$ is a constant $n\times n$ matrix, and analyze the possibilities that could occur in the process above.

First, suppose that the matrix $A$ has $n$ linearly independent eigenvectors $\{v^{(1)},v^{(2)},…,v^{(n)}\}$ corresponding to eigenvalues $\lambda_{1},…,\lambda_{n}$ (the $\lambda_{i}$ need not be distinct). If we set $x^{(i)}=v^{(i)}e^{\lambda_{i}t}$ as usual, then $$ \begin{aligned} W(x^{(1)},x^{(2)},…,x^{(n)}) &= \begin{vmatrix}v^{(1)}e^{\lambda_{1}t} & v^{(2)}e^{\lambda_{2}t} & … & v^{(n)}e^{\lambda_{n}t}\end{vmatrix} \\ &= e^{(\lambda_{1}+…+\lambda_{n})t}\begin{vmatrix}v^{(1)} & v^{(2)} & … & v^{(n)}\end{vmatrix} \\ &\ne 0 \end{aligned} $$ since $\{v^{(1)},v^{(2)},…,v^{(n)}\}$ is a linearly independent set. This implies that all solutions of the system are $$ x=c_{1}v^{(1)}e^{\lambda_{1}t}+c_{2}v^{(2)}e^{\lambda_{2}t}+…+c_{n}v^{(n)}e^{\lambda_{n}t} $$
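The general solution formula above also solves initial value problems directly: write $x(0)$ in the eigenbasis to find the coefficients $c_{i}$, then propagate each coordinate by $e^{\lambda_{i}t}$. A minimal sketch, assuming numpy, real eigenvalues, and our own function name `solve_ivp_eig`:

```python
import numpy as np

# Solve x' = A x, x(0) = x0, assuming A has n independent eigenvectors
# (columns of V) with real eigenvalues: c = V^{-1} x0 gives the
# coefficients c_1, ..., c_n, and x(t) = V diag(e^{lam_i t}) c.
def solve_ivp_eig(A, x0, t):
    lams, V = np.linalg.eig(A)
    c = np.linalg.solve(V, x0)
    return V @ (c * np.exp(lams * t))

A = np.array([[2.0, 1.0], [1.0, 2.0]])
x0 = np.array([1.0, 0.0])
print(solve_ivp_eig(A, x0, 1.0))   # ≈ [11.40, 8.68]
```

For this example $x_{0}=\tfrac{1}{2}[1,1]^{T}-\tfrac{1}{2}[-1,1]^{T}$, so the exact value at $t=1$ is $[(e^{3}+e)/2,\ (e^{3}-e)/2]$, matching the printed result.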

Remark 3.1

Not all $n\times n$ matrices have $n$ linearly independent eigenvectors. As a simple example, let $A=\begin{bmatrix}2 & 1\\ 0 & 2\end{bmatrix}$. Then $$ \det(A-\lambda I)=(2-\lambda)^{2} $$ Therefore $A$ has only one eigenvalue, $\lambda=2$. In order to find the corresponding eigenvectors, we solve $$ \begin{bmatrix}0 & 1 & \vert & 0\\ 0 & 0 & \vert & 0\end{bmatrix} $$ The coefficient matrix is already in row echelon form. The variable $v_{1}$ is free and $v_{2}=0$. Therefore, all eigenvectors are of the form $v=k\begin{bmatrix}1\\ 0\end{bmatrix}$. It is clear that we can choose at most one linearly independent eigenvector from this set. Any second eigenvector that is chosen would be a multiple of the first one. So, it is impossible to solve the system $x^{\prime}=Ax$ for this matrix $A$, or for other matrices lacking $n$ independent eigenvectors, by using only the ideas above. In order to complete the story for such systems, we will need some new ideas to be developed in the forthcoming lectures.
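The deficiency in this remark can be seen with a computer algebra system as well; a hedged sketch, assuming sympy:

```python
import sympy as sp

# sympy's eigenvects reports, for each eigenvalue, its algebraic
# multiplicity and a basis of its eigenspace (geometric multiplicity).
A = sp.Matrix([[2, 1], [0, 2]])
for lam, alg_mult, basis in A.eigenvects():
    print(lam, alg_mult, len(basis))   # 2 2 1: one independent eigenvector
```

The eigenvalue $2$ has algebraic multiplicity $2$ but only a one-dimensional eigenspace, which is exactly the obstruction described above.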

The following result gives us a sufficient condition for the existence of $n$ linearly independent eigenvectors of an $n\times n$ matrix $A$.

Theorem 3.1

If an $n\times n$ matrix $A$ has $n$ distinct eigenvalues $\lambda_{1},…,\lambda_{n}$, then the corresponding eigenvectors $v^{(1)},…,v^{(n)}$ are linearly independent.

Proof: We will prove the statement by contradiction. Contrary to the claim, suppose that $\{v^{(1)},…,v^{(n)}\}$ is not linearly independent. Suppose that $i$ is the least number such that $v^{(i)}$ is expressible as a linear combination of the previous elements of the set. Then we have a relation of the form $$ v^{(i)}=c_{1}v^{(1)}+c_{2}v^{(2)}+…+c_{i-1}v^{(i-1)} $$ We obtain two equations from this one: First by applying $A$ to both sides (and using $Av^{(j)}=\lambda_{j}v^{(j)}$), second by multiplying it by $\lambda_{i}$. $$ \begin{aligned} \lambda_{i}v^{(i)} &= c_{1}\lambda_{1}v^{(1)}+c_{2}\lambda_{2}v^{(2)}+…+c_{i-1}\lambda_{i-1}v^{(i-1)} \\ \lambda_{i}v^{(i)} &= c_{1}\lambda_{i}v^{(1)}+c_{2}\lambda_{i}v^{(2)}+…+c_{i-1}\lambda_{i}v^{(i-1)} \end{aligned} $$ Subtracting the second equation from the first one, we get $$ 0=c_{1}(\lambda_{1}-\lambda_{i})v^{(1)}+c_{2}(\lambda_{2}-\lambda_{i})v^{(2)}+…+c_{i-1}(\lambda_{i-1}-\lambda_{i})v^{(i-1)} $$ The set of vectors $\{v^{(1)},v^{(2)},…,v^{(i-1)}\}$ is linearly independent since we assumed that $i$ was the minimal number for which there is a linear dependence. Therefore $c_{j}(\lambda_{j}-\lambda_{i})=0$ for all $j\in\{1,2,…,i-1\}$. Since $\lambda_{j}\ne\lambda_{i},$ we get $c_{j}=0$ for each $j\le i-1$. This implies $v^{(i)}=0$ which is a contradiction. $\square$

Remark 3.2

The sufficient condition above is by no means necessary. In other words, it is possible that some of the eigenvalues are equal, yet there are $n$ linearly independent eigenvectors. A trivial example is when $A=kI$ for some $k$, where $I$ is the identity matrix. Then all eigenvalues are $k$, but any nonzero vector is an eigenvector. Hence there are $n$ independent eigenvectors in this case.