ATTENTION / DISCLAIMER: This document was authored by Prof. Dr. Ali Özgür Kişisel (METU Mathematics Department), not by Selim Kaan Ozsoy. This .tex version was generated from the original PDF lecture notes using AI.

MATH 219 Spring 2025 Lecture 12 Lecture notes by Özgür Kişisel

Content: Nonhomogeneous linear systems (variation of parameters only).

Suggested Problems: (Boyce, Di Prima, 10th edition)

  • §7.9: 2, 6, 7, 9, 11, 13

1. Variation of parameters

In the previous lecture, we outlined a method to solve any constant coefficient homogeneous linear system. Suppose now that we have a nonhomogeneous linear system:

$$ x^{\prime}=A(t)x+b(t) $$

Recall that a fundamental matrix $\Psi(t)$ is any matrix satisfying

$$ \begin{aligned} \frac{d\Psi}{dt} &= A\Psi \\ \det(\Psi) &\ne 0 \end{aligned} $$

Provided that we can find such a matrix $\Psi(t)$, we can write down all solutions of the homogeneous system $x^{\prime}=Ax$ as

$$ x=\Psi(t)c $$

where $c$ is a vector of constants. In particular, if $A$ is a constant matrix, then the matrices $e^{At}$ and $Pe^{Jt}$ found in the previous lecture are fundamental matrices.
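As a quick sanity check, we can verify the two defining properties of a fundamental matrix symbolically (a SymPy sketch; the constant matrix below is the one used in the example later in these notes):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 3], [0, 1]])   # constant coefficient matrix from the example below

Psi = (A * t).exp()               # the matrix exponential e^{At}

# Property 1: dPsi/dt = A * Psi
assert sp.simplify(Psi.diff(t) - A * Psi) == sp.zeros(2, 2)

# Property 2: det(Psi) is never zero; here det(e^{At}) = e^{tr(A) t} = e^{3t}
assert sp.simplify(Psi.det() - sp.exp(3 * t)) == 0
```

Note that $\Psi(0)=e^{A\cdot 0}=I$, so $e^{At}$ is the fundamental matrix normalized at $t=0$.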

We will use a method called variation of parameters in order to solve the nonhomogeneous system. The idea of variation of parameters is to replace the constant vector $c$ in the formula $x=\Psi(t)c$ by a nonconstant vector $v(t)$ and hope that we can extract a solution of the nonhomogeneous system of the form $\Psi(t)v(t)$. In fact,

Theorem 1.1

All solutions of $x^{\prime}=A(t)x+b(t)$ are of the form $x=\Psi(t)v(t)$, where $v(t)=\int\Psi^{-1}(t)b(t)\,dt$.

Proof: Plug $x=\Psi(t)v$ into the differential equation and use product rule to differentiate:

$$ \begin{aligned} x^{\prime} &= \frac{d\Psi}{dt}v+\Psi\frac{dv}{dt} \\ &= A\Psi v+\Psi\frac{dv}{dt} \end{aligned} $$

We want the right-hand side of this equation to be equal to $Ax+b$, namely to $A\Psi v+b$. This equality holds if and only if

$$ \begin{aligned} \Psi\frac{dv}{dt} &= b \\ \frac{dv}{dt} &= \Psi^{-1}b \\ v &= \int\Psi^{-1}bdt \end{aligned} $$

Therefore the expression $x=\Psi\int\Psi^{-1}b\,dt$ in the statement is indeed a solution. How can we be sure that there are no other solutions? We can split the indefinite integral above as $\int\Psi^{-1}b\,dt=\int_{0}^{t}\Psi^{-1}(\tau)b(\tau)\,d\tau+c$, where $c$ is a vector of constants. Then the solutions obtained above are of the form $x=\Psi c+\Psi\int_{0}^{t}\Psi^{-1}(\tau)b(\tau)\,d\tau$. In particular,

$$ x_{p}=\Psi\int_{0}^{t}\Psi^{-1}(\tau)b(\tau)d\tau $$

is a particular solution of the nonhomogeneous system. If $x$ is any other solution, then by the principle of superposition $x-x_{p}$ must be a solution of the corresponding homogeneous system, therefore it must be of the form $\Psi c$. This proves the claim. $\square$
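The particular solution $x_{p}=\Psi\int_{0}^{t}\Psi^{-1}(\tau)b(\tau)\,d\tau$ can be checked symbolically on a small example (a SymPy sketch; the system is the one solved in the example below):

```python
import sympy as sp

t, tau = sp.symbols('t tau')
A = sp.Matrix([[2, 3], [0, 1]])
b = sp.Matrix([sp.exp(2 * t), t])

Psi = (A * t).exp()               # one valid fundamental matrix

# x_p = Psi * Integral_0^t Psi^{-1}(tau) b(tau) dtau
integrand = (Psi.inv() * b).subs(t, tau)
x_p = Psi * sp.integrate(integrand, (tau, 0, t))

# x_p should satisfy the nonhomogeneous system x' = A x + b ...
assert sp.simplify(x_p.diff(t) - A * x_p - b) == sp.zeros(2, 1)
# ... and vanish at t = 0, since the integral starts at 0 and Psi(0) = I
assert sp.simplify(x_p.subs(t, 0)) == sp.zeros(2, 1)
```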

Example

Solve the system $x^{\prime}=\begin{bmatrix}2 & 3 \\ 0 & 1\end{bmatrix}x+\begin{bmatrix}e^{2t} \\ t\end{bmatrix}$.

Solution:

$$ \det(A-\lambda I)=\begin{vmatrix}2-\lambda & 3 \\ 0 & 1-\lambda\end{vmatrix}=(2-\lambda)(1-\lambda). $$

Therefore the eigenvalues are $\lambda_{1}=2$ and $\lambda_{2}=1$. Let us find the eigenvectors for $\lambda_{1}$:

$$ \begin{bmatrix}0 & 3 & \vert & 0 \\ 0 & -1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}/3\rightarrow R_{1}} \begin{bmatrix}0 & 1 & \vert & 0 \\ 0 & -1 & \vert & 0\end{bmatrix} \xrightarrow{R_{1}+R_{2}\rightarrow R_{2}} \begin{bmatrix}0 & 1 & \vert & 0 \\ 0 & 0 & \vert & 0\end{bmatrix} $$

So the eigenvectors are of the form $k\begin{bmatrix}1 \\ 0\end{bmatrix}$.

Next, let us find the eigenvectors for $\lambda_{2}$. The matrix $\begin{bmatrix}1 & 3 & \vert & 0 \\ 0 & 0 & \vert & 0\end{bmatrix}$ is already in row echelon form. So the eigenvectors are of the form $k\begin{bmatrix}-3 \\ 1\end{bmatrix}$.

Therefore we can write down two linearly independent solutions $x^{(1)}=\begin{bmatrix}e^{2t} \\ 0\end{bmatrix}$ and $x^{(2)}=\begin{bmatrix}-3e^{t} \\ e^{t}\end{bmatrix}$. So a fundamental matrix is

$$ \Psi(t)=\begin{bmatrix}e^{2t} & -3e^{t} \\ 0 & e^{t}\end{bmatrix} $$

Its inverse can be easily computed to be $\Psi^{-1}=\begin{bmatrix}e^{-2t} & 3e^{-2t} \\ 0 & e^{-t}\end{bmatrix}.$

Now use the formula $v=\int\Psi^{-1}bdt$:

$$ \begin{aligned} v &= \int\begin{bmatrix}e^{-2t} & 3e^{-2t} \\ 0 & e^{-t}\end{bmatrix}\begin{bmatrix}e^{2t} \\ t\end{bmatrix}dt \\ &= \begin{bmatrix}\int(1+3te^{-2t})dt \\ \int te^{-t}dt\end{bmatrix} \\ &= \begin{bmatrix}t-\frac{3}{2}te^{-2t}-\frac{3}{4}e^{-2t}+c_{1} \\ -te^{-t}-e^{-t}+c_{2}\end{bmatrix} \end{aligned} $$

(The integrals above can be found by employing integration by parts.) Finally we can find the general solution for $x$:

$$ \begin{aligned} x &= \Psi v \\ &= \begin{bmatrix}e^{2t} & -3e^{t} \\ 0 & e^{t}\end{bmatrix}\left(\begin{bmatrix}t-\frac{3}{2}te^{-2t}-\frac{3}{4}e^{-2t} \\ -te^{-t}-e^{-t}\end{bmatrix}+\begin{bmatrix}c_{1} \\ c_{2}\end{bmatrix}\right) \\ &= \begin{bmatrix}e^{2t} & -3e^{t} \\ 0 & e^{t}\end{bmatrix}\begin{bmatrix}t-\frac{3}{2}te^{-2t}-\frac{3}{4}e^{-2t} \\ -te^{-t}-e^{-t}\end{bmatrix}+\begin{bmatrix}e^{2t} & -3e^{t} \\ 0 & e^{t}\end{bmatrix}\begin{bmatrix}c_{1} \\ c_{2}\end{bmatrix} \\ &= \begin{bmatrix}te^{2t}+\frac{3}{2}t+\frac{9}{4} \\ -t-1\end{bmatrix}+c_{1}\begin{bmatrix}e^{2t} \\ 0\end{bmatrix}+c_{2}\begin{bmatrix}-3e^{t} \\ e^{t}\end{bmatrix} \end{aligned} $$

where $c_{1}, c_{2}\in\mathbb{R}$ are arbitrary constants. $\square$
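The final answer can be double-checked by substituting it back into the system (a SymPy sketch):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[2, 3], [0, 1]])
b = sp.Matrix([sp.exp(2 * t), t])

# The general solution obtained above: particular part plus homogeneous part
x = (sp.Matrix([t * sp.exp(2 * t) + sp.Rational(3, 2) * t + sp.Rational(9, 4),
                -t - 1])
     + c1 * sp.Matrix([sp.exp(2 * t), 0])
     + c2 * sp.Matrix([-3 * sp.exp(t), sp.exp(t)]))

# x' - A x - b should vanish identically in t, c1, c2
assert sp.simplify(x.diff(t) - A * x - b) == sp.zeros(2, 1)
```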

Example

Consider the system

$$ \begin{bmatrix}x_{1} \\ x_{2}\end{bmatrix}^{\prime}=\begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}x_{1} \\ x_{2}\end{bmatrix}+\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix} $$

where $a, b, c, d, k_{1}, k_{2}$ are constants. Suppose that the coefficient matrix $A=\begin{bmatrix}a & b \\ c & d\end{bmatrix}$ has two distinct negative real eigenvalues. Show that the limits $\lim_{t\rightarrow+\infty}x_{1}(t)$ and $\lim_{t\rightarrow+\infty}x_{2}(t)$ exist and do not depend on the initial values of $x_{1}$ and $x_{2}$. Compute these limits in terms of $A, k_{1}$ and $k_{2}$.

Solution: Let the eigenvalues of $A$ be $\lambda_{1}$ and $\lambda_{2}$. Since they are not equal to each other, the matrix $A$ must be diagonalizable. So there exists an invertible matrix $P$ (which we will not attempt to compute) such that

$$ A=P\begin{bmatrix}\lambda_{1} & 0 \\ 0 & \lambda_{2}\end{bmatrix}P^{-1} $$

Consequently, we have

$$ \Psi(t)=Pe^{Jt}=P\begin{bmatrix}e^{\lambda_{1}t} & 0 \\ 0 & e^{\lambda_{2}t}\end{bmatrix} $$

$$ \Psi^{-1}(t)=\begin{bmatrix}e^{-\lambda_{1}t} & 0 \\ 0 & e^{-\lambda_{2}t}\end{bmatrix}P^{-1} $$

In order to apply the variation of parameters formula, we will need to look at $\Psi^{-1}(t)\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix}=\begin{bmatrix}e^{-\lambda_{1}t} & 0 \\ 0 & e^{-\lambda_{2}t}\end{bmatrix}P^{-1}\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix}$. The last product in this formula will again give us some vector of constants. So we can write

$$ \Psi^{-1}(t)\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix}=\begin{bmatrix}e^{-\lambda_{1}t} & 0 \\ 0 & e^{-\lambda_{2}t}\end{bmatrix}\begin{bmatrix}l_{1} \\ l_{2}\end{bmatrix}=\begin{bmatrix}l_{1}e^{-\lambda_{1}t} \\ l_{2}e^{-\lambda_{2}t}\end{bmatrix} $$

for certain constants $l_{1}, l_{2}$. Now, let us apply the variation of parameters formula:

$$ \begin{aligned} x &= \Psi(t)\int\Psi^{-1}(t)\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix}dt \\ &= \Psi(t)\int\begin{bmatrix}l_{1}e^{-\lambda_{1}t} \\ l_{2}e^{-\lambda_{2}t}\end{bmatrix}dt \\ &= \Psi(t)\left(\begin{bmatrix}-\frac{l_{1}}{\lambda_{1}}e^{-\lambda_{1}t} \\ -\frac{l_{2}}{\lambda_{2}}e^{-\lambda_{2}t}\end{bmatrix}+\begin{bmatrix}c_{1} \\ c_{2}\end{bmatrix}\right) \\ &= P\begin{bmatrix}e^{\lambda_{1}t} & 0 \\ 0 & e^{\lambda_{2}t}\end{bmatrix}\left(\begin{bmatrix}-\frac{l_{1}}{\lambda_{1}}e^{-\lambda_{1}t} \\ -\frac{l_{2}}{\lambda_{2}}e^{-\lambda_{2}t}\end{bmatrix}+\begin{bmatrix}c_{1} \\ c_{2}\end{bmatrix}\right) \\ &= P\begin{bmatrix}-\frac{l_{1}}{\lambda_{1}} \\ -\frac{l_{2}}{\lambda_{2}}\end{bmatrix}+P\begin{bmatrix}c_{1}e^{\lambda_{1}t} \\ c_{2}e^{\lambda_{2}t}\end{bmatrix}. \end{aligned} $$

When $t$ tends to infinity, the second summand above goes to 0 since both $e^{\lambda_{1}t}$ and $e^{\lambda_{2}t}$ are decaying exponentials by assumption. The first summand is a constant. Therefore, the limit exists and it is independent of the initial values because it is independent of the values of the constants $c_{1}, c_{2}$.

In order to compute the limiting values $x_{1}(\infty)$ and $x_{2}(\infty)$, notice that the derivatives of the functions $x_{1}$ and $x_{2}$ tend to 0 at infinity (to see this, we may for instance use the formula for $x$ obtained above). Note also that $A$ is invertible, since its eigenvalues are nonzero. Therefore, by considering the original system of differential equations, we must have

$$ \begin{aligned} 0 &= A\begin{bmatrix}x_{1}(\infty) \\ x_{2}(\infty)\end{bmatrix}+\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix} \\ \begin{bmatrix}x_{1}(\infty) \\ x_{2}(\infty)\end{bmatrix} &= -A^{-1}\begin{bmatrix}k_{1} \\ k_{2}\end{bmatrix} \end{aligned} $$ $\square$
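The conclusion can be illustrated numerically (a NumPy sketch; the matrix below is a hypothetical example with distinct negative eigenvalues $-2$ and $-4$). Trajectories started from two different initial conditions both approach $-A^{-1}k$:

```python
import numpy as np

# Hypothetical example: symmetric matrix with eigenvalues -2 and -4
A = np.array([[-3.0, 1.0], [1.0, -3.0]])
k = np.array([1.0, 2.0])

def integrate(x0, dt=1e-3, steps=20000):
    """Integrate x' = A x + k by forward Euler up to t = dt * steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x + k)
    return x

x_a = integrate([10.0, -5.0])
x_b = integrate([-7.0, 3.0])
limit = -np.linalg.solve(A, k)   # the predicted limit -A^{-1} k

# Both trajectories reach the same limit, independent of initial values
assert np.allclose(x_a, limit, atol=1e-6)
assert np.allclose(x_b, limit, atol=1e-6)
```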