Systems of Linear Differential Equations


U.S. and Soviet Union nuclear warheads

Differential equations have always been a popular research topic due to their various applications. A system of linear differential equations is no exception; it can be used to model arms races, simple predator-prey models, and more.



Basic Description

In 1945, the United States dropped atomic bombs on the Japanese cities Hiroshima and Nagasaki, ending World War II and establishing itself as a new superpower. The Soviet Union, while an ally of the United States during WWII, feared the bomb and spent the next few years developing its own atomic bomb, finally detonating its first nuclear weapon in 1949. This marked the beginning of a long and expensive arms race between the two powers. The competition for nuclear might, along with the countries' different ideologies (communism vs. capitalism), caused a political and psychological war during the second half of the 20th century, now known as the Cold War. During this period, both powers invested tremendous resources into their technology and weaponry, worried that the other was pulling ahead.

We will attempt to build a simple model to see the effects of the nuclear arms race. What type of model would fit best? Consider these two images.

Figure 1
Figure 2

Note that, starting in 1947, the Soviet Union rapidly increased its defense spending (Figure 2). The United States responded in kind beginning in 1948 (Figure 1). From that point on, the spending of the two powers pushed and pulled, staying within a fairly narrow range (the two higher points of United States defense spending around 1953 and 1968 are related to the Korean and Vietnam Wars). These fluctuating changes are connected with each power's analysis of both its own defense spending and that of the other power. If the United States is already spending a tremendous amount on defense, it is less willing to increase its defense spending for the following year. However, if the Soviet Union is spending a tremendous amount on defense, then the United States will increase its defense spending so as not to be left behind. In other words, we can analyze the defense spending of each country by analyzing the change in defense spending from year to year. Since derivatives are the mathematical tool for analyzing change, we will build our model around derivatives.

Let x(t) and y(t) be the defense spending of the United States and the Soviet Union respectively at time t (in years). Then x′(t) is the derivative of x with respect to t; it represents the rate at which U.S. spending is changing at time t. Likewise, y′(t) is the derivative of y with respect to t and represents the rate at which Soviet spending is changing at time t. As mentioned above, the rate of U.S. defense spending depends negatively on its current defense spending and positively on the Soviet Union's current defense spending. Correspondingly, the rate of Soviet defense spending depends negatively on its current defense spending and positively on the United States's current defense spending. The model's starting point is the year in which the arms race started, when t = 0. Let x0 and y0 be the initial (t = 0) defense spending of the United States and the Soviet Union respectively. The following equations fit these conditions.

x'(t) = ax(t) + by(t)\quad\quad\quad x(0) = x_0 , where a is negative and b is positive.
y'(t) = cx(t) + dy(t)\quad\quad\quad\, y(0) = y_0 , where c is positive and d is negative.

This is a linked system of first order linear differential equations. In other words, the equation for x′(t) states that the United States lowers its budget by |a| dollars for every dollar it spent the previous year, and raises it by b dollars for every dollar in the Soviet Union's budget. The equation for y′(t) states that the Soviet Union lowers its budget by |d| dollars for every dollar it spent the previous year, and raises it by c dollars for every dollar in the United States's budget.

Note: We must use our model with a degree of skepticism because there are far more factors affecting the arms race than just the two countries' defense spending, such as available money/resources and the spending requirements of other important programs. Nevertheless, our model is useful for analyzing the general outline of the Cold War arms race and predicting its outcomes.
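
To see how such a model behaves, here is a minimal sketch that integrates the two equations numerically with SciPy. The coefficients and starting budgets are illustrative guesses, not historical figures; only their signs (a and d negative, b and c positive) come from the model.

 import numpy as np
 from scipy.integrate import solve_ivp
 
 # Illustrative coefficients: a, d negative (own spending restrains growth),
 # b, c positive (the rival's spending drives growth). Values are made up.
 a, b, c, d = -1.0, 0.8, 0.9, -1.2
 x0, y0 = 10.0, 12.0          # made-up initial budgets
 
 def arms_race(t, u):
     x, y = u
     return [a * x + b * y, c * x + d * y]
 
 sol = solve_ivp(arms_race, (0.0, 10.0), [x0, y0], t_eval=np.linspace(0, 10, 11))
 print(sol.y[0])              # U.S. spending x(t) at each sampled year
 print(sol.y[1])              # Soviet spending y(t)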

Introduction to Systems of Linear Differential Equations

We clarify a few terms:

  • A differential equation is an equation that contains both function(s) and their derivatives.
  • An nth order differential equation involves derivatives only up to the nth derivative of any function.
  • A linked system of differential equations has at least one equation that involves more than one of the unknown functions or their derivatives.
  • A linear equation is one whose expressions are linear combinations of the functions.

Each equation in our system is a linear combination of functions x(t) and y(t) and both have one derivative of either x or y. Hence we have a linked system of first order linear differential equations.

We can use eigentheory to solve for any system of first order linear differential equations. Indeed, the outcome of our nuclear arms race model depends on the eigenvalues (read the More Mathematical Explanation for details).

A More Mathematical Explanation

Solving the Two Dimensional System

In order to understand how a system of linear differential equations is useful in making sense of the Cold War Arms Race, we first need to understand the general system.

The general system of linear differential equations in two dimensions is

x'(t) = ax(t) + by(t) \quad\quad\quad x(0) = x_0
y'(t) = cx(t) + dy(t) \quad\quad\quad\, y(0) = y_0 .

Since we are solving the general system, the coefficients a, b, c, and d may be any numbers, real or complex.

We can write this system with matrices in the form

\vec{u}'=A\vec{u},    where     A=\begin{bmatrix} a & b \\ c & d \end{bmatrix},     \vec{u} = \begin{bmatrix} x \\ y \end{bmatrix},    and     \vec{u}' = \begin{bmatrix} x' \\ y' \end{bmatrix}.

To gain an understanding of how to solve this equation, we first consider the one dimensional case.

The One Dimensional Case

If the two dimensional case involved the equation \vec{u}' = A\vec{u} with A a 2x2 matrix, then the A in the equation of the one dimensional case must be a 1x1 matrix, or just a constant. Consequently,  \vec{u} will actually be u instead; it is no longer a vector. Hence the one dimensional equation is

u' = ku, where k is a real number.

Solving this requires basic knowledge of calculus. We can write this as

u' = \frac{du}{dt} = ku (remember that u is a function of t)
\frac{1}{u} du = kdt (rearranged variables).

Integrate both sides:

\int \frac{1}{u} du = \int kdt
\ln(u) = kt + c
e^{\ln(u)} = e^{kt+c}
u = e^{kt}e^{c}
u = Ce^{kt}, where C = e^c.

Now we can apply the initial condition. Note that

u(0) = Ce^{k \cdot 0} = C(1) = C, so C = u_0.

Thus the solution to the one dimensional case is

u(t) = u_0e^{kt}.
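
As a quick numerical sanity check (a sketch; the values of k and u0 below are arbitrary choices, not from the text), we can integrate u' = ku with a simple Euler step and compare the result with u_0 e^{kt}:

 import numpy as np
 
 k, u0 = -0.5, 2.0            # arbitrary example values
 dt, T = 1e-4, 3.0
 
 u = u0
 for _ in range(int(T / dt)):
     u += dt * k * u          # Euler step for u' = k u
 
 exact = u0 * np.exp(k * T)   # closed-form solution u(t) = u0 e^{kt}
 print(u, exact)              # the two values agree to about four decimal places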

Back to the Two Dimensional System

We can make some connections between the one and two dimensional systems. We see that A in the two dimensional case is like the k in the one dimensional case. The initial condition is also a vector instead of a scalar. Thus the solution to the one dimensional case suggests that the solution to our system is \vec{u}(t) = e^{At}\vec{u}_0. We will explore the concept of raising e to a matrix later. Since the solution to the one dimensional case is u = Ce^{kt}, a good guess of the solution to the two dimensional case is  \vec{u}(t) = e^{\lambda t} \vec{v} for some λ and \vec{v}. This seemingly random equation will start to make sense once we plug our guess into the original equations and find λ and \vec{v} that work. We write  \vec{v} = \begin{bmatrix} p \\ q \end{bmatrix} .

We verify our guess by plugging it into our original system:

 p \lambda e^{\lambda t} = x'(t) = ap e^{\lambda t} + bq e^{\lambda t}
 q \lambda e^{\lambda t} = y'(t) = cp e^{\lambda t} + dq e^{\lambda t}

After we divide by eλt (which is never 0), we can rewrite our system as  \lambda \vec{v} = A \vec{v}. Thus, our guess is correct if and only if λ is an eigenvalue and  \vec{v} is an eigenvector of A.

Now we must account for the fact that a 2x2 matrix can have two eigenvalues λ1 and λ2. Since differentiation is a linear operator, any linear combination of solutions is also a solution. We write the general solution as a linear combination

 \vec{u} = e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_2 t} \vec{v}_2 .

We now apply our initial condition. When t = 0, our general solution becomes

 \vec{u}(0) = \vec{v}_1 + \vec{v}_2 .

It is rare that  \vec{v}_1 and  \vec{v}_2 add up exactly to the initial condition, so how do we solve this? The idea is to multiply the two vectors by constants c1 and c2, but we must check that the result still solves our system.


By multiplying constants to each term of our solution expression, our new proposed solution is

 \vec{u} = c_1 e^{\lambda_1 t} \vec{v}_1 + c_2 e^{\lambda_2 t} \vec{v}_2 .

We verify this by plugging into our matrix equation \vec{u}'=A\vec{u}.

 \begin{align} \vec{u}' &= \lambda_1 c_1 e^{\lambda_1 t} \vec{v}_1 + \lambda_2 c_2 e^{\lambda_2 t} \vec{v}_2 \\
&= A\vec{u} = A(c_1 e^{\lambda_1 t} \vec{v}_1 + c_2 e^{\lambda_2 t} \vec{v}_2) \\
&= A(c_1 e^{\lambda_1 t} \vec{v}_1) + A(c_2 e^{\lambda_2 t} \vec{v}_2) \text{ (Matrix multiplication is distributive.)} \\
&= c_1 e^{\lambda_1 t} (A \vec{v}_1) + c_2 e^{\lambda_2 t} (A \vec{v}_2) \text{ (Scalars pass through.)} \\
&= c_1 e^{\lambda_1 t} (\lambda_1 \vec{v}_1) + c_2 e^{\lambda_2 t} (\lambda_2 \vec{v}_2) \\
&= \lambda_1 c_1 e^{\lambda_1 t} \vec{v}_1 + \lambda_2 c_2 e^{\lambda_2 t} \vec{v}_2 \quad \blacksquare
\end{align}

In fact, multiplying the terms by constants is equivalent to taking a linear combination of our basis solutions e^{\lambda_1 t} \vec{v}_1 and e^{\lambda_2 t} \vec{v}_2. Remember that differentiation is a linear transformation, so linear combinations of existing solutions are naturally solutions as well.


Now that we have a more refined general solution, we need to know how to solve for the constants. Our initial condition now becomes

 \vec{u}(0) = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = c_1\begin{bmatrix} p_1 \\ q_1 \end{bmatrix} + c_2\begin{bmatrix} p_2 \\ q_2 \end{bmatrix} .

We can rewrite this as

 \begin{bmatrix} p_1 & p_2 \\ q_1 & q_2 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} , which can be solved using Gaussian elimination.

Thus we have our particular solution for any initial condition. With the constants solvable, our general solution is

 \vec{u} = c_1 e^{\lambda_1 t} \vec{v}_1 + c_2 e^{\lambda_2 t} \vec{v}_2 .
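
In practice, the constants can be found with any linear solver rather than by hand. A minimal sketch, using for concreteness the eigenvectors and initial condition of the arms race example that follows:

 import numpy as np
 
 # Columns of P are the eigenvectors v1 = (-6, 5) and v2 = (1, 1)
 P = np.array([[-6.0, 1.0],
               [ 5.0, 1.0]])
 u0 = np.array([13.0, 15.0])   # initial condition (x0, y0)
 
 c = np.linalg.solve(P, u0)    # solves P c = u0, i.e. Gaussian elimination
 print(c)                      # -> [c1, c2] = [2/11, 155/11]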

Arms Race Example

We will solve a system that follows the rules of our arms race model. Since the real Cold War arms race is hard to model, we will make assumptions about the actions of the United States and the Soviet Union. Assume that the United States lowers its budget by 3 dollars for every dollar it spent the previous year, and raises it by 6 dollars for every dollar in the Soviet Union's budget. Further assume that the Soviet Union lowers its budget by 2 dollars for every dollar it spent the previous year, and raises it by 5 dollars for every dollar in the United States's budget. Finally assume that the United States defense spending is 130 billion dollars and the Soviet Union defense spending is 150 billion dollars at t = 0.

x'(t) = -3x(t) + 6y(t) \quad\quad\quad x(0)=13
y'(t) = 5x(t) - 2y(t) \quad\quad\quad\;\;\; y(0)=15.

All of the constants are made up. The values of x and y are in tens of billions of dollars.

We can rewrite this as

 \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} -3 & 6 \\ 5 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} .

To find the eigenvalues of A, remember that λ is an eigenvalue if and only if the determinant of (A - λI) is 0:

 \begin{align}
|A-\lambda I| &= \begin{vmatrix} -3-\lambda & 6 \\ 5 & -2-\lambda \end{vmatrix} \\
&= (-3-\lambda)(-2-\lambda) - 30 \\
&= \lambda^2 + 5\lambda - 24 \\
&= (\lambda + 8)(\lambda - 3) = 0 \end{align}

So our eigenvalues are λ = -8, 3. Using the eigenvalues, find the eigenvectors:

For λ1 = -8:  A-\lambda I = \begin{bmatrix} 5&6 \\ 5&6 \end{bmatrix} , so  \vec{v}_1 = \begin{bmatrix} -6 \\ 5 \end{bmatrix} .
For λ2 = 3:  A-\lambda I = \begin{bmatrix} -6&6 \\ 5&-5 \end{bmatrix} , so  \vec{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} .

Now we will find the constants for our initial condition. Remember that we can use Gaussian elimination to find them.

Start with  \left[\begin{array}{rr|r} -6 & 1 & 13 \\ 5 & 1 & 15 \end{array}\right] , which after Gaussian elimination becomes  \left[\begin{array}{rr|r} 1 & 0 & \frac{2}{11} \\[6pt] 0 & 1 & \frac{155}{11} \end{array}\right] , so c_1 = \frac{2}{11} and c_2 = \frac{155}{11}.

Finally, we just plug all of our values back into our general solution to get our particular solution.

 \vec{u} = \begin{bmatrix} x \\ y \end{bmatrix} = \frac{2}{11} e^{-8t} \begin{bmatrix} -6 \\ 5 \end{bmatrix} + \frac{155}{11} e^{3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix}

Thus we have the equations

 x(t) = \frac{-12}{11} e^{-8t} + \frac{155}{11} e^{3t}
 y(t) = \frac{10}{11} e^{-8t} + \frac{155}{11} e^{3t} .

These equations satisfy both the system of differential equations and the initial condition (check if doubtful).
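
We can also check these formulas numerically. The sketch below uses NumPy to confirm the eigenvalues, the initial condition, and (at an arbitrary sample time) the differential equation itself:

 import numpy as np
 
 A = np.array([[-3.0, 6.0],
               [ 5.0, -2.0]])
 print(np.linalg.eigvals(A))      # -> [-8., 3.] (possibly listed in another order)
 
 def u(t):
     # Particular solution found above: (x(t), y(t))
     return (2/11) * np.exp(-8*t) * np.array([-6.0, 5.0]) \
          + (155/11) * np.exp(3*t) * np.array([1.0, 1.0])
 
 print(u(0.0))                    # -> [13., 15.], the initial condition
 
 t = 0.7                          # arbitrary sample time
 du_dt = (u(t + 1e-6) - u(t - 1e-6)) / 2e-6      # numerical derivative
 print(np.allclose(du_dt, A @ u(t), rtol=1e-4))  # -> True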

We can now use these equations to help analyze the results of our hypothetical arms race. We are interested in the long-run outcome, many years later, so we can get a good estimate by taking the limit of x(t) and y(t) as t goes to ∞.

 \lim_{t \to \infty}x(t) = \lim_{t \to \infty}\left(\frac{-12}{11} e^{-8t} + \frac{155}{11} e^{3t}\right) = \frac{-12}{11} (0) + \frac{155}{11} (\infty) = \infty
 \lim_{t \to \infty}y(t) = \lim_{t \to \infty}\left(\frac{10}{11} e^{-8t} + \frac{155}{11} e^{3t}\right) = \frac{10}{11} (0) + \frac{155}{11} (\infty) = \infty

The results of this hypothetical arms race are disastrous; they indicate that tensions between the United States and the Soviet Union will cause the arms race to spiral out of control, with both countries investing more and more of their resources into defense. The likely outcome is the economic collapse of at least one of the two powers. Thus, both countries should lessen the military and nuclear competition to avoid national turmoil.

We can represent our arms race example visually:

Figure 3

This is a vector field of all the paths in our made up model. The horizontal axis represents U.S. defense spending, and the vertical axis represents Soviet defense spending. In order to find the solution path for our initial condition, find the point (13, 15) on the plot and follow the arrows. While the vector field of all four quadrants is displayed, we are mainly interested in the first quadrant; it makes no sense for either country to have negative defense spending. Note that regardless of where the initial condition is in the first quadrant, the arrows lead to positive infinity. This is an unstable system, that is, a system whose solutions go off to infinity. Mathematically, the only factor that determines the stability of a system is its eigenvalues (more about this later).

Figure 3 also shows the vital role of the eigenvectors. Using Geometer's Sketchpad, we calculated the slopes of the two predominant lines. Line AC corresponds to \vec{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} ; they both have a slope of 1. Similarly, line AB corresponds to \vec{v}_1 = \begin{bmatrix} -6 \\ 5 \end{bmatrix} ; they both have a slope of \frac{5}{-6}, or about -0.833. These eigenvectors dictate the possible paths our model can follow given different initial conditions.

The Complex Eigenvalue Case

So far, we have explored systems with real eigenvalues. Now we will investigate a system that contains complex eigenvalues.

In physics, springs are modeled by an equation known as Hooke's law. In an ideal environment (no air resistance, no friction, no heat generated by movement, etc.), an object of mass m, when set into motion attached to a spring, should oscillate forever. Hooke's law states that the mass's displacement x from the equilibrium and the force F the spring exerts on the mass at time t are linearly related by a constant k, the spring constant. This constant quantifies the stiffness of the spring. Furthermore, this linear relationship is negative; the force acts in the direction opposite to the displacement. We can write Hooke's law as

F = -kx.

We combine Hooke's law with Newton's second law of motion, F = ma, to get

F = ma = -kx, so a = -\frac{k}{m} x.

Remember that acceleration a is the second derivative of displacement x. Thus we observe that Hooke's law is actually the differential equation

x'' = -\frac{k}{m} x.

For the sake of simplicity, we assume k = m = 1. The only function in our differential equation is the displacement x, and it is a function of time t. The goal is to find the equation for x in terms of t. Note that our equation is actually a second order differential equation. So how do we solve this? We transform the equation into two first order differential equations by introducing a dummy variable y. Set y = x', so x'' = y' = -x. In physics, y is the velocity of the mass at time t (the derivative of displacement is velocity). At time t = 0, the displacement is x0 and the velocity is 0 (the velocity of the mass the instant it is released is 0). Now we have the system of first order linear differential equations

 x' = y \quad\quad\quad x(0) = x_0
 y' = -x \quad\quad\, y(0) = 0 .

From here, solving this is the same as solving any other system of first order linear differential equations. We write this as the matrix equation

 \vec{u}' = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = A \vec{u} .

Solve for the eigenvalues of A:

 |A-\lambda I| = \begin{vmatrix} 0-\lambda & 1 \\ -1 & 0-\lambda \end{vmatrix} = \lambda^2 + 1 = 0

So our eigenvalues are λ = i, -i. Using these eigenvalues, we find the eigenvectors.

For λ1 = i:  A-\lambda I = \begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix} , so \vec{v}_1 = \begin{bmatrix} 1 \\ i \end{bmatrix} .
For λ2 = -i:  A-\lambda I = \begin{bmatrix} i & 1 \\ -1 & i \end{bmatrix} , so \vec{v}_2 = \begin{bmatrix} 1 \\ -i \end{bmatrix} .

Now we will find the constants for our initial condition. Since we do not have a specific starting value for x, we will solve the system of equations directly instead of using Gaussian elimination. When t = 0, we have

 x(0) = x_0 = c_1 + c_2 and  y(0) = 0 = ic_1 - ic_2 .

Looking at the second equation, we see that since ic1 = ic2, c1 = c2. Since the constants are equal, we will just call both constants c. We plug this fact into the first equation and get x0 = 2c, so c = \frac{x_0}{2}. Our general solution is then

 \vec{u} = \begin{bmatrix} x \\ y \end{bmatrix} = \frac{x_0}{2} \left(e^{it} \begin{bmatrix} 1 \\ i \end{bmatrix} + e^{-it} \begin{bmatrix} 1 \\ -i \end{bmatrix} \right) .

Remember that we are only interested in x, the displacement of the mass. The general equation for x is

 x(t) = \frac{x_0}{2}(e^{it}+e^{-it}) .

Our solution still has complex numbers. We use Euler's formula to understand what raising e to a complex power means. Euler's formula states

 e^{ix} = \cos x + i\sin x for any x.

We now apply Euler's formula to the equation for x:

 \begin{align} x(t) &= \frac{x_0}{2}(e^{it}+e^{-it}) \\
&= \frac{x_0}{2}(e^{it}+e^{i(-t)}) \\
&= \frac{x_0}{2}(\cos t + i\sin t + \cos(-t) + i\sin(-t)) \\
&= \frac{x_0}{2}(\cos t + i\sin t + \cos t - i\sin t) \text{ (Recall that } \cos(-x) = \cos x \text{ and } \sin(-x) = -\sin x \text{)} \\
&= \frac{x_0}{2}(2 \cos t) = x_0 \cos t \text{ .}
\end{align}
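
A quick numerical check of this identity (a sketch; the value x0 = 1.5 is an arbitrary choice):

 import numpy as np
 
 x0 = 1.5
 t = np.linspace(0.0, 10.0, 200)
 
 # Solution built from the complex eigenvalue pair ...
 x_complex = (x0 / 2) * (np.exp(1j * t) + np.exp(-1j * t))
 
 # ... and the real form obtained via Euler's formula
 x_real = x0 * np.cos(t)
 
 print(np.allclose(x_complex.real, x_real))   # -> True
 print(np.max(np.abs(x_complex.imag)))        # -> ~0, the imaginary parts cancel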

The form of our solution is a constant multiplied by some type of function, quite similar to the real eigenvalue case. The main difference is the function itself: the function for the complex eigenvalue case is trigonometric while the function for the real eigenvalue case is exponential. This provides insight into the nature of complex numbers. Complex numbers might not be so "imaginary" after all; they seem to reside in a realm parallel to real numbers.

Just like with real eigenvalues, we can use a vector field to visualize our solution:

Figure 4

This solution is very interesting. Unlike the arms race model, we are interested in all four quadrants, since velocity and displacement can both be negative. Note that regardless of where we start (except the origin), we will orbit the origin in a perfect circle. Does that make sense? In an ideal spring system, the mass moves back and forth forever. Furthermore, the displacement and the velocity are negatively related: the velocity is at a maximum when the displacement is 0, and the displacement is at a maximum when the velocity is 0. The vector field portrays both properties. As for stability, the system does not end up at infinity. We call this type of system neutrally stable, neutrally because it does not approach the origin and stable because it does not reach infinity. The different stabilities of the arms race model and Hooke's law give more insight into how important eigenvalues are to the stability of a system.

Stability of the Two Dimensional System

We judge the stability of a system by what happens as time goes to infinity. Each system also has an equilibrium; in our arms race example, the equilibrium is the origin (as it is in most cases). An unstable system will go to infinity as time goes to infinity, and a stable system will not. There are two types of stable systems: asymptotically stable systems and neutrally stable systems. Neutrally stable systems circle in a closed path around the equilibrium (like Hooke's law). Asymptotically stable systems approach the equilibrium as time goes to infinity. The arms race example that we worked through is a case of an unstable system with a saddle point equilibrium.

As mentioned above, the eigenvalues of a system are the only things required to determine the system's stability. We use our arms race example to help explain this. Since we judge the stability of a system by what happens as time goes to infinity, we can take the limit of our solution as time goes to infinity. Note that the only factor that affects the outcome of the solution (whether it converges or diverges) is the exponents on the e's, and those exponents are based on the eigenvalues. Thus, we can explore the relationship between the eigenvalues of our system and its stability.

For a 2x2 matrix, another way of writing the characteristic polynomial is

 \lambda^2 - trA\lambda + detA = 0 .

Thus we know by the quadratic formula that

 \lambda_1 = \frac{1}{2} \left( trA + \sqrt{(trA)^2 - 4detA} \right)
 \lambda_2 = \frac{1}{2} \left( trA - \sqrt{(trA)^2 - 4detA} \right) .

With these in mind, we will analyze each type of stability and derive inequalities for λ in terms of detA and trA.

Stable Systems

Asymptotically Stable

When we take the limit of our solution as time goes to infinity, the only factor that determines whether the solution goes to infinity is the exponents on e, that is, the eigenvalues. For asymptotically stable systems, we want the system to approach 0, which means we want the real components of our eigenvalues to be negative. Since trA = λ1 + λ2, trA must then be negative. For the real eigenvalue case, we look at λ1 because if λ1 is negative, then λ2 (which is smaller) is negative as well. We observe

 \lambda_1 = \frac{1}{2} \left( trA + \sqrt{(trA)^2 - 4detA} \right)<0
 trA < -\sqrt{(trA)^2 - 4detA}
 (trA)^2 > (trA)^2 - 4detA (Squared both sides, note that since trA < 0, the inequality sign changes.)
 detA > 0 .

This condition also works for the complex eigenvalue case, since complex eigenvalues require the expression under the square root to be negative, which already forces detA > 0. Otherwise, the only difference between real and complex eigenvalues is how the solution converges to the equilibrium. For complex eigenvalues, the vector field actually spirals into the equilibrium. For real eigenvalues, the trajectory approaches the origin along some other curve. Regardless, we conclude that a system is asymptotically stable if and only if detA > 0 and trA < 0.

Neutrally Stable

Hooke's law is an example of a neutrally stable system. A property of a neutrally stable system is that it must have purely imaginary eigenvalues. Any nonzero real part in the eigenvalues causes the system to either approach the equilibrium or go to infinity. In order to have purely imaginary eigenvalues, we must have trA = 0 and the expression under the square root must be less than zero. Since trA = 0, we have -4detA < 0, so detA > 0. Thus a system is neutrally stable if and only if detA > 0 and trA = 0.

Unstable Systems

Unstable systems approach infinity as time passes, so the real components of the eigenvalues must be positive (here we consider the case without a saddle point, where both eigenvalues have positive real part). We look at λ2 this time since if the real component of λ2 is positive, the real component of λ1 is positive as well. For the real eigenvalue case, we have

 \lambda_2 = \frac{1}{2} \left( trA - \sqrt{(trA)^2 - 4detA} \right) > 0
 trA > \sqrt{(trA)^2 - 4detA}
 (trA)^2 > (trA)^2 - 4detA (Squared both sides. Since both are positive, no need to change inequality sign.)
 detA > 0 .

We get the same result as the asymptotically stable case (except that now trA > 0), and we stated before that this condition is also required for the complex eigenvalue case. Thus a system is unstable with no saddle point if and only if detA > 0 and trA > 0.

Saddle Point

A special case arises when we observe the region around the equilibrium and notice that half of the region is stable while the other half is unstable. In this case, the equilibrium is called a saddle point. The eigenvalues for this case are of opposite sign: one is positive and the other negative. Hence the eigenvalues must also be real, since complex eigenvalues come as a conjugate pair and share the same real component. The arms race example is a great example of a saddle point. Observe that at the origin, the vectors from quadrants II and IV seem to be stable while the vectors from quadrants I and III seem to be unstable. Even though half the vector field seems to be stable, we still characterize this case as unstable. Remember how, in finding the limits of our solution, the result reached infinity. While the part of the expression with the negative eigenvalue converged to a number, the other part went to infinity, forcing the entire system to infinity. Thus the only way for a saddle point system to stay bounded is if the initial condition is a scalar multiple of the eigenvector belonging to the negative eigenvalue. Unlike the previous few cases, trA has no obvious restriction. We split this into 3 cases: trA < 0, trA = 0, trA > 0.

Case 1 (trA < 0)

λ2 is always negative, so we focus on making λ1 positive.

 \lambda_1 = \frac{1}{2} \left( trA + \sqrt{(trA)^2 - 4detA} \right) > 0
 \sqrt{(trA)^2 - 4detA} > -trA
 (trA)^2 - 4detA > (trA)^2 (Squared both sides. Since -trA is positive, we do not need to change sign.)
 detA < 0

The only condition for case 1 is detA < 0.

Case 2 (trA = 0)

We rewrite λ1 and λ2 taking into account trA = 0.

 \lambda_1 = \frac{1}{2} \sqrt{-4detA}
 \lambda_2 = -\frac{1}{2} \sqrt{-4detA}

We need the eigenvalues to be real and nonzero, so we need -4detA > 0, that is, detA < 0. With that condition, λ1 is automatically positive and λ2 is automatically negative. Thus the only condition for case 2 is once again detA < 0.

Case 3 (trA > 0)

λ1 is always positive, so we focus on making λ2 negative.

 \lambda_2 = \frac{1}{2} \left( trA - \sqrt{(trA)^2 - 4detA} \right) < 0
 trA < \sqrt{(trA)^2 - 4detA}
 (trA)^2 < (trA)^2 - 4detA (Squared both sides. Since trA is positive, we do not need to change sign.)
 detA < 0

The only condition for case 3 is detA < 0.


Thus the only condition to create a saddle point is detA < 0.

Conclusion

We summarize our findings here. The conditions for each case are listed below.

Stable System

  • Asymptotically stable: detA > 0 and trA < 0.
  • Neutrally stable: detA > 0 and trA = 0.

Unstable

  • No saddle point: detA > 0 and trA > 0.
  • With saddle point: detA < 0.

We can use a graph to summarize all our results.

Figure 5

From the image, we see that all of the possible stability cases are covered. The image is actually more specific than our analysis, breaking the asymptotically stable case and the unstable case with no saddle point into real and complex eigenvalues. This is not too challenging; just set the expression under the square root in λ1 and λ2 to less than zero for complex eigenvalues or greater than zero for real eigenvalues.
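
These conditions translate directly into a small classifier based only on trA and detA. A minimal sketch, under the same assumptions as the analysis above (a 2x2 system with nonzero eigenvalues):

 import numpy as np
 
 def classify(A):
     # Classify the stability of u' = A u for a 2x2 matrix A,
     # using only the trace and determinant as derived above.
     tr, det = np.trace(A), np.linalg.det(A)
     if det < 0:
         return "unstable (saddle point)"
     if det > 0 and tr < 0:
         return "asymptotically stable"
     if det > 0 and tr == 0:
         return "neutrally stable"
     if det > 0 and tr > 0:
         return "unstable (no saddle point)"
     return "borderline case (detA = 0), not covered above"
 
 print(classify(np.array([[-3.0, 6.0], [5.0, -2.0]])))  # arms race -> saddle point
 print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # Hooke's law -> neutrally stable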

The More Rigorous Proof

Requires a competent understanding of infinite series, Taylor series, and matrix algebra.

We have mentioned above that since the one-dimensional case is u=u_0 e^{kt} for initial condition u0, the solution to our system is \vec{u}(t) = e^{At}\vec{u}_0. Using Taylor series, we define

e^A = \sum_{n=0}^{\infty}\frac{A^n}{n!} for a square kxk matrix A. Note: A^0 = I.

Now we have to show that \vec{u} = e^{tA}\vec{u}_0 solves \vec{u}'=A\vec{u}:

\begin{align} 
\vec{u}' = \frac{d}{dt} (e^{tA} \vec{u}_0) & = \frac{d}{dt} \left( \left(\sum_{n=0}^{\infty}\frac{(tA)^n}{n!} \right) \vec{u}_0 \right) \\
& = \left( \frac{d}{dt}\sum_{n=0}^{\infty}\frac{t^nA^n}{n!} \right)\vec{u}_0 \text{ (}\vec{u}_0 \text{ is a constant)} \\
& = \left( \sum_{n=0}^{\infty} \frac{d}{dt} \left( \frac{t^n A^n}{n!} \right) \right) \vec{u}_0 \\
& = \left( \sum_{n=1}^{\infty}\frac{n t^{n-1} A^n}{n!} \right) \vec{u}_0 \text{ (Note the index change)} \\ 
& = \left( A \sum_{n=1}^{\infty} \frac{t^{n-1}A^{n-1}}{(n-1)!} \right) \vec{u}_0 \\
& = \left( A \sum_{j=0}^{\infty} \frac{(tA)^j}{j!} \right) \vec{u}_0 \\
& = Ae^{tA} \vec{u}_0 = A \vec{u} \quad \blacksquare
\end{align} .

So we know that \vec{u} = e^{tA}\vec{u}_0 solves \vec{u}'=A\vec{u}. But what does e^{tA}\vec{u}_0 mean? We use diagonalization. If the sum of the dimensions of a square matrix A's eigenspaces is equal to k (the dimension of A), then we can write A as A=PLP^{-1}, where P has the eigenvectors of A as its columns and L is a diagonal matrix with the eigenvalues of A as its diagonal entries. The eigenvalues of A in L must match up with the eigenvectors in P, meaning that if the jth column of P is a given eigenvector, then the jth diagonal entry of L must be the corresponding eigenvalue. Let us first consider e^{tA}.

 \begin{align} e^{tA} = e^{t(PLP^{-1})} = e^{P(tL)P^{-1}} &= \sum_n \frac{(PtLP^{-1})^n}{n!} \\
&= \sum_n \frac{P(tL)^nP^{-1}}{n!} \text{  (} (PLP^{-1})^n = P(L)^nP^{-1} \text{ is a property of diagonalization.)} \\
&= P \left(\sum_n \frac{(tL)^n}{n!} \right) P^{-1} \\
&= P \left(\sum_n \begin{bmatrix} (\lambda_1 t)^n/n! & & & \\ & (\lambda_2 t)^n/n! & & \\ & & \ddots & \\ & & & (\lambda_k t)^n/n! \end{bmatrix} \right) P^{-1} \\
&= P \begin{bmatrix} \displaystyle\sum\limits_{n} (\lambda_1 t)^n/n! & & & \\ & \displaystyle\sum\limits_{n} (\lambda_2 t)^n/n! & & \\ & & \ddots & \\ & & & \displaystyle\sum\limits_{n} (\lambda_k t)^n/n! \end{bmatrix} P^{-1} \\
&= P \begin{bmatrix} e^{\lambda_1 t} & & & \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ & & & e^{\lambda_k t} \end{bmatrix} P^{-1} \text{  (Remember Taylor Series.)}
\end{align}

Before we continue, consider P^{-1}\vec{u}_0. Remember in solving the two dimensional case, we made up constants c1 and c2 to satisfy the initial condition. For this more rigorous proof, we define

 \vec{c} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_k \end{bmatrix} .

We solve for the constants by knowing that P \vec{c} = \vec{u}_0 , so \vec{c} = P^{-1} \vec{u}_0. Thus

 \begin{align} e^{tA}\vec{u}_0 &= P \begin{bmatrix} e^{\lambda_1 t} & & & \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ & & & e^{\lambda_k t} \end{bmatrix} P^{-1}\vec{u}_0 \\
&= P \begin{bmatrix} e^{\lambda_1 t} & & & \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ & & & e^{\lambda_k t} \end{bmatrix} \vec{c} \\
&= P \begin{bmatrix} c_1 e^{\lambda_1 t} \\ c_2 e^{\lambda_2 t} \\ \vdots \\ c_k e^{\lambda_k t} \end{bmatrix} = \sum_{i=1}^k c_i e^{\lambda_i t} \vec{v}_i \text{ .} \end{align}

When k = 2, note that this final expression expands into our two dimensional general solution. Thus the solution to any system of differential equations that can be written in the form  \vec{u}' = A\vec{u} is a linear combination of exponential functions with the eigenvectors and eigenvalues of A.
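
The diagonalization formula can also be checked against a direct computation of the matrix exponential. A sketch using SciPy's expm routine (which computes e^{tA} without diagonalizing) and the arms race matrix from before:

 import numpy as np
 from scipy.linalg import expm
 
 A = np.array([[-3.0, 6.0],
               [ 5.0, -2.0]])
 t = 0.3
 
 eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
 via_diag = P @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(P)   # P e^{tL} P^{-1}
 direct = expm(t * A)                 # matrix exponential computed directly
 
 print(np.allclose(via_diag, direct)) # -> True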


Why It's Interesting

We have shown how linear differential equations are highly applicable. They can model simple arms races and ideal springs governed by Hooke's law. We can use linear differential equations for any system whose derivatives and functions can be written as linear combinations of one another. Other examples include heat flow, bank interest, basic predator-prey models, and population growth over time. Through the Hooke's law case, we saw that our method for solving linear differential equations also works for second order equations. In fact, we can write any kth order differential equation as a system of k first-order differential equations.
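
As an illustration of that last point, here is a minimal sketch of the standard companion-matrix construction; the helper name and the example coefficients are our own, not from the article:

 import numpy as np
 
 def companion(coeffs):
     # Matrix A of the first-order system u' = A u equivalent to
     #   x^(k) = coeffs[0]*x + coeffs[1]*x' + ... + coeffs[k-1]*x^(k-1),
     # where u = (x, x', ..., x^(k-1)).
     k = len(coeffs)
     A = np.zeros((k, k))
     A[:-1, 1:] = np.eye(k - 1)   # derivative of each component is the next one
     A[-1, :] = coeffs            # last row encodes the original equation
     return A
 
 # Hooke's law x'' = -x as a 2x2 system: recovers A = [[0, 1], [-1, 0]]
 print(companion([-1.0, 0.0]))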

Even if we ignore their applicability, linear differential equations are a fascinating topic to study. They help show how differentiation is a linear transformation: we write the system as a matrix equation and obtain a linear combination as the solution. Furthermore, they emphasize the importance of eigentheory. Without eigentheory, we could not have solved the equations, much less determined the stability of our system. Eigentheory is not just a matrix stretching or shrinking a vector; it contains one of the core ideas of linear algebra, scalar multiplication. Coupled with the other core idea of linear algebra, linear combinations, eigentheory is an essential tool for linear algebra and all of mathematics.

