Eigenvalues and Eigenvectors

This is a Helper Page for:
Social Networks
Systems of Linear Differential Equations


Introduction

If you are familiar with matrix multiplication, you know that a matrix times a vector produces a vector, and most of these products turn out to be different from the original vector,

i.e. A\vec{v}=\vec{x} where \vec{v}\neq\vec{x}

What if I told you that there are certain vectors which, when multiplied by a matrix, produce the original vector multiplied by a scalar?

i.e. A\vec{v}=\lambda\vec{v}, where \lambda is a scalar.

Amazing, right? These vectors are called eigenvectors. We can figure out how to compute these vectors for any square matrix (one with the same number of rows and columns)!

Definition: For a square matrix A and a non-zero vector \vec{v}, \vec{v} is an eigenvector of A if and only if \lambda\vec{v}=A\vec{v}, where \lambda is an eigenvalue of A associated with \vec{v}.
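
For a quick concrete check of the definition: if  A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} and  \vec{v} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , then  A\vec{v} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} = 2\vec{v} , so \vec{v} is an eigenvector of A with eigenvalue \lambda = 2.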

Before we begin our analysis, here are some concepts you'll need to be familiar with:

  • The determinant of a 2\times2 matrix, A=\begin{bmatrix} a & b \\ c & d \end{bmatrix}, is defined as follows:

\text{det}(A)=ad-bc, where a, b, c and d are the entries of matrix A.

  • For the more general k\times k matrix, the determinant can be reduced by minors to a set of 2×2 cases, so that: \left|\begin{array}{ccccc}a_{11}&a_{12}&a_{13}&\cdots&a_{1k}\\a_{21}&a_{22}&a_{23}&\cdots&a_{2k}\\a_{31}&a_{32}&a_{33}&\cdots&a_{3k}\\\vdots&\vdots&\vdots&\ddots&\vdots\\a_{k1}&a_{k2}&a_{k3}&\cdots&a_{kk}\end{array}\right|=a_{11}\left|\begin{array}{cccc}a_{22}&a_{23}&\cdots&a_{2k}\\\vdots&\vdots&\ddots&\vdots\\a_{k2}&a_{k3}&\cdots&a_{kk}\end{array}\right|-a_{12}\left|\begin{array}{cccc}a_{21}&a_{23}&\cdots&a_{2k}\\\vdots&\vdots&\ddots&\vdots\\a_{k1}&a_{k3}&\cdots&a_{kk}\end{array}\right|+

\cdots\pm a_{1k}\left|\begin{array}{cccc}a_{21}&a_{22}&\cdots&a_{2(k-1)}\\\vdots&\vdots&\ddots&\vdots\\a_{k1}&a_{k2}&\cdots&a_{k(k-1)}\end{array}\right|

  • Invertibility: A square matrix A is invertible if and only if the determinant of A is nonzero. Equivalently,  \text{det}(A)= 0 \iff A \text{ is not invertible} . If a square matrix A is invertible, then  A\vec{v} = \vec{k} has exactly one solution \vec{v} for every \vec{k}. (A quick numerical check of these facts appears right after this list.)
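
As a quick sanity check in code, here is a minimal Python/NumPy sketch, reusing the 2\times2 matrix from the worked example further down this page, that computes the determinant both by the formula ad - bc and with a library routine, and uses it to test invertibility:

  import numpy as np

  # The 2x2 matrix used in the worked example below
  A = np.array([[4.0, -5.0],
                [2.0, -3.0]])

  a, b = A[0]                        # first row:  a = 4, b = -5
  c, d = A[1]                        # second row: c = 2, d = -3

  det_formula = a * d - b * c        # ad - bc = -12 + 10 = -2
  det_numpy = np.linalg.det(A)       # library routine, works for any square matrix

  print(det_formula, det_numpy)                        # both are -2.0 (up to rounding)
  print("invertible:", not np.isclose(det_numpy, 0))   # True, since det(A) is nonzero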

Eigenvalues

In this section, we will see how we can construct a new matrix based on A to help us find A’s eigenvectors. Let's try to find the eigenvectors, starting from the definition:

 A\vec{v}=\lambda\vec{v}
 A\vec{v}-\lambda\vec{v}=\vec{0}
 A\vec{v}-\lambda I \vec{v}=\vec{0}, where I is the identity matrix.
(A-\lambda I)\vec{v}=\vec{0}

So we have B\vec{v}=\vec{0}, where B is the square matrix (A - λI).

If B is invertible, then the only solution for \vec{v} in B\vec{v}=\vec{0} is the zero vector. But \vec{v} cannot be zero, based on the definition of an eigenvector, so we know that the zero vector cannot be the only solution. Therefore, B is not invertible. Since B is not invertible, we know that its determinant will equal zero!

We have to ask ourselves: what values can make B have a determinant of zero? We can take the determinant of (A-\lambda I) (which B represents) and see for which values of \lambda that determinant equals zero. Since the only unknown is λ, and the determinant of an n×n matrix (A - λI) is an nth-degree polynomial in λ (we call this the characteristic polynomial), this gives us an equation we can solve for λ. The values of \lambda that satisfy it are called eigenvalues, and we will work through a simple example.
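
As an illustration of this recipe in code, here is a small NumPy sketch (the matrix M is an arbitrary made-up example): np.poly returns the coefficients of the characteristic polynomial \text{det}(\lambda I - M), and np.roots solves for the \lambda that make it zero.

  import numpy as np

  # An arbitrary 2x2 matrix, used only to illustrate the procedure
  M = np.array([[1.0, 2.0],
                [3.0, 0.0]])

  coeffs = np.poly(M)          # coefficients of det(lambda*I - M)
  lambdas = np.roots(coeffs)   # the values of lambda that make the determinant zero

  print(coeffs)    # [ 1. -1. -6.]  i.e.  lambda^2 - lambda - 6
  print(lambdas)   # [ 3. -2.]  (the same values np.linalg.eigvals(M) returns, order may differ)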

Example

Suppose we have

 A = \begin{bmatrix} 4 & -5 \\ 2 & -3 \end{bmatrix} .

To find the eigenvalues of A, we find the determinant of (A - λI):

 \begin{align}
|A-\lambda I| &= \begin{vmatrix} 4-\lambda & -5 \\ 2 & -3-\lambda \end{vmatrix} \\
&= (4-\lambda)(-3-\lambda) + 10 \\
&= \lambda^2 - \lambda - 2 \\
&= (\lambda - 2)(\lambda + 1) = 0 \\
\end{align}

So our eigenvalues are 2 and -1. We write these as λ1 = 2 and λ2 = -1. In general, an n×n matrix will have n eigenvalues because an nth degree polynomial will typically have n solutions (given that there are no repeated solutions).
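
If you want to double-check this by machine, a short NumPy call (just a sanity check, not part of the derivation) confirms the two eigenvalues:

  import numpy as np

  A = np.array([[4.0, -5.0],
                [2.0, -3.0]])
  print(np.linalg.eigvals(A))   # approximately [ 2. -1.] (order may differ)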

The General 2×2 Case

Consider the general 2×2 matrix

 A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} .

We find the eigenvalues of A by finding the determinant of (A - λI):

 \begin{align}
|A-\lambda I| &= \begin{vmatrix} a-\lambda & b \\ c & d-\lambda \end{vmatrix} \\
&= (a-\lambda)(d-\lambda) - bc \\
&= \lambda^2 - (a+d)\lambda + (ad - bc) = 0 \end{align}

Using the quadratic formula, we see that our eigenvalues are

\lambda_1 = \frac{1}{2} \left( a+d + \sqrt{(a+d)^2 - 4(ad-bc)} \right)
\lambda_2 = \frac{1}{2} \left( a+d - \sqrt{(a+d)^2 - 4(ad-bc)} \right).

Note that we will typically have two eigenvalues for a 2×2 matrix (except when the characteristic polynomial has a repeated root), since we are solving a second-degree polynomial equation. Likewise, an n×n matrix will typically have n eigenvalues, because an nth-degree polynomial typically has n solutions (again, provided there are no repeated roots).

Another way of writing the Characteristic Polynomial

We define the trace of a matrix as the sum of its diagonal entries. So the trace of A (or tr(A)) is a + d. Remember that the determinant of A is ad - bc. We can rewrite the characteristic polynomial of a general 2×2 matrix as

\begin{align}0 &= \lambda^2 - (a+d)\lambda + (ad - bc) \\
&= \lambda^2 - \text{tr} (A) \lambda + \text{det} (A) \end{align}
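
To tie the quadratic formula and the trace/determinant form together, here is a small Python sketch; the helper name eigs_2x2 is made up for illustration, and the sample matrix is the one from the example above. It computes the eigenvalues from \text{tr}(A) and \text{det}(A) and compares them with NumPy's built-in routine:

  import numpy as np

  def eigs_2x2(A):
      # Roots of lambda^2 - tr(A)*lambda + det(A) = 0, by the quadratic formula
      tr = A[0, 0] + A[1, 1]                         # a + d
      det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # ad - bc
      disc = np.sqrt(complex(tr * tr - 4.0 * det))   # complex() allows a negative discriminant
      return (tr + disc) / 2.0, (tr - disc) / 2.0

  A = np.array([[4.0, -5.0],
                [2.0, -3.0]])
  print(eigs_2x2(A))            # ((2+0j), (-1+0j))
  print(np.linalg.eigvals(A))   # [ 2. -1.]  -- the same eigenvalues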

Eigenvectors

After finding the eigenvalues of A, we must find the corresponding eigenvectors. Since we already have the equation (A - \lambda I)\vec{v} = \vec{0} available, why don't we use it? Plug one of the values of λ into the equation and find the vector(s) that satisfy it.

We continue our previous example to illustrate this idea.

Example

Remember that for our matrix

 A = \begin{bmatrix} 4 & -5 \\ 2 & -3 \end{bmatrix} , we have λ1 = 2 and λ2 = -1.

In this case, we will have two eigenvectors, one for each eigenvalue. Note that this is not always the situation. If an n×n matrix has fewer than n distinct eigenvalues (that is, the characteristic polynomial has repeated roots), then a repeated eigenvalue may have more than one independent eigenvector corresponding to it. However, in our case we have two distinct eigenvalues for a 2×2 matrix, so we know we will get exactly one eigenvector (up to scalar multiples) per eigenvalue.
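
To see the repeated-eigenvalue situation concretely, here is a short NumPy sketch with two made-up matrices: the identity matrix has the eigenvalue 1 repeated and still has two independent eigenvector directions, while the shear matrix has the eigenvalue 1 repeated but only a single eigenvector direction.

  import numpy as np

  I2 = np.array([[1.0, 0.0],      # eigenvalue 1 twice, two independent eigenvector directions
                 [0.0, 1.0]])
  S = np.array([[1.0, 1.0],       # eigenvalue 1 twice, eigenvectors only along [1, 0]
                [0.0, 1.0]])

  for M in (I2, S):
      vals, vecs = np.linalg.eig(M)   # columns of vecs are eigenvectors
      print(vals)                     # [1. 1.] in both cases
      print(vecs)                     # independent columns for I2; (numerically) parallel columns for S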

We plug our λ's into the equation (A - \lambda I)\vec{v} = \vec{0} and find our eigenvectors.

λ1 = 2:  \begin{bmatrix} 4-2 & -5 \\ 2 & -3-2 \end{bmatrix} = \begin{bmatrix} 2 & -5 \\ 2 & -5 \end{bmatrix} Note that the two rows of the matrix are identical. So what vector, when multiplied by this matrix, gives the zero vector? One that could work is \vec{v}_1 = \begin{bmatrix} 5 \\ 2 \end{bmatrix}. This is our first eigenvector.
λ2 = -1:  \begin{bmatrix} 4-(-1) & -5 \\ 2 & -3-(-1) \end{bmatrix} = \begin{bmatrix} 5 & -5 \\ 2 & -2 \end{bmatrix} While the two rows of the matrix aren't identical, they are scalar multiples of each other. A vector that works is \vec{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} . This is our second eigenvector.

Thus we have the eigenvalues and the corresponding eigenvectors for A.
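
As a numerical cross-check (a sketch only; NumPy scales its eigenvectors to unit length, so they will not look identical to ours), we can ask for the eigenvectors and verify that each returned column is a scalar multiple of the one we found by hand:

  import numpy as np

  A = np.array([[4.0, -5.0],
                [2.0, -3.0]])
  vals, vecs = np.linalg.eig(A)            # columns of vecs are unit-length eigenvectors

  for lam, v in zip(vals, vecs.T):
      assert np.allclose(A @ v, lam * v)   # the defining property A v = lambda v
      print(lam, v / v[1])                 # rescaled: a multiple of [5, 2] or of [1, 1]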

A Property of Eigenvectors

In the previous example, note that there are actually infinitely many eigenvectors. For example, \vec{v}_2 = \begin{bmatrix} 2 \\ 2 \end{bmatrix} works as well. However, all the possible eigenvectors for an eigenvalue are scalar multiples of each other. We defined eigenvalues and eigenvectors by stating

 A \vec{v} = \lambda \vec{v} , where \vec{v} is the eigenvector and λ is the eigenvalue of A.

From here, we see that the vector  c\vec{v} for any nonzero scalar c is an eigenvector too.

 \begin{align} A (c\vec{v}) &= c(A \vec{v}) \\
&= c(\lambda \vec{v}) = \lambda (c \vec{v}) \end{align} .
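
A two-line numerical check (a sketch, reusing the example matrix and the eigenvector for \lambda_1 = 2 from above) makes the same point:

  import numpy as np

  A = np.array([[4.0, -5.0],
                [2.0, -3.0]])
  v = np.array([5.0, 2.0])                       # eigenvector of A for lambda = 2
  c = -7.3                                       # any nonzero scalar
  print(np.allclose(A @ (c * v), 2 * (c * v)))   # True: c*v is still an eigenvector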

Another way to visualize this is to derive an equation for the eigenvectors in terms of their components. Suppose our eigenvector is

 \vec{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} .

Then the equation (A - \lambda I)\vec{v} = \vec{0} can be written as

 \begin{bmatrix} a - \lambda & b \\ c & d - \lambda \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} .

Multiplying this out, we get two equations

 (a - \lambda) v_1 + bv_2 = 0 and  cv_1 + (d - \lambda) v_2 = 0 .

Since they are both equal to 0, we set them equal to each other and solve for v1 and v2.

 (a - \lambda) v_1 + bv_2 = cv_1 + (d - \lambda) v_2
 (a - \lambda) v_1 - cv_1 = (d - \lambda) v_2 - bv_2
 (a - \lambda - c) v_1 = (d - \lambda - b) v_2
 \frac{v_1}{v_2} = \frac{d - \lambda - b}{a - \lambda - c}

We have shown that for an eigenvalue λ of A, the components of its eigenvectors maintain the same ratio. (If the denominator a - \lambda - c happens to be zero, the same ratio can instead be read off from either one of the two original equations.) Thus multiplying the eigenvectors by scalar constants will still work, since this ratio is preserved.

We have shown that the eigenvectors for an eigenvalue are actually a set of vectors. The eigenvector we write down as THE eigenvector is just a basis for this set of vectors.
