Matrix

This is a Helper Page for:
Blue Fern
Summation Notation
Change of Coordinate Systems
Math for Computer Graphics and Computer Vision


A matrix is a rectangular array of real numbers. A matrix may simply be used to store numbers for later access, in which case we may call it an array, but we’re more interested in matrices that describe mathematical operations.

A matrix has one or more dimensions.

  • A one-dimensional matrix looks identical to a vector, and its dimension would be the same as its length. A shorthand notation for a one-dimensional matrix is [a_i ]
  • A two-dimensional matrix looks like
 \begin{bmatrix}
          3 & -3 \\
          2 & 5 \\
          -1 & 6 
         \end{bmatrix}

and its dimension is the pair of numbers that describes its number of rows and columns. So in this example, the dimension would be 3x2 – three rows and two columns. A shorthand notation for a two-dimensional matrix is [a_{ij}], where i is the row number and j is the column number. So in this example, a_{31} = -1. (A small code sketch after this list shows how such a matrix and its indexing look in a program.)

  • A three-dimensional matrix looks like ... well, it looks like several two-dimensional matrices back-to-back, similar to pages of a book. Its dimension is the triple of numbers that describes its number of rows, columns, and pages. Three-dimensional matrices are not widely used in computer graphics, but an example is a three-dimensional texture. We won’t go into these further here.
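
To make the indexing concrete, here is a minimal sketch in Python (our own illustration, using a plain nested list rather than any particular matrix library) of the 3x2 example above. Note that Python indexes from 0, so the mathematical entry a_{31} is a[2][0] in the code.

    # The 3x2 example matrix, stored as a list of rows.
    a = [
        [ 3, -3],
        [ 2,  5],
        [-1,  6],
    ]

    rows, cols = len(a), len(a[0])   # the dimension: 3 rows, 2 columns
    print(rows, cols)                # 3 2

    # The entry a_{31} (third row, first column) of the matrix;
    # with 0-based indexing this is a[2][0].
    print(a[2][0])                   # -1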


Matrix Operations

There are three fundamental matrix operations: addition of matrices, scalar multiplication of matrices, and matrix multiplication. Most of our definitions are given for two-dimensional matrices but are generally easy to extend to other dimensions.

Matrix Addition

If two matrices  A = [a_{ij}] and  B = [b_{ij}] have the same dimension, then the matrix sum is defined as A + B = [a_{ij} + b_{ij}]. That is, each entry in the sum is the sum of the corresponding entries in the separate matrices. For example,

 \begin{bmatrix}
          3 & 7 & 1 \\
          -1 & 0 & 6 \\
          -4 & 3 & 2
         \end{bmatrix}
+        \begin{bmatrix}
          4 & 5 & -4 \\
          -4 & 8 & -5 \\
          -1 & 6 & 7
         \end{bmatrix}
=        \begin{bmatrix}
          7 & 12 & -3 \\
          -5 & 8 & 1 \\
          -5 & 9 & 9
         \end{bmatrix}
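
The entrywise definition translates directly into code. Here is a minimal sketch in Python (the helper name matrix_add is our own; matrices are plain nested lists):

    def matrix_add(a, b):
        """Entrywise sum of two matrices with the same dimension."""
        if len(a) != len(b) or len(a[0]) != len(b[0]):
            raise ValueError("matrices must have the same dimension")
        return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
                for i in range(len(a))]

    a = [[ 3, 7,  1], [-1, 0,  6], [-4, 3, 2]]
    b = [[ 4, 5, -4], [-4, 8, -5], [-1, 6, 7]]
    print(matrix_add(a, b))   # [[7, 12, -3], [-5, 8, 1], [-5, 9, 9]]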

Scalar Multiplication

For a matrix  A = [a_{ij}] and a scalar value x, the scalar product is  xA = [x a_{ij}]. For example,

 2 \times \begin{bmatrix} 
                   2 & 4 \\
                   0 & -0.3
                  \end{bmatrix} 
=                 \begin{bmatrix} 
                   4 & 8 \\
                   0 & -0.6
                  \end{bmatrix}
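
As a quick sketch of the same idea in code (again with a nested-list matrix and a helper name of our own choosing):

    def scalar_multiply(x, a):
        """Multiply every entry of the matrix a by the scalar x."""
        return [[x * entry for entry in row] for row in a]

    print(scalar_multiply(2, [[2, 4], [0, -0.3]]))   # [[4, 8], [0, -0.6]]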

Matrix Multiplication

If the second dimension of matrix A equals the first dimension of matrix B, then we can multiply the matrices. To be more precise, if A is an  m \times n matrix and B is an  n \times k matrix, then the matrix product AB is an  m \times k matrix. The product is defined one component at a time: AB_{ij} is the dot product of the ith row of A and the jth column of B. Because of the requirement on the dimensions of the matrices, these two vectors have the same length, so the dot product makes sense. But numbers make even more sense, so let's consider the two matrices

A = \begin{bmatrix}
             1& 3 & -4\\
             0 & 2 & -0.5
            \end{bmatrix} \text{and }
        B = \begin{bmatrix}
             2 & -1 \\
             5 & 0\\
             -1 & 3
            \end{bmatrix}

Now A has dimension 2x3 and B has dimension 3x2, so AB will have dimension 2x2. The entries of AB are the dot products of the two rows of A and the two columns of B, as

\begin{align}
        AB_{11} &= \langle 1,3,-4 \rangle  \centerdot \langle 2,5,-1 \rangle = 2 + 15 + 4 = 21 \\
        AB_{12} &= \langle 1,3,-4 \rangle  \centerdot \langle -1,0,3 \rangle = -1 + 0 + -12 = -13 \\
        AB_{21} &= \langle 0,2,-0.5 \rangle  \centerdot \langle 2,5,-1 \rangle = 0 + 10 + 0.5 = 10.5 \\
        AB_{22} &= \langle 0,2,-0.5 \rangle  \centerdot \langle -1,0,3 \rangle = 0 + 0 + -1.5 = -1.5 \\
        \end{align}

so the final product is

 AB = \begin{bmatrix}
               21 & -13 \\
               10.5 & -1.5
              \end{bmatrix}
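
The row-times-column rule is easy to implement. Here is a minimal Python sketch (the names dot and matrix_multiply are our own) that reproduces the product above:

    def dot(u, v):
        """Dot product of two equal-length vectors."""
        return sum(ui * vi for ui, vi in zip(u, v))

    def matrix_multiply(a, b):
        """Product of an m x n matrix a and an n x k matrix b.
        Entry (i, j) is the dot product of row i of a and column j of b."""
        if len(a[0]) != len(b):
            raise ValueError("inner dimensions must match")
        columns_of_b = list(zip(*b))   # the k columns of b
        return [[dot(row, col) for col in columns_of_b] for row in a]

    a = [[1, 3, -4], [0, 2, -0.5]]     # dimension 2x3
    b = [[2, -1], [5, 0], [-1, 3]]     # dimension 3x2
    print(matrix_multiply(a, b))       # [[21, -13], [10.5, -1.5]]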

[Animation illustrating the process of matrix multiplication, entry by entry.]

The most interesting matrices for computer graphics are square two-dimensional matrices, either 2x2, 3x3, or 4x4. When you multiply two square matrices of the same size you get another matrix of that size, so the square matrices of a fixed size form an algebra of the kind studied in an abstract algebra course.

There are two special matrices. The zero matrix is a square matrix whose entries are all zero. If we call this Z, then clearly Z + A = A + Z = A for all matrices A for which this makes sense, so the zero matrix behaves like the number 0 for addition. The identity matrix is a square matrix that has a_{ii} = 1 and a_{ij} = 0 if i ≠ j. That is, the identity matrix is
 I = \begin{bmatrix}
              1 & 0 & 0 \\
              0 & 1 & 0 \\
              0 & 0 & 1
             \end{bmatrix}
for the 3x3 case. You should check that this matrix has the property that AI = IA = A for all square matrices A of the right size. Thus the identity matrix behaves like the number 1 for multiplication.
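
You can also check the claim numerically. Here is a short sketch that reuses the matrix_multiply helper from the multiplication section (the function name identity is our own):

    def identity(n):
        """n x n identity matrix: 1 on the diagonal, 0 elsewhere."""
        return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

    a  = [[1, 6, -4], [-8, -2, 9], [5, 7, 3]]
    i3 = identity(3)

    # I behaves like the number 1 for matrix multiplication.
    print(matrix_multiply(a, i3) == a)   # True
    print(matrix_multiply(i3, a) == a)   # True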

Matrix multiplication has one other interesting property: it is not commutative. Generally, AB ≠ BA for matrices. You can quickly see this for the matrices A and B above: while AB is a 2x2 matrix, BA is a 3x3 matrix.
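
Even when both products are defined and have the same size, the two orders generally give different answers. A quick check with two 2x2 matrices chosen purely for illustration (reusing matrix_multiply from above):

    a = [[1, 2], [3, 4]]
    b = [[0, 1], [1, 0]]

    print(matrix_multiply(a, b))   # [[2, 1], [4, 3]]
    print(matrix_multiply(b, a))   # [[3, 4], [1, 2]]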

Matrix Transposition

There is one other matrix operation that you might sometimes see. Matrix transposition replaces each row of a matrix with the corresponding column of the same matrix. This is written A^T and is defined by A_{ij}^T = A_{ji}. To make this a little clearer, look at the example

 A = \begin{bmatrix}
              1 & 6 & -4 \\
              -8 &-2 & 9 \\
              5 & 7 & 3
             \end{bmatrix} \text{ and }
       A^T = \begin{bmatrix}
              1 & -8 & 5 \\
              6 & -2 & 7 \\
              -4 & 9 & 3
             \end{bmatrix}
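
In code, transposition amounts to regrouping the entries by column. A minimal sketch (the helper name transpose is our own):

    def transpose(a):
        """Swap rows and columns: entry (i, j) of the result is entry (j, i) of a."""
        return [list(row) for row in zip(*a)]

    a = [[1, 6, -4], [-8, -2, 9], [5, 7, 3]]
    print(transpose(a))   # [[1, -8, 5], [6, -2, 7], [-4, 9, 3]]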

Matrices Are Functions

The most important reason to work with matrices is that matrices represent functions from one Cartesian space to another. We can see this if we think of a point as a vector, and then of a vector as a matrix. We have written the vector  V = \langle 3,-2,5 \rangle as  V = \begin{bmatrix} 3 \\ -2 \\ 5 \end{bmatrix} , and in this form it is a 3x1 matrix. Then we can multiply a 3x3 matrix A by V and get another 3x1 matrix as a result. For example,

if   A = \begin{bmatrix}
              1 & 6 & -4 \\
              -8 & -2 & 9 \\
              5 & 7 & 3
             \end{bmatrix} and  V = \begin{bmatrix} 3 \\ -2 \\ 5 \end{bmatrix} , then  AV = \begin{bmatrix}
             3 -12 -20 \\
             -24 + 4 + 45 \\
             15 - 14 + 15
            \end{bmatrix} 
= \begin{bmatrix} -29 \\ 25 \\ 16 \end{bmatrix}
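
Treating V as a column, the same row-times-column rule computes AV. A small sketch (the helper name matrix_vector is our own; it reuses dot from the multiplication section and stores the vector as a flat list):

    def matrix_vector(a, v):
        """Apply the matrix a to the vector v: each output entry is the
        dot product of a row of a with v."""
        return [dot(row, v) for row in a]

    a = [[1, 6, -4], [-8, -2, 9], [5, 7, 3]]
    v = [3, -2, 5]
    print(matrix_vector(a, v))   # [-29, 25, 16]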

So the matrix A represents a function on 3D Cartesian space. It is simple to see that matrix multiplication has the properties we would expect of a set of functions: it is associative, it is linear (that is, if V = V_1 + V_2, then AV = AV_1 + AV_2), and it has an identity I. In the later pages on transformations we will see exactly how we use matrices as functions in computer graphics.
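
As a quick numerical check of linearity and of the identity property (the specific vectors v1 and v2 below are just illustrative; this reuses matrix_vector and identity from the sketches above):

    v1 = [1, 0, 2]
    v2 = [2, -2, 3]
    v  = [x + y for x, y in zip(v1, v2)]      # v = v1 + v2

    left  = matrix_vector(a, v)               # A(v1 + v2)
    right = [x + y for x, y in zip(matrix_vector(a, v1), matrix_vector(a, v2))]
    print(left == right)                      # True: A is linear

    print(matrix_vector(identity(3), v) == v) # True: I acts as the identity function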


References

Page written by Steve Cunningham.

Additional Information

Another explanation of matrix operations: http://www.miislita.com/information-retrieval-tutorial/matrix-tutorial-2-matrix-operations.html
