Topic: Matrix Proofs
Replies: 3   Last Post: May 23, 2005 9:25 AM

 Vincent Johns Posts: 89 Registered: 1/25/05
Re: Matrix Proofs
Posted: May 23, 2005 8:12 AM

dkjk@bigpond.net.au wrote:
> Hi Vincent,
>
> Vincent Johns wrote:
>

>>dkjk@bigpond.net.au wrote:
>>
>>

>>>Hi group,
>>>
>>>I need help with this proof relating
>>>to strictly upper-triangular matrices.
>>>
>>>Let A be an n x n strictly upper
>>>triangular matrix. Then the (i,j)-th
>>>entry of AA = A^2 is 0 if i >= j - 1.
>>>

[...]

>
>>Another way to do the same thing is via induction:
>>
>>Minimal nontrivial case (n = 3):
>>
>>[ 0 a b ]   [ 0 a b ]   [ 0 0 a*c ]
>>[ 0 0 c ] * [ 0 0 c ] = [ 0 0  0  ]
>>[ 0 0 0 ]   [ 0 0 0 ]   [ 0 0  0  ]
>>
>>(So the proposition is valid for matrices up to 3*3.)
>>
>>and, if M is a strictly upper-triangular matrix, we have
>>
>>[           b c ]   [           b c ]   [           g h ]
>>[ M         d e ]   [ M         d e ]   [ M*M       i j ]
>>[         ... ... ] * [       ... ... ] = [       ... ... ]
>>[           f g ]   [           f g ]   [           p q ]
>>[ 0 0 ... 0   h ]   [ 0 0 ... 0   h ]   [ 0 0 ... 0   r ]
>>[ 0 0 ... 0   0 ]   [ 0 0 ... 0   0 ]   [ 0 0 ... 0   0 ]
>>
>>where b, c, d, etc., are possibly non-zero numbers. I wish to
>>show that p = 0 and r = 0.

>
>
> I don't quite understand your notation here. If M is a matrix, then why
> have you written it as an entry of another matrix?

It's not an entry (a single element); it's a block (a submatrix) spanning
several rows and columns (maybe millions of them).

I've written it this way because M is a square matrix that is strictly
upper-triangular (the same form as the 3*3 matrices above), but now I've
added a couple of rows below it and a couple of columns to the right.
So, if M is a 3*3 matrix, the matrices I show here are 5*5 matrices; if
M is a 77*77 matrix, the ones shown are 79*79 matrices, and so on. The
matrices on the left thus follow the pattern (strictly upper triangular)
hypothesized in the conditions of your theorem. I'm trying to show that
the product matrix then always follows the pattern that the theorem
specifies for the product, no matter what size the matrix is.

The (strong) inductive hypothesis (see
http://mathworld.wolfram.com/PrincipleofStrongInduction.html) is that we
may assume, for every positive dimension k not greater than n, that a
k*k matrix M satisfies your theorem. I showed that explicitly for
k = 3, but I'm sure you can see that it works for k = 1 and k = 2 as
well (see below).

So now, if it works for k in {1, 2, 3}, let M be a 2*2 matrix and add 2
rows and 2 columns to form a 4*4 matrix. My calculations showed that
this 4*4 matrix satisfies the theorem. So we know it works for n*n
matrices for all n <= 4. Since that's true, let M be a 3*3 strictly
upper-triangular matrix, my calculations show it works for a 5*5 matrix,
etc. After a few more steps, we see it works for 79*79 matrices (n =
79). More steps get us to n = 100,000,000. Thus we are able to prove
the theorem for all natural numbers n.

Of course, we have to start somewhere, which is the reason for the first
step of the inductive hypothesis. If it bothers you that I actually
started with a 3*3 matrix for my first step instead of a 0*0 matrix, you
could think of the inductive parameter as just being 3 less than the
dimension of my matrix M that forms the upper-left part of the larger
matrices in the second part of the inductive proof. But since we're
going to include all natural numbers anyway by the time we're done,
being off by 3 isn't important in this case. What is important is that
we not miss any numbers.
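As a quick sanity check (not part of the proof, just a throwaway Python script with made-up entries), you can square strictly upper-triangular matrices of several sizes and confirm the claimed zero pattern in A^2 directly:

```python
# Check numerically: if A is strictly upper triangular, then the (i,j)
# entry of A^2 is 0 whenever i >= j - 1.  The inequality is the same in
# 0-based and 1-based indexing, since both indices shift together.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def strictly_upper(n):
    # Fill everything above the diagonal with (arbitrary) nonzero values.
    return [[(i + 2) * (j + 3) if i < j else 0 for j in range(n)]
            for i in range(n)]

for n in range(1, 8):
    A = strictly_upper(n)
    A2 = matmul(A, A)
    assert all(A2[i][j] == 0
               for i in range(n) for j in range(n) if i >= j - 1)
print("ok")
```

Of course a finite check like this proves nothing for general n; that's exactly what the induction is for.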

>
>>[           b c ]   [           b c ]   [           g h ]
>>[ M         d e ]   [ M         d e ]   [ M*M       i j ]
>>[         ... ... ] * [       ... ... ] = [       ... ... ]
>>[           f g ]   [           f g ]   [           p q ]
>>[ 0 0 ... 0   h ]   [ 0 0 ... 0   h ]   [ 0 0 ... 0   r ]
>>[ 0 0 ... 0   0 ]   [ 0 0 ... 0   0 ]   [ 0 0 ... 0   0 ]
>>
>>where b, c, d, etc., are possibly non-zero numbers. I wish to
>>show that p = 0 and r = 0.

>
> What about every other entry on the diagonal containing p and r?

Good point. The inductive hypothesis in this case claims that we've
already shown that to be true for every matrix M whose dimension is at
most 2 less than the dimension of this augmented matrix, so the main
diagonal of M*M (and everything below it) is all zeroes, and so is the
diagonal directly above it, but not necessarily anything above that.
And I explicitly demonstrated that for a 3*3 matrix. I suppose I should
have also done it for a 1*1 matrix and a 2*2 matrix as well, though
that's pretty easy to show. For the 1*1, the proof looks like this:

[ 0 ] * [ 0 ] = [ 0 ]

, and I imagine you can do the 2*2 with no trouble as well. (It's an
easy calculation.)
So, given that M*M in the augmented matrix already has zeroes everywhere
on the diagonal containing p and r, all I needed to do to show that the
augmented matrix also followed the same pattern (and thus satisfied the
statement of the theorem) was to show that p = 0 and r = 0 as well. By
doing that, I showed that all elements of that diagonal are zero.
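The inductive step can also be tried out numerically (again, just an illustration, with hypothetical helper names and arbitrary nonzero entries): embed an n*n strictly upper-triangular M in the top-left of an (n+2)*(n+2) strictly upper-triangular matrix, square it, and check that the two new first-superdiagonal entries of the square (the "p" and "r" of the diagrams) are zero, along with everything on or below that diagonal.

```python
def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def augment(M):
    # Append two columns on the right and two rows at the bottom, keeping
    # the result strictly upper triangular; the new entries above the
    # diagonal (the b, c, ..., h of the diagrams) are arbitrary nonzeros.
    n = len(M)
    big = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        for j in range(n):
            big[i][j] = M[i][j]
        big[i][n] = 10 * i + 1      # the b, d, ..., f column
        big[i][n + 1] = 10 * i + 2  # the c, e, ..., g column
    big[n][n + 1] = 7               # the entry "h"
    return big

M = [[0, 5, 6],
     [0, 0, 8],
     [0, 0, 0]]                     # the 3*3 base case
A = augment(M)                      # 5*5, still strictly upper triangular
A2 = matmul(A, A)
dim = len(A)
# In 0-based indexing, "p" sits at (2, 3) and "r" at (3, 4); both lie on
# the first superdiagonal of the product.
assert A2[2][3] == 0 and A2[3][4] == 0
assert all(A2[i][j] == 0
           for i in range(dim) for j in range(dim) if i >= j - 1)
print("ok")
```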

[...]

>>>Also, if anyone has any clues on
>>>these related problems, it would be
>>>greatly appreciated.
>>>
>>>Suppose p is a given integer satisfying
>>>1 <= p <= n -1 and that the
>>>entries b_{kj} of an n x n matrix
>>>B satisfy b_{kj} = 0 for k >= j - p.
>>>Show that the (i,j)-th entry of the
>>>product AB is zero if i >= j - (p+1).

[...]

> For the case n = 3, p = 1
>
> [ 0 a b ] [ 0 0 d ] [ 0 0 0 ]
> [ 0 0 c ] * [ 0 0 0 ] = [ 0 0 0 ]
> [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ]
>
> For n = 3, p = 2 = n - 1
>
> [ 0 a b ] [ 0 0 0 ] [ 0 0 0 ]
> [ 0 0 c ] * [ 0 0 0 ] = [ 0 0 0 ]
> [ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ]
>
> For an n x n matrix, with p = 1
>
> [           b c ]   [           h i ]
> [           d e ]   [           j k ]
> [         ... ... ] * [       ... ... ]
> [ 0 0       f g ]   [ 0 0 0       l ]
> [ 0 0 ... 0   h ]   [ 0 0 ... 0   0 ]
> [ 0 0 ... 0   0 ]   [ 0 0 ... 0   0 ]
>
> =
> [           m n ]
> [           o p ]
> [         ... ... ]
> [           q r ]
> [ 0           s ]
> [ 0 0 ... 0   0 ]
> [ 0 0 ... 0   0 ]
>
> we wish to show q = s = 0. This is
> true by your previous argument. I'm
> wondering how I would show this for
> subsequent values of p.

I claim that you can prove it for 0 <= p <= n - 1. You've already
proven it for p = 0 (the 1st half of the inductive proof).

Now, for the 2nd half, assume (strong induction) that it's true for all
p such that 0 <= p < r; can you show it for p = r? I think you can do
that by assuming that you have a matrix M matching the pattern for p =
(r - 1). If you append a row on the bottom and a column on the right
forming a slightly larger matrix (as I did before, but I used 2 rows &
columns) with the proper number of zeros, you should be able to show
that the product matrix will have an additional diagonal of zeros.
Having done so, and since, by induction, this would be true for any
natural number r (within limits imposed by n), you will then have taken
care of all possible cases and thus proven the theorem. (You might need
to use 2 extra rows & columns to make it obvious, but I don't think so.)

Since you've then proven it for 0 <= p <= n - 1, it must be true for
1 <= p <= n - 1, the original proposition (assuming n > 1, which was
kind of implied in the original statement).
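If it helps to see the generalized claim in action before proving it, here's a small numeric check (assumed helper names, arbitrary entries): with A strictly upper triangular (the p = 0 pattern) and B satisfying b_{kj} = 0 for k >= j - p, the product AB satisfies (AB)_{ij} = 0 for i >= j - (p+1). The inequalities are unchanged by switching to 0-based indices.

```python
def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def banded_upper(n, p):
    # Zero whenever row >= col - p; arbitrary nonzeros elsewhere.
    return [[(i + 1) * (j + 1) if i < j - p else 0 for j in range(n)]
            for i in range(n)]

n = 6
A = banded_upper(n, 0)              # strictly upper triangular
for p in range(1, n):
    B = banded_upper(n, p)
    AB = matmul(A, B)
    # Each multiplication by A pushes the band of zeros up one diagonal.
    assert all(AB[i][j] == 0
               for i in range(n) for j in range(n) if i >= j - (p + 1))
print("ok")
```

The check reflects the underlying reason the claim holds: a_{ik} b_{kj} can be nonzero only when i < k and k < j - p, which forces i < j - p - 1.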

You should be able to do this same thing via weak induction (which
assumes only that the immediately previous case is true, instead of all
the previous cases; see
http://mathworld.wolfram.com/PrincipleofMathematicalInduction.html and
http://mathworld.wolfram.com/PrincipleofWeakInduction.html), but you
might need more details in your explanation. Even though strong
induction assumes (slightly) more to be true than weak induction does,
either one is sufficient to complete a proof for all natural numbers,
and strong induction can be more convenient to use. It's good to keep
proofs as simple as possible, assuming you cover everything that's
necessary.

I suppose I should mention that, as with other forms of writing, you
need to keep your audience in mind when writing proofs. You may be
required to use certain proof techniques so that your proof will be
understood by people who know those techniques (or to show a teacher
that you yourself know how to use them). If, for example, you were told
to use explicit summations in proving this theorem, then that's what you
need to do, instead of what I showed you. But even if so, induction is
a terrific technique for proving theorems involving natural numbers, and
it is good to be familiar with using it. If you wish, I can show you
examples of what can go wrong if you omit half of an inductive proof.

-- Vincent Johns <vjohns@alumni.caltech.edu>
Please feel free to quote anything I say here.
