Topic: Failing Linear Algebra:
Replies: 91   Last Post: Jan 10, 2007 12:56 PM

 Dave Rusin Posts: 3,118 Registered: 12/6/04
Re: Failing Linear Algebra:
Posted: Apr 28, 2004 7:56 PM

In article <20040428191317.09060.00000518@mb-m17.aol.com>,
Anonymous wrote:

>>(a) Prove that the set M of all n by n matrices is a vector space (using
>> familiar matrix addition and scalar multiplication.)

>
>Let set M be all nXn matrices:

[Stick to fewer than 80 columns. What you displayed was probably supposed
to be a "picture" of three matrices, A, B, and C.]

Niggling little point: "Let ... be all nXn matrices" is a bit wrong.
The thing you're defining isn't going to be a matrix, and I don't see how
it can be all matrices since there are different matrices and a thing
can't be both A and B if A and B are different. What you mean to say is
"Let ... be the SET of all nXn matrices."

Did you know that at this level, mathematics is 90% grammar? Diagram
your sentence: you have a singular subject and so you need a singular
subject-complement; "the set" works, but "all ..." does not.

>Let s, t be scalars, elements of R. And let the above 3 matrices be called A,
>B, C, respectively.
>
>I'd show that A + B =

[deleted]
>also exists in set M.

OK.

>Then show A + B (above) = B + A, which is true because (a11 + b11) = (b11 +
>a11), same for all (aij + bij) = (bij + aij), since addition of scalars is
>commutative.

Good. (By the way you can indicate subscripts with an underscore: write
a_ij or a_{i,j} or something like that.)
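That entrywise reduction to scalar commutativity can be sketched in a few lines of Python (the `mat_add` helper is mine, not from the original post):

```python
# Entrywise matrix addition: (A + B)_ij = a_ij + b_ij.
def mat_add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

# Commutativity of A + B reduces to commutativity of scalar addition,
# since each entry is just a_ij + b_ij = b_ij + a_ij.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_add(A, B) == mat_add(B, A)
```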

>Then show (A+B) + C = A + (B+C) (associative).

Yes. No need to show me the proof; it's similar to the previous one.

>Then I'd multiply the zero matrix by A and show that it equals 0 matrix, which
>is also in M.

NO! Being a vector space makes no demand that you be able to multiply two
elements of M together in any way. Maybe you mean to show that multiplying
A by the _scalar_ 0 gives the zero matrix.

>Multiply I^n by A, to show that I*A = A.

NO again; the key thing is to show that multiplying by the NUMBER 1 returns
the matrix A. (In other words, you need an axiom to prevent silly things
from slipping in under the radar as a "vector space". Many theorems would
fail if you didn't insist that 1*A = A because a person could say, "Oh,
here's my vector space, and my definition of scalar multiplication is
that c*A means the zero matrix, no matter what c is". This is the only
axiom that prevents that.)
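To make the point concrete, here is a sketch in Python of that "silly" scalar multiplication (the helper name is mine); it leaves the other scalar axioms intact but visibly breaks 1*A = A:

```python
# The pathological scalar multiplication from the example above:
# c * A is defined to be the zero matrix, no matter what c is.
def bogus_smul(c, A):
    n = len(A)
    return [[0] * n for _ in range(n)]

A = [[1, 2], [3, 4]]
# The distributivity-style axioms still hold (every side is the zero
# matrix), but 1 * A = A fails -- exactly what that axiom exists to rule out.
assert bogus_smul(1, A) != A
```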

>multiply s(A) and show that:

>is also in M.

OK

>(st)(A) = (s(tA))

This has to be verified; you haven't done it yet, but yes, it's pretty
trivial too.

>And, there's the eight, right?

Um, you can't have copied the axioms right. There are different ways to
present vector spaces but there have to be axioms which combine scalar
multiplication with both additions (of numbers and of vectors). Something
like:
for all numbers c, d and all vectors v we have (c+d) v = (c v) + (d v)
and another one. (P.S. -- don't neglect the quantifiers "for all...";
many people do and I think it only adds to the confusion.)
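For instance, the axiom just quoted reduces entrywise to distributivity of scalars; a quick Python sketch (hypothetical helpers):

```python
# Scalar multiplication and matrix addition, entrywise.
def smul(c, A):
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# (c + d) v = (c v) + (d v) holds because (c + d) a_ij = c a_ij + d a_ij
# entry by entry.
c, d = 2, 3
A = [[1, 2], [3, 4]]
assert smul(c + d, A) == mat_add(smul(c, A), smul(d, A))
```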

You will probably need to copy your book's set of axioms to the newsgroup
since it looks like they're a little different from what I think is
most common. (I don't suppose it says something like, "We say that
(V, +, *, 0) is a vector space if ... ", does it?)

>>What is its dimension?
>
>N, I guess, since it has N rows.

NO! Find a basis and count how many elements are in that basis.
Of course, this will probably get us into a discussion of what a basis is...
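One way to start that hunt, sketched for n = 2 (the choice of candidate basis is mine, not from the thread): write down the matrices with a single 1 entry and count them.

```python
# Candidate basis for M when n = 2: the matrices E_ij with a single 1
# in position (i, j) and zeros elsewhere.
n = 2
basis = []
for i in range(n):
    for j in range(n):
        E = [[1 if (r, c) == (i, j) else 0 for c in range(n)]
             for r in range(n)]
        basis.append(E)

# Any A is the sum of a_ij * E_ij, so these span M; counting them
# (rather than counting rows) is what gives the dimension.
assert len(basis) == n * n
```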

>So, in homogeneous form, there is a set of N
>equations each with N variables, right?

This "homogeneous form" stuff which you've mentioned before does not
really apply. You must be thinking of some specific applications of
vector spaces.

>>(b) Prove that the map f(x) = x^t is a linear transformation from M to M

Apologies; I did not make this notation clear. Putting a little "t" northeast
of the name of a matrix means to take its transpose. Maybe your book writes
x' for this, or uses some other notation.

>For any matrix A in set M (same A as above), f(x) = x^t maps every element aij
>in A to bij = (aij)^2 in matrix B.

Ack! Ptui! I hope first of all that you don't think that this is how you
square a matrix. Second, neither the matrix-squaring map f(x) = x^2,
nor the "Kronecker-square" map which you just described, is a linear
transformation! (New exercise: prove this!)
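A concrete failure of additivity, sketched in Python: 1x1 matrices already suffice, and for 1x1 matrices the matrix-squaring and entrywise-squaring maps coincide, so one counterexample covers both.

```python
# The entrywise-squaring map described above (NOT how matrices are squared!).
def kron_square(A):
    return [[x * x for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Additivity f(A + B) = f(A) + f(B) fails: (1 + 1)^2 = 4, but 1^2 + 1^2 = 2.
A, B = [[1]], [[1]]
assert kron_square(mat_add(A, B)) != mat_add(kron_square(A), kron_square(B))
```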

[wildly bogus proof deleted]

>>What is its kernel?

Try again for the transpose map.
Incidentally, if you apply the definition of the kernel to the (nonlinear)
map f(x) = x^2 you get non-trivial things in the "kernel" when n > 1.
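For instance (my example, for n = 2): a nonzero matrix can square to the zero matrix.

```python
# Plain matrix multiplication.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# N is nonzero, yet N * N is the zero matrix, so N sits in the "kernel"
# of the (nonlinear) map f(x) = x^2 when n > 1.
N = [[0, 1], [0, 0]]
assert mat_mul(N, N) == [[0, 0], [0, 0]]
```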

>>(c) Compute the eigenvalues of f and find the eigenspaces.
>
>I don't know how this would work without actual numbers and an actual matrix.
>Since the example is an nXn matrix, we don't know how to calculate the
>determinant. We need det (A-lambda*I) to find the eigenvalues and
>eigenvectors.

NO! You yourself sort of gave the definition of eigenvectors in an
earlier post. You did not mention determinants, nor should you have!

I will give a hint: What is f o f ? (That is, what happens if you apply
f twice in a row?)
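The hint is easy to experiment with (a Python sketch; `transpose` is my helper): apply f twice and compare with what you started from.

```python
# Transpose: swap rows and columns, so the (i, j) entry becomes a_ji.
def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

# f o f is the identity map: transposing twice returns the original matrix.
A = [[1, 2], [3, 4]]
assert transpose(transpose(A)) == A
```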

dave
