
Re: Failing Linear Algebra:
Posted: Apr 28, 2004 7:56 PM


In article <20040428191317.09060.00000518@mbm17.aol.com>, Anonymous wrote:
>> (a) Prove that the set M of all n by n matrices is a vector space
>> (using familiar matrix addition and scalar multiplication.)
>
> Let set M be all nXn matrices:
[Stick to fewer than 80 columns. What you displayed was probably supposed to be a "picture" of three matrices, A, B, and C.]
Niggling little point: "Let ... be all nXn matrices" is a bit wrong. The thing you're defining isn't going to be a matrix, and I don't see how it can be all matrices since there are different matrices and a thing can't be both A and B if A and B are different. What you mean to say is "Let ... be the SET of all nXn matrices."
Did you know that at this level, mathematics is 90% grammar? Diagram your sentence: you have a singular subject, so you need a singular subject complement; "the set" works, but "all ..." does not.
> Let s, t be scalars, elements of R. And let the above 3 matrices be
> called A, B, C, respectively.
>
> I'd show that A + B = [deleted]
> also exists in set M.
OK.
> Then show A + B (above) = B + A, which is true because
> (a11 + b11) = (b11 + a11), same for all (aij + bij) = (bij + aij),
> since addition of scalars is commutative.
Good. (By the way, you can indicate subscripts with an underscore: write a_ij or a_{i,j} or something like that.)
>Then show (A+B) + C = A + (B+C) (associative).
Yes. No need to show me the proof; it's similar to the previous one.
> Then I'd multiply the zero matrix by A and show that it equals 0 matrix,
> which is also in M.
NO! Being a vector space makes no demand that you be able to multiply two elements of M together in any way. Maybe you mean to show that multiplying A by the _scalar_ 0 gives the zero matrix.
>Multiply I^n by A, to show that I*A = A.
NO again; the key thing is to show that multiplying by the NUMBER 1 returns the matrix A. (In other words, you need an axiom to prevent silly things from slipping in under the radar as a "vector space". Many theorems would fail if you didn't insist that 1*A = A because a person could say, "Oh, here's my vector space, and my definition of scalar multiplication is that c*A means the zero matrix, no matter what c is". This is the only axiom that prevents that.)
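To make that concrete, here is a toy Python sketch (my own illustration, not anything from your book) of exactly that "silly" scalar multiplication, and of the axiom 1*A = A that rules it out. Matrices are represented as tuples of tuples.

```python
# Toy illustration (mine, not from the book): a pathological "scalar
# multiplication" that sends everything to the zero matrix.

def bogus_smul(c, A):
    """Silly rule: c*A is the zero matrix, no matter what c is."""
    return tuple(tuple(0 for _ in row) for row in A)

A = ((1, 2), (3, 4))

# Several axioms still hold trivially, e.g. (st)A = s(tA): both sides are 0.
assert bogus_smul(6, A) == bogus_smul(2, bogus_smul(3, A))

# ...but the unit axiom fails, which is exactly what keeps this out:
print(bogus_smul(1, A) == A)  # False
```

Without the 1*A = A axiom, nothing else in the list would catch this definition.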
>multiply s(A) and show that:
>is also in M.
OK
>(st)(A) = (s(tA))
This has to be verified; you haven't done it yet, but yes, it's pretty trivial too.
>And, there's the eight, right?
Um, you can't have copied the axioms right. There are different ways to present vector spaces, but there have to be axioms which combine scalar multiplication with both additions (of numbers and of vectors). Something like: for all numbers c, d and all vectors v we have (c+d) v = (c v) + (d v), and another one. (P.S. Don't neglect the quantifiers "for all ..."; many people do, and I think it only adds to the confusion.)
You will probably need to copy your book's set of axioms to the newsgroup since it looks like they're a little different from what I think is most common. (I don't suppose it says something like, "We say that (V, +, *, 0) is a vector space if ... ", does it?)
>> What is its dimension?
>
> N, I guess, since it has N rows.
NO! Find a basis and count how many elements are in that basis. Of course, this will probably get us into a discussion of what a basis is...
> So, in homogenous form, there is a set of N equations each with
> N variables, right?
This "homogeneous form" stuff which you've mentioned before does not really apply. You must be thinking of some specific applications of vector spaces.
>>(b) Prove that the map f(x) = x^t is a linear transformation from M to M
Apologies; I did not make this notation clear. Putting a little "t" northeast of the name of a matrix means to take its transpose. Maybe your book writes x' for this, or uses some other notation.
> For any matrix A in set M (same A as above), f(x) = x^t maps every
> element aij in A to bij = (aij)^2 in matrix B.
Ack! Ptui! I hope first of all that you don't think that this is how you square a matrix. Second, neither the matrix-squaring map f(x) = x^2, nor the "Kronecker-square" map which you just described, is a linear transformation! (New exercise: prove this!)
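To see the failure concretely, here is a toy Python counterexample (my own 2x2 matrices) showing that the entrywise-squaring map you described is not additive, so it cannot be linear.

```python
# Toy counterexample (mine): entrywise squaring is not additive.

def madd(A, B):
    """Entrywise matrix addition."""
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def entrywise_square(A):
    """b_ij = (a_ij)^2 -- the map described in the quoted text."""
    return tuple(tuple(x * x for x in row) for row in A)

A = ((1, 0), (0, 0))
B = ((1, 0), (0, 0))

lhs = entrywise_square(madd(A, B))                    # entries (a+b)^2
rhs = madd(entrywise_square(A), entrywise_square(B))  # entries a^2 + b^2
print(lhs == rhs)  # False: (1+1)^2 = 4, but 1^2 + 1^2 = 2
```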
[wildly bogus proof deleted]
>>What is its kernel?
Try again for the transpose map. Incidentally, if you apply the definition of the kernel to the (nonlinear) map f(x) = x^2 you get nontrivial things in the "kernel" when n > 1.
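Here is a quick Python sketch (my own 2x2 example) of such a nontrivial element for n = 2: a nonzero matrix N whose square is the zero matrix, so the (nonlinear) squaring map sends it to zero.

```python
# Toy example (mine): a nonzero 2x2 matrix N with N*N = 0, so f(x) = x^2
# sends N to the zero matrix even though N itself is nonzero.

def mmul(A, B):
    """Plain matrix multiplication for square tuples-of-tuples."""
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

N = ((0, 1), (0, 0))
Z = ((0, 0), (0, 0))

print(mmul(N, N) == Z)  # True: N is "in the kernel" of squaring, yet N != Z
```

The transpose map, by contrast, sends only the zero matrix to zero.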
>> (c) Compute the eigenvalues of f and find the eigenspaces.
>
> I don't know how this would work without actual numbers and an actual
> matrix. Since the example is an nXn matrix, we don't know how to
> calculate the determinant. We need det(A - lambda*I) to find the
> eigenvalues and eigenvectors.
NO! You yourself sort of gave the definition of eigenvectors in an earlier post. You did not mention determinants, nor should you have!
I will give a hint: What is f o f ? (That is, what happens if you apply f twice in a row?)
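A tiny Python sketch of the hint (my own 2x2 example, with a hand-rolled transpose); try it on a few matrices and see what applying f twice does.

```python
# Toy check of the hint (mine): apply the transpose map twice.

def transpose(A):
    """f(x) = x^t for a square tuple-of-tuples matrix."""
    n = len(A)
    return tuple(tuple(A[j][i] for j in range(n)) for i in range(n))

A = ((1, 2), (3, 4))
print(transpose(transpose(A)) == A)  # True: f o f returns A
```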
dave

