
Re: Failing Linear Algebra:
Posted:
Apr 27, 2004 8:24 PM


On 27 Apr 2004 12:13:44 -0700, Anonymous wrote:
>rusin@vesuvius.math.niu.edu (Dave Rusin) wrote in message news:<c69sdb$oi7$1@news.math.niu.edu>...
>> In article <c98b1ba0.0404221444.4623535e@posting.google.com>,
>> Anonymous wrote:
>> >I'm currently a math major and am taking linear algebra, but I'm in
>> >serious danger of failing. I just don't get it! Is this newsgroup a
>> >place to come to ask questions and get information about learning
>> >math? Or is there somewhere more appropriate to go? I've always had
>> >trouble with vectors, and I think I fell apart sort of right at the
>> >beginning of linear algebra (although, I did manage to get a B on the
>> >very first exam). I've got another exam next week. What can I do? I
>> >don't get all the terms, concepts, and jargon. Anyone know how to
>> >make learning linear algebra easier and more practical? Anyone got
>> >any practice problems?
>>
>> I would bet that the single practice problem you need to work on is,
>> "What is a vector space?"
>>
>> Our students "do well on the very first exam" because that's the part
>> of the course where we warm up with techniques for solving linear
>> systems of equations and such topics. But our LA course is also our
>> students' first course in which abstractions, axioms, and proofs play
>> a significant role. They often stumble because (among other problems)
>> they don't realize they need to _memorize_ definitions _precisely_.
>> So you can do a little self-assessment here to figure out whether
>> what you're missing is bits of topics or the core idea: can you,
>> right this minute, define what a vector space is?
>>
>> dave
>
>Dave, this may be my problem. I did decent on the first exam because,
>like you said, it was solving linear systems, echelon form, linear
>dependence -- easier stuff like that. I've always been a little
>confused with the concepts, but I think I may finally be getting a
>grip on what a vector space is:
>
>It's a group of vectors that can be multiplied by any scalar and/or
>added together in any way,
Not just any way -- there are precise conditions that the operations must satisfy. Look for the axiomatic definition in your book(s). It's not at all difficult to memorize. Do it.
>and whatever possible combinations that can
>result is the "vector space" for that group of vectors. This is how I
>understand it. For vectors in R^2, a plane is formed ("spanned"???)
>by the vector space. For vectors in R^3, a solid area is formed by
>the vector space. It gets difficult for me to move into dimension 4.
Geometric examples are often helpful, but use them to *liberate* yourself, not tie yourself down. If you are faced with a vector in R^4 on your exam, don't bother trying to visualize it. You can do everything you need to do algebraically.
Personally, I keep a few geometric examples from R^2 in my head, and don't bother with anything more complicated unless the problem happens to be specifically about geometry, which won't be the case on your exam this week. E.g. the matrix for a 90-degree rotation in R^2, in the usual basis, is useful to remember -- or even better, you might learn how to derive it quickly on the fly -- because if (say) you find yourself confused about how basis vectors transform, you have a ready example to check out and remind yourself. Other 2x2 matrices for simple rotations and reflections (and perhaps more) are good to know -- not for their own sakes, but because they help you think clearly about the abstractions (and only if they truly do).

>While I understand that the same concepts hold, there's no more
>physical picture I can use to visualize what's happening. Is my
>understanding of "vector space" sufficient enough? Am I missing
>anything?
The algebra is the important thing, not the picture. And your current understanding of the algebra is insufficient to keep you from getting confused on the upcoming test. You need to learn the definitions precisely.
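If it helps to see the axioms doing work rather than sitting on the page, here is a small sanity check of my own (not from any textbook) that verifies several of them for R^2 with the usual addition and scalar multiplication:

```python
# Vectors in R^2 as pairs, with the usual operations.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    return (c * u[0], c * u[1])

u, v, w = (1, 2), (3, -1), (0, 5)
zero = (0, 0)

# A few of the vector space axioms, checked on sample vectors:
assert add(u, v) == add(v, u)                                # commutativity
assert add(add(u, v), w) == add(u, add(v, w))                # associativity
assert add(u, zero) == u                                     # additive identity
assert add(u, scale(-1, u)) == zero                          # additive inverse
assert scale(2, add(u, v)) == add(scale(2, u), scale(2, v))  # distributivity
assert scale(1, u) == u                                      # scalar identity
```

Of course, passing checks on sample vectors is not a proof -- the axioms must hold for *all* vectors and scalars -- but running through them concretely like this is a decent way to fix the list in your memory.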
>
>I know I am still struggling with the concepts of "span" and "basis".
>The weird thing is that I'm alright with the more advanced stuff;
>matrices, determinants, eigenvalues, eigenvectors. I'm a little hazy
>with diagonalization because it's the newest thing we've done. I know
>it's got something to do with the eigenvalues of a special type of
>matrix.
>
>I guess I could also really use some help with understanding how a
>mapping gets converted into a matrix, and then how to solve it.
I like the term "linear transformation" and I think you should use it too; a "mapping" usually means something more general that may not have the necessary restrictions. Definition time again! Do you know what restrictions I'm talking about? f(a+b) = f(a)+f(b) and f(ca) = cf(a) of course. That, by definition, is what makes the mapping *linear*.
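To make those two conditions concrete, here is a throwaway example of my own: f(x, y) = (2x + y, x) satisfies both conditions on sample inputs, while g(x, y) = (x + 1, y) fails additivity (the "+1" makes it affine, not linear):

```python
def f(v):
    # A linear map: f(x, y) = (2x + y, x)
    x, y = v
    return (2 * x + y, x)

def g(v):
    # NOT linear: the "+1" breaks f(a+b) = f(a) + f(b)
    x, y = v
    return (x + 1, y)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    return (c * u[0], c * u[1])

a, b, c = (1, 2), (3, -1), 5

# f satisfies both linearity conditions on these samples:
assert f(add(a, b)) == add(f(a), f(b))   # f(a+b) = f(a) + f(b)
assert f(scale(c, a)) == scale(c, f(a))  # f(ca) = c f(a)

# g fails additivity:
assert g(add(a, b)) != add(g(a), g(b))
```

A useful quick test that follows from the definition: every linear map must send the zero vector to the zero vector, so g fails before you even check the two conditions.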
Anyhow, you ask about how a linear transformation is converted into a matrix. The l.t. is the abstract thing; the matrix is one particular representation of it (in terms of two bases). Change one or both of the bases, and you get a different matrix for the same linear transformation. It's somewhat analogous to the way you can write the same abstract number (say, the cube of two) as 8 in decimal, or 10 in octal, or 1000 in binary. But the situation is better than that, I think, because all of the important features of the linear transformation have counterparts in matrix theory; i.e. whatever matrix you end up using, it will share some important features with the transformation, e.g. it will have the same eigenvalues, same rank, etc.
(So, one can say that the set of m x n matrices is isomorphic to the set of linear transformations from F^n to F^m. Indeed this is an isomorphism in the strict sense I give below, because both these sets are vector spaces in their own right! But don't worry about that if you find it confusing.)
In answer to your question, you aren't *really* converting the transformation into a matrix -- they are two different things -- but for most practical purposes you can conveniently ignore that fine distinction.
OTOH if what you're asking is how (by what method) to do this conversion, that is actually quite easy. Take basis vector #1, apply the transformation to it, and write the components of the transformed vector in a column. Do the same with basis vector #2, writing its components as a column to the right of the one you already wrote. And so on, until you're done. There's your matrix. Try working it out for the 90-degree rotation I mentioned above, and then try your matrix out on some 2D column vectors to see if they really do turn 90 degrees when you multiply by the matrix.
(I am assuming you multiply with matrix on the left and column vector on the right, as is done in Schaum's outline; hopefully that is how your prof does it too, otherwise swap positions and take the transposes!)
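Here is that recipe carried out for the 90-degree rotation, as a sketch of my own (using the matrix-on-the-left convention just mentioned): apply the transformation to each standard basis vector, write the results as columns, and then test the resulting matrix on a sample vector.

```python
def rotate90(v):
    # The abstract transformation: rotate (x, y) by 90 degrees counterclockwise.
    x, y = v
    return (-y, x)

# Build the matrix column by column: column j is the transform of basis vector #j.
e1, e2 = (1, 0), (0, 1)
col1, col2 = rotate90(e1), rotate90(e2)   # (0, 1) and (-1, 0)
A = [[col1[0], col2[0]],
     [col1[1], col2[1]]]                  # [[0, -1], [1, 0]]

def matvec(M, v):
    # Matrix on the left, column vector on the right.
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

# The matrix agrees with the abstract transformation on a sample vector:
v = (3, 2)
assert matvec(A, v) == rotate90(v)        # both give (-2, 3)
```

The point of the exercise: the matrix and the transformation are different objects, but multiplying by the matrix reproduces the transformation exactly, which is what "representation" means here.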
>I
>understand matrix multiplication and can do it well. But the concepts
>of image, kernel, and isomorphism and how they relate to the
>mappings/matrices seem to be lost on me. The odd thing is that I
>fully understand the definitions of "kernel" and "image" as they were
>applied in algebraic structures, but I don't get how they apply to
>linear really. "Isomorphism" is a concept I never understood in
>algebraic structures or linear algebra.
If you get it for algebraic structures in general, you are ahead of the game. In the context of linear algebra, an isomorphism is simply a linear transformation (remember the definition?) that has an inverse. Kernel and image have their usual meaning.
(Isomorphisms come up a lot in math; see above. In general they tell you when it's OK to substitute one structure for another, without losing any essential information.)
One nice thing about linear algebra is that once you have a matrix for a linear transformation, you can answer questions about the l.t.'s kernel and image very easily, simply by manipulating the matrix, without having to wrestle with the abstraction. If the matrix is square and its columns (equivalently, its rows) are linearly independent, then the kernel is {0}, the matrix is invertible, and you have an isomorphism. Otherwise, one or more rows or columns is a linear combination of the others; and that means there are nonzero vectors which, when multiplied by your matrix, give zero. In other words, those vectors are members of the kernel of your linear transformation. See how it all fits together?
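To see that fit concretely, take a 2x2 matrix whose second row is twice the first (a throwaway example of mine). The rows are dependent, so the matrix cannot be invertible, and sure enough there is a nonzero vector it sends to zero, i.e. a nonzero member of the kernel:

```python
# Rows are linearly dependent: row 2 = 2 * row 1.
A = [[1, 2],
     [2, 4]]

def matvec(M, v):
    # Matrix on the left, column vector on the right.
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

# The determinant is zero, so A is not invertible...
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det == 0

# ...and indeed the nonzero vector (2, -1) lies in the kernel:
v = (2, -1)
assert v != (0, 0)
assert matvec(A, v) == (0, 0)
```

Since (2, -1) is nonzero and maps to zero, the transformation represented by A is not one-to-one, hence not an isomorphism -- exactly the chain of reasoning above, run on one tiny example.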
Btw you asked elsewhere what an eigenvalue is. The definition is simplicity itself -- learn it! If your question is really what they are good for, one answer is that they come up a lot in differential equations, a part of math that linear algebra has a lot to say about.
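And once you know the definition (Av = kv for some nonzero vector v and scalar k), checking a claimed eigenvalue is purely mechanical. For the matrix [[2, 1], [1, 2]] (my own example), the characteristic polynomial (2 - k)^2 - 1 = 0 gives k = 1 and k = 3:

```python
A = [[2, 1],
     [1, 2]]

def matvec(M, v):
    # Matrix on the left, column vector on the right.
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

# Eigenvalue 3 with eigenvector (1, 1): A v = 3 v.
v = (1, 1)
assert matvec(A, v) == (3 * v[0], 3 * v[1])

# Eigenvalue 1 with eigenvector (1, -1): A w = 1 * w.
w = (1, -1)
assert matvec(A, w) == (1 * w[0], 1 * w[1])
```

Notice that the check needs nothing beyond matrix-times-vector; that is why "learn the definition" is such practical advice here.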
>
>Thanks to the people who posted book suggestions, but I'm hesitant
>about buying any other books. I already bought the text, the Cliff's
>Notes guide to linear algebra, a 2003 version of the Schaum's outline,
>and I even have an old 1968 version of Schaum's that my grandmother
>used when she majored in math. Cliff's has been helpful, but too
>basic. Schaum's seems almost too advanced; it's great that they solve
>all the problems, but sometimes the explanations are lacking. I find
>that I do much better at math problems if I can first figure out how
>to solve a certain type of problem and then go back and try to
>understand the concepts behind it, rather than the other way around.
>Schaum's examples don't allow for this, because they assume you've
>already read (and understood) the concepts behind how to solve certain
>problems.
Yes, I think you have enough books. Personally, I like the Schaum's Outline (the standard one by Lipschutz) a lot, by the way. As one who easily gets confused myself, I can say it was the book that made the quickest sense to me, i.e. had the smallest eyes-glaze-over effect. OTOH, I did have the benefit of having read and worked on some other, more abstract books beforehand. Btw I found nothing objectionable about Lipschutz from an abstract point of view; if he made it any simpler he would mislead, but he treads that fine line very nicely IMHO.
>
>Maybe, if it isn't too much to ask, would anyone here be willing to
>post some problems relating to mappings/kernel/image/isomorphims
>and/or eigenvalues/eigenvectors, and I can attempt to solve them with
>your help?
Learn the definitions first, vet them with some examples and counterexamples as some others have recommended, and then focus on learning some important techniques cold, and with luck you can ace the test (and be a math major too, and even be happy about it someday). Usenet will probably be too slow for your needs this week; who wants to type in those matrices in ASCII anyway?

