
Re: Failing Linear Algebra:
Posted:
Apr 29, 2004 6:02 PM


On 29 Apr 2004 20:15:19 GMT, Anonymous wrote:
>Russell:
>
>>Think basis vectors. What is the basis we are using here?
>>
>>>Theta(pi/2) of (0,1) becomes (1,0).
>>
>>I think you've rotated in the wrong direction;
>
>Oops. I didn't rotate in the wrong direction. I had a total brain fart and
>forgot which point comes first. Right...what I meant to say was that (1,0)
>rotates to (0,1).
Correct. And since (1,0) is your first basis vector, you can write that result (0,1) as a column and that is the first column of the 2x2 matrix for the transformation, according to your basis.
>>The vector (0,1) transforms to (-1,0);
>
>Right. Stupid mistake.
>
>Still, I don't get how this result is written in matrix form. How would the
>mapping theta(pi/2) be represented in matrix and basis form?
I would say in matrix form. But yes the basis does (in part) determine what the matrix will be. However, I see you are very confused about all of this, so read on.
>>Since this y vector was our second basis vector, we
>>now have the *second* column of our matrix. But we still need to find
>>the first column.
>
>OK, so this, I guess:
>
>1 0
>0 1
>1 0
>0 1
Why four rows? The column is supposed to be the *result*, not starting value together with result. Also I don't see where your bottom two rows came from; they bear no relationship that I can see to the correct results you calculated above.
You stated two results: first, way up above, you got (0,1) for the rotation of one basis vector, and then later you got (-1,0) for the rotation of the other basis vector. Now the magical technique is just to write those two results as columns:

0 -1
1  0

and that is your matrix.
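If it helps to see the column technique mechanically, here is a short Python sketch (my own illustration, not something from your course): apply the rotation to each standard basis vector, and write each result down as a column of the matrix.

```python
# Sketch of the column technique: rotate each standard basis vector,
# and the results become the columns of the matrix.
import math

def rotate(v, angle):
    """Rotate a 2D vector counterclockwise by `angle` radians."""
    x, y = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

theta = math.pi / 2
col1 = rotate((1, 0), theta)   # image of the first basis vector: (0, 1)
col2 = rotate((0, 1), theta)   # image of the second basis vector: (-1, 0)

# Write the two results as columns (rounding away floating-point dust):
matrix = [[round(col1[0]), round(col2[0])],
          [round(col1[1]), round(col2[1])]]
print(matrix)   # [[0, -1], [1, 0]]
```

The first row of the matrix collects the first coordinates of the two results, and the second row collects the second coordinates; that is all "write them as columns" means.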
>But, the second two are just multiples of the first two. So, they're
>unnecessary in a basis, right? So, the basis would just be: {(1,0), (0,1)} for
>a 90 degree rotation?
Oh dear, no. A rotation does not have a basis -- a *vector space* has a basis!
What we're doing here is finding the matrix for the rotation, and yes the matrix for a transformation does depend on what bases you are using for the respective vector spaces (which here are the same, R^2) but they are not bases of the transformation, and they are not bases of the matrix.
In this context you should not think of those rows and/or columns in the matrix as vectors themselves. (That may be appropriate elsewhere, but here it isn't a helpful view at all.) A matrix here is not some sort of list of vectors.
>Question: if the rotation was by (pi/4) instead, would the basis have to be:
>{(1,0), (radical 2/2, radical 2/2), (0,1), (-radical 2/2, radical 2/2)}?
No, we can use the same basis as before; the basis is a property of the vector space, not of the rotation. I see that in part you grasp this, because indeed you have (1,0) and (0,1) as before. But those basis vectors should not appear *themselves* in your matrix. Also it makes no sense to write out a list; our goal is to create a matrix, which is a positional array, not a list. Instead, take your two (correct, yay!) results and write them as columns:

sqrt(2)/2  -sqrt(2)/2
sqrt(2)/2   sqrt(2)/2

and voila, that's your matrix. You can verify that it works correctly by multiplying some column vectors by it, and seeing if that rotates them by pi/4.
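That verification is easy to carry out by hand, but here is a small Python sketch (my own, not from this thread) that multiplies a column vector by the pi/4 matrix and confirms the result really is the input rotated by 45 degrees.

```python
# Check the pi/4 rotation matrix by applying it to a vector and
# measuring the angle between input and output.
import math

r = math.sqrt(2) / 2
M = [[r, -r],
     [r,  r]]   # columns: images of (1,0) and (0,1) under a pi/4 rotation

def apply(M, v):
    """Multiply a 2x2 matrix by a column vector (x, y)."""
    x, y = v
    return (M[0][0] * x + M[0][1] * y,
            M[1][0] * x + M[1][1] * y)

v = (1, 0)
w = apply(M, v)
angle = math.atan2(w[1], w[0]) - math.atan2(v[1], v[0])
print(math.degrees(angle))   # approximately 45 degrees
```

Try it with other starting vectors too; the angle between input and output should come out the same every time, which is exactly what "the matrix represents the rotation" means.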
>Another area where I'm having some problems is with projection mappings and
>matrices.
>
>>Hopefully you have
>>worked the matrix out in your head by now, so I'll write it down:
>>
>> 0 -1
>> 1  0
>>
>>That is, the first column is a unit in the y direction, and the second
>>column is a unit in the -x direction.
>
>Why would the first x-value be negative though?
Because, as you said, your basis vector (0,1) rotated to (-1,0). So -1 is in the first row of the second column, and 0 is in the second row of the second column.
By the way, I called this a magical technique but there's really no magic; you should be able to prove why the technique works. However, no need to do that quite yet, till you get over some of your basic confusions.
>>Let's see if this matrix
>>works for some arbitrary vector, say, (2,1). Express that vector as a
>>column vector, multiply by our matrix, and you get the answer (-1,2)
>>expressed as a column vector, right?
>
>0 -1
>1  0
>
>times (2, 1) =
>
>(0*2 - 1*1, 1*2 + 0*1) = (-1, 2). Got it!
>
>So this 90 degree rotation basis has to work for ANY unit vector then, no
>matter the direction it points in?
Not just unit vectors, *any* vector in R^2. That's the point. A linear transformation takes any vector in the space to a particular vector in the image space; if you want your matrix to express this transformation, it had better work for any vector.
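Here is a quick sanity check, again just my own sketch: the matrix above rotates *any* vector in R^2 by 90 degrees, whatever its length, which you can see by checking that the output always has the same length as the input and is perpendicular to it.

```python
# Applying the matrix [[0, -1], [1, 0]] to a column vector (x, y)
# gives (-y, x); check length preservation and perpendicularity.
import math

def apply_rot90(v):
    x, y = v
    return (-y, x)

for v in [(2, 1), (3, -4), (0.5, 0.25)]:
    w = apply_rot90(v)
    # length is preserved...
    assert math.isclose(math.hypot(*v), math.hypot(*w))
    # ...and the dot product is zero, so w is perpendicular to v
    assert math.isclose(v[0] * w[0] + v[1] * w[1], 0, abs_tol=1e-12)
print("works for vectors of any length")
```

In particular (2,1) goes to (-1,2), matching the hand calculation above.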
>>Plot the two points (2,1) and
>>(-1,2) on some graph paper and draw in the lines from the origin; what
>>angle do you see?
>
>Yup, I can visualize that. They're 90 degrees apart. Wait...something else
>just hit me. (-1,2) isn't a unit vector. How come our 90 degree rotation
>basis works anyways? Would it work for a vector of any length?
There you go again; rotations don't have a basis. You need to get over this confusion fast. Yes it works for vectors of any length.
The way that basis comes in is more subtle. If we were using a basis other than (1,0),(0,1) then the vector which we had written here as (2,1) would have different coordinates, and so would the vector (1,2). The transformation (i.e. the abstract mapping) doesn't care what coordinates you give your vectors; all it cares about is that it takes arrow A on your graph paper to arrow B rotated 90 degrees. So if you've given your two arrows different coordinates (by changing your basis) you also need to change your matrix so that arrow A still transforms to arrow B via the matrix multiplication; if not, you will not be describing the same transformation.
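To make the basis dependence concrete, here is a sketch (my own illustration; the helper names and the choice of alternative basis are mine) of the standard recipe: if the columns of P are the new basis vectors written in old coordinates, the same rotation gets the matrix P^(-1) A P in the new basis.

```python
# Same transformation, different basis: compute P^(-1) * A * P.

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse2(M):
    """Invert a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[0, -1], [1, 0]]     # the 90-degree rotation in the standard basis
P = [[1, 1], [0, 1]]      # columns: a different basis for R^2
B = matmul(inverse2(P), matmul(A, P))   # same rotation, new coordinates
print(B)   # [[-1.0, -2.0], [1.0, 1.0]]
```

The entries of B look nothing like a rotation matrix, yet B still has trace 0 and determinant 1, and it takes the *new* coordinates of arrow A to the new coordinates of arrow B. The arrows never moved; only their names changed.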
>>In fact
>>one of the things you may be asked to do is, given a matrix in one
>>basis, compute the matrix for the same linear transformation in a
>>different basis. I won't go into that here; it's in all of your books
>>and hopefully you have enough insight now to be able to understand
>>what they are talking about, when you read the relevant passages.
>
>I think I just learned how to do that yesterday. You convert the matrix into
>the identity matrix, and whatever has happened to the original identity matrix
>as a result of the conversions equals the inverse of the original matrix.
>I.e., you convert A into A^(-1) using I. You can check that A*A^(-1) = I, and
>then you know you're right.
The matrix that you need to invert here, to change bases, is *not* the matrix for the linear transformation. It's a different matrix -- the matrix of transition.
(That said, yes, you've stated a good technique for calculating the inverse of a matrix in general; you may well have to do that on your exam. But that's in the realm of calculation, and we should probably be focusing on just the abstractions here, to avoid getting sidetracked.)
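For the record, the technique you described is Gauss-Jordan elimination on the augmented matrix [A | I]; here is a minimal sketch of it (my own illustration, simplified -- a real implementation would pick the largest available pivot for numerical stability).

```python
# Gauss-Jordan inversion: row-reduce [A | I] until the left half
# becomes I; the right half is then A^(-1).

def invert(A):
    n = len(A)
    # augment A with the identity matrix
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # find a row with a nonzero entry in this column and swap it up
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row so the pivot entry becomes 1
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # eliminate this column from every other row
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # right half is the inverse (adding 0.0 normalizes any -0.0 entries)
    return [[x + 0.0 for x in row[n:]] for row in M]

A = [[0.0, -1.0], [1.0, 0.0]]
print(invert(A))   # [[0.0, 1.0], [-1.0, 0.0]] -- rotation by -90 degrees
```

Note the inverse of the 90-degree rotation came out as the rotation by minus 90 degrees, which is a nice sanity check: undoing a rotation is rotating the other way.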
>With A^(-1), you multiply one side of the standard basis by A^(-1) and the
>other side by A, and that gives you basis alpha in terms of basis beta.
No, there you go again. The thing in the middle is *not* a basis, it's a matrix. It's the matrix we've been talking about all along here, the matrix for the transformation.
>Right? I'm a little hazy here, I think. But do I have the general idea down?
You're making some progress, but remember -- the general idea is not good enough. In math, a miss is as bad as a mile.

