
Re: Failing Linear Algebra:
Posted:
Apr 30, 2004 11:32 PM


On 01 May 2004 01:31:35 GMT, Anonymous wrote:
>Russell:
>
>>Another thing I didn't mention is that it's arbitrary which variables
>>you choose to be free, and which "basic" as you call them. That is,
>>you could rewrite the same equations with (say) the terms for x6 in
>>the first column and those for x1 in the 6th column. Then x6 would be
>>basic and x1 would be free; your results will look different
>>numerically but they will turn out to work equally well in the
>>equations.
>
>Right. I understand why this is. Just as if you have x+y = 10, you can choose
>to vary x OR y. The standard we've been using in class is to work left to
>right, either in alphabetical order or numerical order. So, I think it's
>obvious to most of us that x6 *could* be listed as a free variable while x1 is
>listed as basic, but what's the point in rearranging the terms within the
>equation? It's just something we've never bothered doing. But, like I said, I
>think it's fairly obvious to everyone in my class that it doesn't matter which
>variables are free and which are basic.
>
[snippage by Anonymous]
>>Basis of the *system*? That is very confused terminology. Drum into
>>your head the idea that a basis is something that applies to a *vector
>>space*. And nothing else. (in linear algebra, that is.)
>
>Maybe I meant to say *span* of the system?
>
[snippage by Anonymous]
>>Get used to thinking about matrices etc as abstract objects
>>in their own right, not just some weird technique for solving systems
>>of equations.
>
>Right. I think I understand this. My professor explained that the matrix
>notation is really just a handy way of organizing the notation and keeping the
>terms all aligned.
But I think you missed my point. My point (part of which you snipped) was that it isn't *just* that. Yes, it *is* "just that" in the application of solving simultaneous equations, but that is by no means the only application of linear algebra or of matrices in particular. Remember when we rotated one line on the graph paper to another line? To me *that* is the classic application of matrices. But probably I'm showing my bias here; in any case there's way more than one application.
[snippage]
>
>>Hint: in your example below, which very important vector is not in the
>>set of solutions?
>
>The 0 vector?
Yes.
>
>>Anyhow, your vector doesn't work in a span; try multiplying it by
>>2 and see if it satisfies your system. You need to subtract the
>>constants out (make it homogeneous) so I think you should have
>>(6, 0, 1, 3, 1, 0, 0).
>
>OK, right. I think the technique I used can only be used in a homogeneous
>system.
Well, not exactly; you use it on the homogeneous system (to get the kernel), and then you add in the inhomogeneous part at the end, after you're done, to get the solution of the inhomogeneous equation.
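If you want to see that recipe in action numerically, here is a quick Python/NumPy sketch. Caveat: the signs in the matrix, and the vectors k2 and k3, are my reconstruction from the row-echelon form; only k1 and the particular solution appear explicitly in this thread.

```python
import numpy as np

# Row-echelon matrix of the system (signs reconstructed so that the
# kernel vectors below really map to zero).
A = np.array([
    [1, 0, 0, 0, -6,  0, -7],
    [0, 1, 0, 0,  0, -4,  0],
    [0, 0, 1, 0, -1, -8,  0],
    [0, 0, 0, 1, -3,  0, -1],
])
b = np.array([5, 8, 1, 10])              # inhomogeneous right-hand side

x_p = np.array([5, 8, 1, 10, 0, 0, 0])   # particular solution
k1 = np.array([6, 0, 1, 3, 1, 0, 0])     # kernel vector given in the thread
k2 = np.array([0, 4, 8, 0, 0, 1, 0])     # reconstructed
k3 = np.array([7, 0, 0, 1, 0, 0, 1])     # reconstructed

# Solve the homogeneous system first (that gives the k's), then add in
# the inhomogeneous part: every such x satisfies A x = b.
rng = np.random.default_rng(0)
for _ in range(100):
    c1, c2, c3 = rng.integers(-10, 10, size=3)
    x = x_p + c1 * k1 + c2 * k2 + c3 * k3
    assert np.array_equal(A @ x, b)
print("x_p plus any kernel combination solves A x = b")
```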
>
>>My vectors are a basis of the kernel, not of the solution set
>>(which has no basis). Yours are not a basis of any space relevant to
>>the problem, AFAICS.
>
>OK, mine don't work because my system wasn't homogeneous. I'm still not really
>sure *why* your three vectors would be a basis for the kernel though. Their
>span would produce all equations that equal 0? Is that where it comes from?
Yes. (Assuming you mean all equations with the given matrix.) Try a linear combination of my basis vectors and see if you can find an exception. If so, I did something wrong, but I believe they are right. Of course one linear combination is the zero vector, so you see right away that this technique can never work with an inhomogeneous equation.
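Here's that experiment done by machine, a Python/NumPy sketch (with the caveat that the matrix signs and the vectors k2, k3 are my reconstruction from the row-echelon form, since the earlier post got snipped):

```python
import numpy as np

A = np.array([
    [1, 0, 0, 0, -6,  0, -7],
    [0, 1, 0, 0,  0, -4,  0],
    [0, 0, 1, 0, -1, -8,  0],
    [0, 0, 0, 1, -3,  0, -1],
])
K = np.array([
    [6, 0, 1, 3, 1, 0, 0],   # k1, as given in the thread
    [0, 4, 8, 0, 0, 1, 0],   # k2, reconstructed
    [7, 0, 0, 1, 0, 0, 1],   # k3, reconstructed
])

# Try lots of linear combinations c1*k1 + c2*k2 + c3*k3 and look for
# an exception: a combination that does NOT map to (0,0,0,0).
rng = np.random.default_rng(1)
for _ in range(1000):
    c = rng.integers(-20, 20, size=3)
    assert not np.any(A @ (c @ K))   # no exceptions found

# One combination (all coefficients zero) is the zero vector, so this
# trick can never produce a solution of an inhomogeneous equation:
b = np.array([5, 8, 1, 10])
assert np.any(A @ np.zeros(7) != b)
print("every combination maps to zero")
```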
>
>>The solution set is the set of all vectors of form (5,8,1,10,0,0,0)+V
>>where V is a member of the span of my three vectors.
>
>OK, and that solution is the entire vector space, right?
No, why would you think that? It takes a minimum of 7 vectors to span your whole space, and we only have three (plus one additional constant vector that we don't multiply by a coefficient).
>So, in this case, dim Im = 4 (the values 5, 8, 1, 10) and dim Ker = the 3 zero
>values, right? 4 + 3 = 7, which makes sense because we're in R^7? Is that, in
>essence, the Dimension Theorem?
The numbers 4, 3, and 4+3=7 are right, and yes that is the Dimension Theorem, but damned if I know what you mean by these "values" you speak of. And I really mean I don't understand you there, I'm not just trying to be picky.
Here's how I would put it. (One of several ways possible, all pretty much the same.) You have a matrix

1 0 0 0 -6  0 -7
0 1 0 0  0 -4  0
0 0 1 0 -1 -8  0
0 0 0 1 -3  0 -1

which represents a linear transformation R^7 -> R^4. The rank of this matrix is 4, which means the transformation is a surjection onto R^4. (I.e. the image space *is* R^4.)

One of the vectors in the image space is (5,8,1,10). That one vector is the image of certain vectors x in R^7; those vectors are a subset of R^7. I have called this subset the solution set. There happen to be many, many vectors in this set but it is by no means all of R^7.

Another vector in the image space R^4 is (0,0,0,0) and similarly, it is the image of a certain *different* subset of vectors in R^7. (Indeed every vector in R^4 is the image of a different, mutually disjoint subset of R^7.) The subset whose image is (0,0,0,0) is called the kernel of the linear transformation, and it is a subset with a little "something extra", because it is a *subspace* of R^7, i.e. it is itself a vector space.
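All of that is easy to confirm with a computer; here's a Python/NumPy sketch (the signs in the matrix are my reconstruction from the row-echelon form, though the rank comes out 4 either way):

```python
import numpy as np

A = np.array([
    [1, 0, 0, 0, -6,  0, -7],
    [0, 1, 0, 0,  0, -4,  0],
    [0, 0, 1, 0, -1, -8,  0],
    [0, 0, 0, 1, -3,  0, -1],
])

# Rank 4 = number of rows, so the transformation R^7 -> R^4 is a
# surjection: the image space is all of R^4.
assert np.linalg.matrix_rank(A) == 4

# (5,8,1,10,0,0,0) is one of the many vectors whose image is (5,8,1,10).
x = np.array([5, 8, 1, 10, 0, 0, 0])
assert np.array_equal(A @ x, np.array([5, 8, 1, 10]))
print("rank =", np.linalg.matrix_rank(A))
```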
Looking at the kernel more closely (and doing some calculations) we find that three linearly independent vectors k1, k2, and k3 (as given in my earlier post) are in the set that maps to (0,0,0,0), and so is any linear combination of those three; but there are no others. That means, the kernel has dimension 3.
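The calculation can also be left to a computer algebra system; here's a SymPy sketch (matrix signs are my reconstruction, but the dimension count is the same either way):

```python
import sympy as sp

A = sp.Matrix([
    [1, 0, 0, 0, -6,  0, -7],
    [0, 1, 0, 0,  0, -4,  0],
    [0, 0, 1, 0, -1, -8,  0],
    [0, 0, 0, 1, -3,  0, -1],
])

ker = A.nullspace()                # a basis for the kernel
assert len(ker) == 3               # dim Ker = 3 = 7 - rank
assert A.rank() == 4
assert all(A * v == sp.zeros(4, 1) for v in ker)   # all map to (0,0,0,0)
print([v.T.tolist()[0] for v in ker])
```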
I'll go on to say something more, which I hope you won't find confusing. You have noted that my kernel's basis vectors have 7 coordinates, and indeed they are the basis vectors for a vector space, but that vector space is *not* R^7. It is a subset of R^7. Since you like to think geometrically, I will say that it is a 3-dimensional hyperplane through the origin of the domain space R^7, and let you think about what that means. Btw a much easier example is the one we worked out below, where the kernel is a 1-dimensional "hyperplane" (i.e. line) through the origin of the domain space R^2.

Since k1,k2,k3 is a basis for our kernel, an arbitrary vector a in the kernel can be written as a1k1 + a2k2 + a3k3, and so we could, if we like, set up a coordinate system in which a would be represented by the 3-tuple (a1, a2, a3). So now our kernel begins to look like a nice familiar vector space, R^3, doesn't it? Just don't confuse the coordinates according to this basis with the coordinates according to our old basis in R^7. For example, the vector (12,0,2,6,2,0,0) is a member of the kernel, and its coordinates in our basis {k1,k2,k3} are (2,0,0), and I emphasize, both the 7-tuple and the 3-tuple represent the *same* vector. If you don't see where (2,0,0) came from, look above; k1 is one of the basis vectors in my last post, the only one you didn't snip.
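Here's that change of coordinates done numerically, a NumPy sketch (k2 and k3 are my reconstruction; only k1 survived the snipping):

```python
import numpy as np

k1 = np.array([6, 0, 1, 3, 1, 0, 0])
k2 = np.array([0, 4, 8, 0, 0, 1, 0])   # reconstructed
k3 = np.array([7, 0, 0, 1, 0, 0, 1])   # reconstructed
K = np.column_stack([k1, k2, k3])      # 7x3: columns are the kernel basis

v = np.array([12, 0, 2, 6, 2, 0, 0])   # a kernel member, as a 7-tuple

# Find its coordinates (a1,a2,a3) in the basis {k1,k2,k3}: solve K a = v.
a, res, rank, _ = np.linalg.lstsq(K, v, rcond=None)
assert np.allclose(a, [2, 0, 0])       # the same vector, now a 3-tuple
assert np.allclose(K @ a, v)           # and back again
print("coordinates in {k1,k2,k3}:", np.round(a).astype(int))
```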
If you like the idea of the hyperplane, then here's something else: the solution set for (5,8,1,10) is in fact another hyperplane that is parallel to the kernel and offset from the origin by the vector (5,8,1,10,0,0,0). Those nice numbers come from the fact that your matrix is in row-echelon form; but nothing else that I have said really depends on that; kernels in *every* case pass through the origin (why?) and inhomogeneous solution sets are always hyperplanes parallel to the kernel. You will run into this again when you get to quotient spaces, consider this a warm-up.
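A quick numeric illustration of the parallel-hyperplane picture, in NumPy (the matrix signs, and hence the formulas for the basic variables, are my reconstruction from the row-echelon form):

```python
import numpy as np

A = np.array([
    [1, 0, 0, 0, -6,  0, -7],
    [0, 1, 0, 0,  0, -4,  0],
    [0, 0, 1, 0, -1, -8,  0],
    [0, 0, 0, 1, -3,  0, -1],
])
b = np.array([5, 8, 1, 10])
x_p = np.array([5, 8, 1, 10, 0, 0, 0])   # the offset from the origin

# Build another solution by choosing free variables x5=1, x6=2, x7=3 and
# reading the basic variables off the (reconstructed) row-echelon equations.
x5, x6, x7 = 1, 2, 3
x = np.array([5 + 6*x5 + 7*x7, 8 + 4*x6, 1 + x5 + 8*x6, 10 + 3*x5 + x7,
              x5, x6, x7])
assert np.array_equal(A @ x, b)

# The difference of any two solutions lies in the kernel: the solution
# set is a hyperplane parallel to the kernel, offset by x_p.
assert not np.any(A @ (x - x_p))
print("x - x_p lies in the kernel")
```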
>
>>With three unconstrained variables that you can set to anything, the
>>point is that you have a whole 3-dimensional space of vectors that
>>will "hold" in your homogeneous equation. That's a lot more than what
>>you say, with the free variables all set to zero (but yes that does
>>happen to be a solution too).
>
>So, no matter what those last three free variables are, the system of equations
>still holds. But, in algebraic structures, the kernel was the set of all
>things that map to zero. Doesn't the dimension of three imply that there are
>three things, or three variables, that map to 0?
See above. It means that there is a three-dimensional vector space just brimming with vectors, all of which map to 0. A 3D hyperplane in R^7, if you like that idea.
>
>>You tell me. How many equations do you have in how many unknowns?
>
>One equation, two unknowns.
>
>>How many rows will there be in your matrix if you write this as a
>>matrix equation, and how many columns?
>
>One row, two columns.
Right. So I think you followed me in the calculation, right?
>
>>Not sure what to say here, are you bothered by abstract notation like
>>{e_1, e_2, e_3} being a basis for some 3-dimensional vector space?
>
>I think I'm OK with that notation, because I understand that to be the
>{(1,0,0), (0,1,0), (0,0,1)} form, so that e1 is (1,0,0), e2 is (0,1,0), and e3
>is (0,0,1), and together they form the "standard" basis for R^3. If I'm right
>about everything I've just said, then I'm fine with that "e" notation for the
>standard bases.
>
>>Or
>>is the problem that you can't seem to grasp bases in R^n other than
>>the standard one?
>
>Right. That's the problem. I'm just getting how to change from one basis to
>another, but I really don't understand how bases other than the standard ones
>are useful.
See above for an example. We could not use a subset of the standard basis of R^7 as a basis for our kernel; for example none of the vectors (0,0,0,0,1,0,0), (0,0,0,0,0,1,0), (0,0,0,0,0,0,1) is a solution to our homogeneous equation. So we needed a weird set {k1,k2,k3} instead. But it turns out, they span a vector space, don't they?
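One last machine check, in NumPy (matrix signs reconstructed from the row-echelon form): the tail of the standard basis fails, while the "weird" vector k1 from earlier in the thread works.

```python
import numpy as np

A = np.array([
    [1, 0, 0, 0, -6,  0, -7],
    [0, 1, 0, 0,  0, -4,  0],
    [0, 0, 1, 0, -1, -8,  0],
    [0, 0, 0, 1, -3,  0, -1],
])

# None of e5, e6, e7 solves the homogeneous equation...
for i in (4, 5, 6):                 # 0-indexed positions of e5, e6, e7
    e = np.zeros(7)
    e[i] = 1
    assert np.any(A @ e)            # A e_i != 0, so e_i is not in the kernel

# ...but the "weird" vector k1 does.
k1 = np.array([6, 0, 1, 3, 1, 0, 0])
assert not np.any(A @ k1)
print("e5, e6, e7 are not in the kernel; k1 is")
```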

