> On Sun, Sep 2, 2012 at 7:50 AM, kirby urner <firstname.lastname@example.org> wrote:
>> Any of these repeated-addition definitions for product ab does not
>> work in a field, especially for the set of all real numbers but even
>> for the set of all rational numbers, unless we restrict the
>> multiplication such that a or b is an integer.
>>
>
> Are we going to dive into this yet again? Maybe so, why not.
>
> I'm happy to concede the "repeated addition" meme makes plenty of
> sense in many a field, thanks to the distributive property, including
> Q and R.
Let's generalize this all the way from a field to a ringoid, so we can really see what's going on. A ringoid is simply a set under two binary operations such that they are "connected" with a distributive property, one of these operations distributing over the other. *By convention*, we call the one being distributed over "addition" and the other one "multiplication".
Why do I say "*by convention*"? I say it because we could have easily called the one being distributed over "multiplication", in which case we would be talking about addition being repeated multiplication via the distributive property.
But we can already talk about addition being repeated multiplication via the distributive property. That is, are there not sets (a Boolean algebra, for instance) on which we call the two binary operations addition and multiplication and yet have two distributive properties, each of these operations distributing over the other?
In any set where the distributive property goes in both directions like this, we can talk about either operation as a "repeating" of the other.
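To make the Boolean case concrete, here is a small Python sketch (the function names are my own) verifying both distributive laws over the two-element Boolean algebra, so neither operation has a privileged claim to being the one that "repeats":

```python
from itertools import product

B = [False, True]

def AND(a, b): return a and b   # plays the role of "multiplication"
def OR(a, b):  return a or b    # plays the role of "addition"

# AND distributes over OR:
assert all(AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))
           for a, b, c in product(B, repeat=3))

# OR also distributes over AND, so the situation is symmetric:
assert all(OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c))
           for a, b, c in product(B, repeat=3))

print("both distributive laws hold")
```

Swap which operation you call "addition" and the check still passes, which is exactly the point about the naming being conventional.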
And so we see that the idea that, most generally, the operation we by convention call "multiplication" *must be* seen as a repeating of the operation we by convention call "addition" is a myth.
To underscore this last point: What if we had a set under two binary operations but no distributive property? Could we then still talk of one of these operations as a "repeating" of the other? No: with no distributive property connecting them, the "repeating" talk has nothing to stand on.
One final note on this before I go on:
If all we have is a set under two binary operations such that at least one of them distributes over the other, this "repeating" can happen on only two elements. We need an associative property for the operation being distributed over before we can talk of this "repeated" application of that operation happening as many times as we want, say n times. And we need an identity for the operation that distributes, together with one of the factors being a sum of n of these identities, so that the result via the distributive property is simply n instances of the other factor added up.
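That chain of requirements (distributivity, plus associativity of the distributed-over operation, plus an identity for the other) can be sketched in Python; `Fraction` is just my stand-in for exact field arithmetic:

```python
from fractions import Fraction

def sum_of(x, n):
    """n instances of x combined using addition alone (relies on associativity)."""
    total = Fraction(0)
    for _ in range(n):
        total = total + x
    return total

x, n = Fraction(3, 7), 5

# n written as 1 + 1 + ... + 1, using the multiplicative identity:
n_as_sum = sum_of(Fraction(1), n)

# distributivity turns the product into n instances of x added up:
assert n_as_sum * x == sum_of(x, n) == x * n
```

Remove any one of the three ingredients and the rewrite above can no longer even be stated, which is the sense in which "repeated addition" is an artifact of these properties.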
And so we see that one binary operation being a "repeated" application of the other binary operation is simply an artifact of certain algebraic properties imposed on two binary operations on a set, no matter what we call those two operations.
>
> (a/b)(c/d) = a (bc/d) so you can always isolate an integer and then
> say you're adding (bc/d) to itself a times.
>
But this is a serious redefinition of what it means to view "multiplication" as "repeated addition".
By the algebraic properties laid out above, in the naturals and the integers a binary product of two factors can be viewed such that the number of instances of one factor to be added up equals the other factor.
But in fraction multiplication with each factor a fraction, it is no longer the case that a binary product of two factors can be viewed such that the number of instances of one of the factors to be added up is equal to the other factor. We have to redefine things so that we break up one of the fraction factors.
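For what it's worth, the integer-isolating rewrite can be checked directly. Note that for (a/b)(c/d) to come out right, the chunk being repeated has to be c/(b*d), and the whole thing works only because one fraction factor got broken apart (the numbers here are a hypothetical example of my own):

```python
from fractions import Fraction

a, b, c, d = 2, 3, 5, 7

prod = Fraction(a, b) * Fraction(c, d)   # (a/b)(c/d) = 10/21

# isolate the integer a, then add the leftover chunk to itself a times:
chunk = Fraction(c, b * d)               # c/(b*d) = 5/21
repeated = sum(chunk for _ in range(a))

assert prod == repeated
```

The assertion passes, but only after the factor a/b has been dismantled, which is the redefinition being objected to.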
Jump to

> Reals.
>
I don't think so.
What does it mean to have pi *instances* of e or e *instances* of pi?
Talk about having to bend over backwards to redefine things!
This redefinition process has had to go too far in order to preserve this "repeated" way of talking.
And what I say holds if we use the "times" term instead of the "instances" term - we have to redefine things too much.
I think it best to say that this "repeated" way applies properly only to naturals and integers. To amplify this:
All of what I have said thus far holds even though, via certain algebraic properties, we can rewrite any product in any field that contains the natural numbers as a finite repeated sum of a single field element, in a different way for each natural number: For all field elements u, x with uv = x for some field element v, and for every natural number n (excluding 0, for those who define the naturals to include it), there exists a field element y such that uv = x = ny. From y = x/n we have uv = x = n(x/n); then, from n being equal to n instances of 1 added together, uv = x = (1_1 + ... + 1_n)(x/n); and so uv = x = [(x/n)_1 + ... + (x/n)_n].
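A quick sanity check of this rewriting, again with `Fraction` standing in for exact field arithmetic:

```python
from fractions import Fraction

u, v = Fraction(22, 7), Fraction(3, 5)
x = u * v

for n in range(1, 20):
    y = x / n
    # x = ny = y + y + ... + y  (n instances of x/n)
    assert sum(y for _ in range(n)) == n * y == x
```

Every n gives a different decomposition of the same product, which is why this rewriting says nothing about one operation being more fundamental than the other.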
As far as I am concerned, this does not in the least support those who claim that the binary operation we arbitrarily call "addition" is more fundamental than the one we arbitrarily call "multiplication". I say this because this "repeated" business is derived entirely as an artifact of a certain set of algebraic properties on a set under two binary operations; had I used a different set of algebraic properties, I could have derived an entirely different result, in fact the opposite one, per my remarks above.
>> That is, with such a general enough definition students would from the
>> beginning have a mental model of binary multiplication that holds up
>> on any set on which absolute value is defined. And this includes the
>> natural numbers all the way up through the real numbers.
>>
>
> I think you'll find "repeated addition" can usually be stretched to
> the reals by an avid teacher of that meme. The transition from Q to R
> is quasi seamless.
One way to stretch it would be the rewriting I gave above?
But like I said further above, this does not mean what those who want to believe that one binary operation over a set is more fundamental than the other would like it to mean. It's all a matter of which algebraic properties we do and do not have. (Don't forget what I said about examples like the Boolean algebra I mentioned.)
>> For product xb, we can think of pulling the tape out to length b,
>> and then pulling it out further from there by a scaling factor of x to
>> a point represented by the product xb.
>>
>> Of course we have to explain what we mean by scaling, that we think of
>> length b as a unit, but this is a good thing: It creates a mental
>> foundation for proportional reasoning very early on. (Using the number
>> line as model regardless has the same effect, in my view. It helps the
>> mind see the scaling inherent in the moving away from 0 when we are
>> multiplying positive elements.)
>>
>
> Again, it's not important to pick fights between proponents of
> "repeated addition" and "scaling vectors" as if these have to be
> treated very differently.
>
> You can "repeatedly add" a unit vector to itself 0.8899 times to get a
> shorter "scaled" vector: Is that scaling or repeatedly adding? I'd
> say there's no basic difference.
>
I'd say that we have to really redefine what it means to do something n "times".
As I said further above, we have to go way beyond natural usage for these terms in question of doing something 0.8899 or sqrt(2) or especially pi or e "times" or having 0.8899 or sqrt(2) or especially pi or e "instances" of something.
That is, indexing for multiple terms summed together uses the natural numbers. Let's see you write out a sum such that the indexing set itself is the rationals or the reals.
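The point about indexing shows up in any programming language too: a finite sum is driven by an initial segment of the naturals, and there is no "for k in the reals" loop (the geometric series here is just a hypothetical illustration):

```python
# the index set of a finite sum is {0, 1, ..., n-1}, a set of naturals
n = 10
total = sum(1 / 2**k for k in range(n))

# geometric series: 1 + 1/2 + ... + 1/2**(n-1) == 2 - 2**-(n-1)
# (exact here, since all the terms are dyadic and fit in a float)
assert total == 2 - 2 ** -(n - 1)
```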
To anticipate an objection: Even if there were some wild and esoteric way of coming up with some sort of indexing set like that, would that therefore mean it's a good idea to teach schoolchildren to think of indexing sets as that strange, to think of doing something x "times" or having x "instances" of something in such a strange usage of these terms?