On Mon, Sep 10, 2012 at 11:24 AM, Joe Niederberger <email@example.com> wrote:

>> Is the multiplication of two irrationals, especially two non-algebraic irrationals like the product e(pi), a computable procedure?
>
> The multiplication of two computable numbers is computable under commonly accepted definitions.
> The product of two real numbers is, in general, not computable. In general, an arbitrary "real" number is not nameable or definable. You have no handle on it whatsoever, other than to say it occurs on the mythical number line somewhere. You can even define some real numbers that no one knows or has a method to figure out *where* they occur on the line.
>
> You say we teach these "real" numbers as everyday numbers to children and that's true. I'm not sure it's necessary or good, but it's true. But we forget to tell them some key differences.
>
> We teach them that multiplying two numbers means taking the two numbers and finding the answer. That's a computation.
You argued that scaling should not be taught to kids because it is not a "computable procedure" for elementary school students. I therefore used "computable procedure" in the sense you used it, in the context of teaching elementary school kids, which clearly means computing the exactly right answer, not computing a "roughly" right answer.
And in general, this discussion of repeated addition and scaling presupposes that we are talking not about binary operations on a set that yield approximations, but about binary operations in the usual functional sense, where two elements of the set go to exactly one element of the set, *not* two elements of the set going *approximately* to one element of the set.
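To make the quoted point concrete - that the multiplication of two computable numbers is itself computable - here is a minimal sketch in Python, modeling a computable real as a function from a requested precision to an exact rational approximation. The names `approx_e` and `product_approx`, and the 12-bit precision slack, are my own illustrative choices, not anything standard:

```python
from fractions import Fraction

# A "computable real" is modeled here as a function that, given n,
# returns a rational within 2**-n of the true value.

def approx_e(n):
    """Rational within 2**-n of e, from partial sums of 1/k!.

    The tail after the last added term is below twice that term, so we
    stop once twice the current term drops under 2**-n.
    """
    total = Fraction(0)
    term = Fraction(1)  # equals 1/k! at step k
    k = 0
    while 2 * term > Fraction(1, 2**n):
        total += term
        k += 1
        term /= k
    return total

def product_approx(x, y, n):
    """Rational within 2**-n of the product of computable reals x and y.

    Crude but simple: over-request precision from both factors. The
    12 bits of slack cover the error terms |x|*|y - y_m| + |y_m|*|x - x_m|
    as long as |x| and |y| stay below a few hundred.
    """
    m = n + 12
    return x(m) * y(m)

# Multiplying e by the (trivially computable) constant 3:
three = lambda n: Fraction(3)
print(float(product_approx(approx_e, three, 20)))  # about 8.1548, i.e. 3e
```

The point of the sketch: at every precision the answer handed back is an exact rational, and the *procedure* guarantees the stated error bound, which is what "computable" means here.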
This is yet more proof of what I keep saying: the repeated addition model of multiplication, which fits the natural number subset of the reals, has to be redefined ever more tortuously to make it fit more and more of the reals, until finally it is not applicable at all to almost all real numbers, even under the most tortuous redefinition.
But scaling works for all of the reals easily, perfectly, and exactly, with no redefinition ever needed.
And so opposing the inclusion of scaling as a model for multiplication in teaching is not defensible. There is no good reason not to include it, even alongside repeated addition as a model in those contexts that allow for repeated addition.
> When we merely imagine that there "exists" an answer (that we cannot name, or compute), that's a key difference I would say. Those are rather different games to be playing.
>
> I played Avalon Hill games as a kid. Gettysburg, Stalingrad, Panzer Blitz. I imagined I had armies filled with men and equipment and supply lines and even variable weather conditions. Only a few of the men had actual names -- mostly they were dealt with in aggregate form. (Similar games are played by real generals, to aid in decision making.) Vividly imagining these "men" and whatnot was an essential part of the game, without which the game would lose its luster and probably be impossible to take interest in anymore. So it is with infinite precision numbers and infinitely long straight lines.
So you're saying that something is no good, and is to be denied, if it can't be represented as a finite decimal number? But almost all rational numbers require an infinite decimal expansion; they can be given an exact finite representation only as fractions.
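To illustrate with Python's standard `fractions` module (the particular numbers are my own examples, not from the thread):

```python
from fractions import Fraction

# 1/3 has no finite decimal expansion, yet as a fraction it is exact,
# and arithmetic on fractions is exact too.
third = Fraction(1, 3)
print(third * 3)              # 1 -- exactly, no rounding
print(third * third)          # 1/9

# A rational p/q (in lowest terms) has a finite decimal expansion only
# when q has no prime factors other than 2 and 5; "almost all"
# rationals fail that test.
print(float(Fraction(7, 8)))  # 0.875 -- finite
print(float(Fraction(1, 7)))  # 0.14285714285714285 -- true expansion repeats forever
```

So by the finite-decimal standard, even most of the rationals would have to be "denied", while the humble fraction names each one exactly.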
Are you saying that mathematics should "lose its luster and probably be impossible to take interest in" if it is or is based on that which is continuous or non-computable?
Are you saying that the "proper" response to being confronted with mathematics that is or is based on that which is continuous or non-computable is to deny it?
Are you saying that we should accept only mathematics that is or is based on that which is discrete or computable?
If so, then you are denying essentially all of precalculus and calculus, and then just about everything that comes afterwards in analysis and topology, and even a good chunk of abstract algebra - in other words, most of abstract mathematics in general. That is, you are saying that the only "real" mathematics is discrete or computable mathematics.
This is one very, very extreme position.
Perhaps this denial of all mathematics that is or is based on that which is continuous or non-computable is an example of what Devlin had in mind when he suggested that irreparable harm is being caused by teaching kids that repeated addition is what multiplication *is*: it creates such cognitive dissonance at the subconscious or unconscious level, when the person later finds out otherwise, that the person has no choice but to come up with such extreme answers to the problem.