>Notice I said "understanding what you are using", not "understand how what you are using works". You shouldn't use your car if you don't know what the steering wheel does or what the rules of the road are.
Limiting oneself to using only things we understand is a pretty limiting proposition !
>I think formal logic should be taught fairly early - probably in the first few grades. (It doesn't need to be rigorous formal logic; the basic meaning of "if/then", "and", "or", and some applications should suffice at that level.)
I don't think that's even necessary, let alone advisable. There will be time to handle formal inference in the future; the early grades are not the time for grooming inferential reasoning, and there are more important and more pressing things to address.
>Computers are tools. We use computers for many things. One of the things for which we use computers is to assist us in the computations that we need to do so that we can get information out of our models. The model is not the computer program; the computer program is the way that we do a calculation.
I could say the same about mathematics, no ?
>The modeller will, at most, ask the computer programmer to implement a function, or class, with certain properties. The programmer may, at their discretion, redefine "/" to make programming easier. Good programming style guides will tell you to be careful when you overload standard things not to violate people's expectations about what should happen, and that the ease should be for the user of the class, not the programmer of the class.
The modeler is the programmer, the programmer is the modeler. Programming ain't coding, and the set of people I call programmers decide on our own what we're going to develop and how. We overload things whenever we need to, period; there ain't no such thing as the holy grail, not even mathematics can claim that status.
>What you're missing is that no definition should be changed by a modeller; if they want different properties, or rules, they must use a new name for the thing that has those properties.
Definitions are only good inside a model, and a model is what a modeler does. Definitions outside a model can be irrelevant to the model; definitions inside a model are up for grabs, and as long as a definition is useful we are entitled to keep it. Names are just labels that we stick to objects, I can name anything any way I want as long as it suits my need. Names have scope too, they only exist inside their assigned scope, and names can be reused, or "overloaded" as we refer to such reuse in computerese.
>If they start creating their own rules without being familiar with some rules already, they will probably be wasting their time coming up with rules that cannot be satisfied, or that are pretty much useless.
That has to do with the reality of the space being modeled, not with the inner workings of the model. If a rule doesn't fit computer graphics, we'll find out by looking at computer graphics, not at the rule nor at the model. And there's a fair amount of pretty sound mathematics I don't care for inside a computer graphics program: I will, at best, be operating with a very small subset of mathematics.
>If they are familiar with both a set of rules and the consequences of them, and possibly why those consequences arise, then they will possibly have some idea about how to modify, delete, or add a rule (or even come up with a rule set from scratch) so as to obtain something interesting.
The only rules that need to be used are those that are useful to modeling the problem space. So, what we must be familiar with is the problem space first. We don't study formal math to learn its rules, we learn math so that we gather enough skill to make up our own rules when we need them, and our own models too. In other words, it isn't the rules that we're interested in, it's the mechanisms through which those rules are made. So, when the available math isn't good enough to satisfy applied needs, some go and develop their own math; others just drop math altogether and go try something more amenable to giving out a result. So, don't give me a definition of fractions or rationals, give me the machinery that I can use to make my own definitions that reflect the reality of the problem spaces I'm dealing with. That's what I want from mathematics.
And even then, we may not care about the detail any further than putting a formula into a computer program. For example, if I'm writing an adder in Verilog HDL, it is sufficient to know a few things:
1. any number can be expressed as a string of bits
2. if a and b are bits, a+b+carryin = result plus carryout, where
   2a. result = (........) some expression, only important to the computer
   2b. carryout = (........) some expression, only important to the computer
3. a simple recursive or iterative procedure allows us to compute multiple-bit results and carries:
carryin[i+1] = carryout[i]
result[i+1] = some expression on carryin[i+1], a[i+1] and b[i+1]
carryout[i+1] = some other expression on the same parameters.
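To make the recurrence concrete, here is a minimal sketch in Python rather than Verilog, with the "only important to the computer" expressions filled in as the standard full-adder equations (the function names are mine, just for illustration):

```python
def full_adder(a, b, carry_in):
    # The "some expression" parts: standard full-adder equations.
    result = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return result, carry_out

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists, least-significant bit first."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry-out becomes the top bit
    return out

# 6 + 3 = 9, bits listed least-significant first
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))  # [1, 0, 0, 1, 0]
```

The point stands either way: the loop is the concept, and the two expressions inside full_adder are the detail one can look up and forget.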
Now, whatever those missing expressions happen to be, they will probably be of passing importance in understanding the concept. They may be important to the mathematics involved; they may be fundamental to the computer program; but in the global picture, they're just mere detail that, in the end, can be looked up in a book if need be, and forever relegated to the machine to handle. If I know the intuitive concept of boolean operations and of addition, it's like invoking a library procedure: ADDIT, will you ? I don't care how the procedure does it, as long as I get a result that matches my intuition or that matches my need in the problem space I'm modeling. And that is because I'm not a mathematician or a math major, therefore, I'm not really interested in the hows or wherefores here - I'm just interested in a tool that works, and as long as I know how to use that tool, I'm happy.
So, if I can add through a few rules whose detail I don't even need to know, what else may I need to know about addition's inner workings ? Might as well delegate it to the machine.
More, "interesting" here is sort of irrelevant; what some of us want is something that WORKS, interesting or not. We're not in the quest for intellectual gratification, we're in the quest for something we can delegate to a machine and get it out of our minds altogether, because there are other more important and more pressing things to occupy us. More often than not, I'm happy to know that I have a rule to do X, and as long as I can write that rule inside a computer program, I don't care what that rule says; it's a black box to me, and only rule mechanics and rule engineers will be interested enough to open that black box and figure out its innards. Somebody might harp at me, "but where's your intellectual curiosity ?" and I'll remind that person, that, well, I'm an ENGINEER, eh ? I'm out here for results, for things that work, for things that move. What was the title of that Richard Scarry children's book, "Cars and Trucks and Things That Go" ? Or something like that ? That's where I am, and together with me, many of us who do applied stuff for a living.
>You'll notice that it's called "operator OVERloading", not "operator loading", and that every computer language is built upon some fundamental rules (or functions or operators).
The "over" bit applies to the syntax side, not to the semantics. The reason we call it "overloading" is because of the way programming language syntax works. At compile time, an operator is nothing but a scribble, and there is a limited number of scribbles in the vocabulary. Language syntax is a messy proposition, and the syntax of your garden variety programming language leaves us with but a handful of operators. And here, again, by "operators" we don't mean your understanding of the word in semantic terms, but we mean entities at the syntactical level. So, we "over"load our operators because we have so few of them, not because we're changing a holy grail kind of semantics.
So, in the past we said that the scribble "+" denotes addition. Well, today we say that +(int x int -> int) describes integer addition, +(float x float -> float) describes floating point addition, and even here we say that they're not the same operator. Now I can define an operator +(string x string -> string) or +(string x character -> string), or even if I'm messy enough, +(string x integer -> string), and these are all different operators. I'm overloading the scribble, mind you, not the semantics.
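In Python terms, overloading the scribble looks like this — one symbol, several different operators, resolved by operand type. The Money class is hypothetical, purely for illustration:

```python
class Money:
    """Hypothetical class that overloads the scribble '+' with its own semantics."""
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        # +(Money x Money -> Money)
        return Money(self.cents + other.cents)
    def __repr__(self):
        return f"Money({self.cents})"

# One scribble, four different operators, picked by operand types:
print(2 + 3)                   # +(int x int -> int): 5
print(2.0 + 3.0)               # +(float x float -> float): 5.0
print("ab" + "cd")             # +(string x string -> string): 'abcd'
print(Money(150) + Money(99))  # +(Money x Money -> Money): Money(249)
```

Same scribble throughout; the semantics travels with the types, not with the symbol.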
>I am in the twenty first century, and I am very familiar with computers. I am very familiar with definitions, too, and with logic, and with mathematics (though of course I am not an expert in all areas of mathematics) and with physics and several other things. I try to look at things from lots of perspectives. As I said, computers are tools; we use them to do the things we want done. Notice that it is we who want to do things with a computer, not the computer that is telling us what we want to do. (Certain software packages excepted, of course. :->)
Mathematics is a toolkit too: we use it to do the things we want done.
And you know what ? Computers do tell us what to do, all the time. In fact, they don't even need to tell us what to do, often they just take over and do it themselves, and that basically happens when the job to be done is beyond human physical or mental capacity. In just about anything a computer can do, it does it better than a human.
And ah, it isn't the computer that's doing it, eh ? It's that piece of magic we call "the program".
>Your program does not implement fractions, it implements floating point division and fails to satisfy some basic properties due to rounding errors. A correct implementation of fractions would store two integers internally, (Num,Den), and would implement addition by (A.Num*B.Den+A.Den*B.Num,A.Den*B.Den).
My program, just like your definitions, implements a view of the intuitive concept of division. And any real life implementation of anything will end up having to face rounding errors: infinite precision only exists inside mathematical models. And a mathematical model that doesn't fit reality is inadequate to describe that reality !
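For contrast, here is a sketch of the exact (Num, Den) implementation the quoted text describes, with the textbook addition rule. The class name and methods are my own illustration, not anyone's library:

```python
from math import gcd

class Frac:
    """Exact fraction stored as a (num, den) pair of integers."""
    def __init__(self, num, den):
        g = gcd(num, den)
        self.num, self.den = num // g, den // g
    def __add__(self, b):
        # The quoted rule: (A.Num*B.Den + A.Den*B.Num, A.Den*B.Den)
        return Frac(self.num * b.den + self.den * b.num, self.den * b.den)
    def __mul__(self, b):
        return Frac(self.num * b.num, self.den * b.den)
    def __eq__(self, b):
        return self.num == b.num and self.den == b.den

# No rounding error in this model: 2/3 * 3 is exactly 2
assert Frac(2, 3) * Frac(3, 1) == Frac(2, 1)
```

Which model is "correct" is exactly what's in dispute here: this one trades rounding error for the inability to ever leave the rationals.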
Kids have, to a larger or smaller extent, their own intuition of whole, of part, of counting, of fractional quantities. That's why intuitive props such as Cuisenaire Rods work, because they anchor things on intuition. Do we want to wean our students from overrelying on intuition ? Sure. At that early stage ? I don't think so. It's one thing to teach rules that formalize something we are intuitively familiar with, it's a totally different thing to establish such rules by "fiat" and say, "it is so". No, t'aint so ! Rules are a model, and before we apply a model to something, we'd better have some familiarity with what we're modeling.
>Delete the word "model" from that last sentence, and I agree. However, many of those mathematical objects have proven extremely useful in modelling reality.
Mathematics is nothing but a collection of models. Mathematical objects have no existence in reality. To the extent mathematical modeling matches reality, math is a set of useful tools. That is, to the rest of us, non-mathematicians.
>You mean if you're in the realm of your modified integers in which you have given a definition to 1/2 as an integer. In the integers as mathematically defined, 1/2 is not defined.
Inside my model I define division in whatever way I need it to be defined ! It's part of my prerogative as modeler to adopt whatever definitions I feel I need. See, this is where mathematics majors and the rest of us seem to part ways: you guys seem to be interested in "the" division, while I'm interested in "a" division - and I may end up having many such in my models, and none will be totally the same as "the" division as you mathematicians see it to be.
>Now you run into problems, because your modified integers are also 5-digit rational numbers; 0=0.00000. So you end up with 0.00000=0=2/3=0.66666.
If 2/3=0 and 2/3=0.66666, those two "/" don't denote the same operator ! The first denotes / (int x int -> int), the second denotes / (float x float -> float). If I have integer 0, that is not the same as the floating point number 0.0 either. You see, numbers inside a computer have types, and while there are rules for automatic conversion, the fact that I can divide 2 by 3 and get 0.66666 means that there are two implicit conversions from integer to float. The sequence to get 2/3=0.66666 is 2 tofloat 3 tofloat floatdivision, while the sequence to get 2/3=0 is 2 3 integerdivision.
The same thing happens with the compare. When you compare 0 with 0.66666, I have at least two distinct compare operators, compare (int x int -> boolean) and compare (float x float -> boolean), so, if I compare 0 and 0.66666 I must first either convert 0 to float or else I must convert 0.66666 to integer, and the result of the comparison will be true or false depending on what operator I use. The bottom line is, just saying "compare" is not enough, one must give the domain and the range too.
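Python happens to spell the two division operators differently, which makes the point visible in four lines:

```python
print(2 // 3)       # / (int x int -> int): 0
print(2 / 3)        # operands converted to float first: 0.666...
print(0 == 2 // 3)  # True: int compared with int
print(0 == 2 / 3)   # False: 0 is converted to float before comparing
```

Same "2 divided by 3" in the intuitive sense, two different operators, two different answers.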
>The problem, of course, is that you are using "/" to mean lots of different things at the same time, so really you're using horrible notation. (To fix this, you could do something like "/_n" to represent your modified division in n-digit rational numbers.)
You see, just saying "/" is not enough. We're dealing here with the concept of polymorphism: /(int x int -> int) is not the same as / (float x float -> float) and both are different from /(float x float -> int). The label of the operation ain't the operation, nor does it uniquely define the operation !
>It's worse, though, because 2/3*3 should equal 2, not 1.99998. (And then there's the problem of your definition of division as repeated subtraction - and that reciprocals don't multiply to 1 - and many others.)
The problem here is, again, one of representation. If I compute 2/3*3 in base 3, 2/3 is .2, and .2*3 is 2. The loss of precision happens because of restrictions on what kind of representation I can afford to have, either inside my head or inside my computing machines.
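The base-3 claim can be checked with scaled integers, which is all a base-3 representation is (the variable names are mine):

```python
# 2/3 in base 3 is the single digit .2: mantissa 2 with scale factor 3
mantissa, scale = 2, 3

# multiply by 3 in pure integer arithmetic - no rounding anywhere
product = mantissa * 3
print(product // scale, product % scale)  # 2 0 -> exactly 2, not 1.99998
```

The 1.99998 only appears when the representation (five base-10 digits) can't hold the number; change the representation and the error vanishes.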
>Of course there is nothing preventing you from defining these "/_n" operators. But the reality is, when we ask a computer to give us some representation or floating-point approximation to "a/b", we are usually ultimately interested in division as mathematically defined (unless we have overloaded "/" to mean something else, which will confuse the users of our classes if a and b are numbers of any sort).
In real life, we're actually interested in the approximation, because we may find that the mathematical entity represented by a/b may not exist within the restrictions of the problem space we're manipulating. It's not that I made "/" mean something else, it's that from the beginning "/" is either an ambiguous label given to a family of potentially dissimilar operations, or that the domain and range of the operation have become so entrenched in custom and tradition that we have problems seeing "/" as anything else. But even in mathematics, we have no qualms overloading operators: for example, A/B is a very different animal from decimal division if A and B are sets. The sum operator can be applied to numbers, to vectors, to arrays: the meaning of the scribble "+" is overloaded too.
>The set of rational numbers is the field of fractions of the integers. The field of fractions is a set constructed using integers; an element of the field of fractions can certainly be called a fraction.
You can model real life division like that, but when you do that you're severely complicating an intuitive issue, well beyond what's needed for a lot of real-world applications. So, do I want to teach it that way to my students ? Right now, my answer is no, I don't, not really. To me, a rational number p/q is simply "p parts out of q", or after a little more skills development, "p stated in base q with a suitable scale factor". You know, it fits my pragmatic slide-rule-groomed engineer's intuition way better than the abstract algebra model.
>The result of dividing 1 by 10 is the fraction (1/10). 0.5 means 5*(1/10)=5/10. The decimal notation is defined in terms of fractions.
The result of dividing 1 by 10 is a number. That number is one part out of ten, an intuitive concept; it is represented by .1 in base 10. Likewise, the result of dividing 1 by 4 is one part out of four, or .1 in base 4. The fraction 1/10 can be seen as a mere piece of notation that stands for that number. That notation is not the same as that number ! For example, 1/10 and 2/20 are precisely the same number, so is 2/14 if I'm working in hex or 11/11110 if I'm working in binary.
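A quick check that those notations all name the same number, using Python's literal syntax for the different bases:

```python
from fractions import Fraction

# 1/10, 2/20, hex 2/14 (0x14 = 20), binary 11/11110 (3/30):
# four notations, one number
assert Fraction(1, 10) == Fraction(2, 20) \
    == Fraction(0x2, 0x14) == Fraction(0b11, 0b11110)
print(Fraction(0b11, 0b11110))  # 1/10
```

The notation collapses to a single canonical value either way, which is the point: the scribbles differ, the number doesn't.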
>Right - rationals or fractions are ordered pairs of integers.
They're actually triples, you forgot the number base, and hence the notation. But even abstracting the base, to me the pragmatic engineer, there's a sharp distinction between 2/3 or [2;3] or (2 3) or whatever notation you want to use, and the result of dividing 2 by 3, which is, as I said before, .2 in base 3, or two parts out of 3. Yes you can model that intuitive concept as a pair of integers, and it's ok to do so, provided you accept all the time that it's nothing but a model. The moment you substitute the model for reality, that's the moment you start bringing confusion into everyone's life but the mathematician's.
>And what, exactly, is R?
Whatever you want it to be. How about "the set of all numbers that can be represented with a finite or infinite number of digits in some finite positive integer base " ? Or something in that general direction. But that's only of interest to mathematicians: to the rest of us, we'll never need to worry about any number that cannot be expressed with a finite number of digits.
>Where is the completeness axiom in your computer? You never model Q as a subset of R in your computer.
Inside my computer, the axiom of infinity does not apply. Therefore, R doesn't exist. A subset of Q exists inside it, limited by the machine's precision or memory size. But I can see set membership more like an attribute relationship: in mathematics, 2.6 belongs to Q; inside a computer, "Q" is an attribute of 2.6, so Q belongs to the set of attributes of 2.6, and that set will always be a finite one: I can just as well model a number as a pair (value, attribute set), and jot down in the attribute set one scribble for each attribute I want that number to have, I create such set in the constructor for my new "number" class. Just like I can say "R is blue", I can say "2.6 is Q". Now, I can define "blue" just the same as I can define "Q", but there's an intuition for Q out there, just like there's an intuition for "blue" too.
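A toy sketch of that (value, attribute set) idea — set membership turned inside out into a finite set of attribute scribbles carried by the number. The class is hypothetical, just to make the shape of the model concrete:

```python
class TaggedNumber:
    """Toy model: a value plus a finite set of attribute 'scribbles'."""
    def __init__(self, value, attributes=()):
        self.value = value
        # the constructor jots down one scribble per attribute
        self.attributes = set(attributes)

    def is_a(self, tag):
        return tag in self.attributes

x = TaggedNumber(2.6, {"Q", "positive"})
print(x.is_a("Q"))     # True: "Q" is an attribute of 2.6 in this model
print(x.is_a("blue"))  # False
```

Everything in sight is finite, which is the whole argument: inside the machine, membership is a lookup in a finite set, not an axiom.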
>If you are adding terminating decimals, there is no need to get rid of the dot, you just have to line the dots up. But things get messy if you're actually talking about rational numbers (not just terminating decimal numbers), because addition starts from the least-significant digit (which does not exist).
Every rational number is terminating in some number base: if the number is p/q, its representation is p in base q, with a fractional point somewhere inside that representation. But still, why should I bother aligning the dots ? I can just pad the numbers on the right side with as many zeros as I may need, drop the dot, do the computation, then put the dot back. In the end, applied world fractional arithmetic reduces to integer arithmetic, or we wouldn't have computers.
Addition indeed starts in the least significant digit, but after I pad the two values with the right number of zeros, I can add zeros ad infinitum to the right side of both values, and that's not going to change my addition one iota: all it's going to do is to shift my scale factor. You see, I can always add in four steps: normalize, add, compute the scale factor, apply the scale factor. Normalization is a notation step, addition and scale factor computations are integer arithmetic, and applying the scale factor is again a notation step.
So, I can easily abstract that adding 2.53 to 0.5 is (1) Count the maximum number of digits to the right of a decimal point, that's my scale factor, (2) add zeros to the right of 0.5 to get it to have two digits to the right of the point, (3) get rid of the decimal points, yielding 253 and 50, (4) add these integers to yield 303, and (5) apply the scale factor back to yield 3.03. I never need to bother with learning anything but integer arithmetic ! The rest is manipulation of scale factors. Similar techniques work for multiplication and division.
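The five-step recipe above, sketched in Python on strings so that no floating point sneaks in behind our backs (function names are mine):

```python
def add_decimals(a: str, b: str) -> str:
    """Add two decimal strings using only integer arithmetic."""
    def frac_digits(s):
        return len(s.split(".")[1]) if "." in s else 0

    # (1) the scale factor: most digits to the right of a point
    scale = max(frac_digits(a), frac_digits(b))

    def to_int(s):
        # (2) pad with zeros on the right, (3) get rid of the dot
        return int(s.replace(".", "") + "0" * (scale - frac_digits(s)))

    # (4) plain integer addition
    total = str(to_int(a) + to_int(b)).zfill(scale + 1)
    # (5) apply the scale factor back
    return total[:-scale] + "." + total[-scale:] if scale else total

print(add_decimals("2.53", "0.5"))  # '3.03'
```

The only arithmetic operation in the whole routine is one integer addition; everything else is notation shuffling, exactly as claimed.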
>Every rational number whose denominator has no prime factors other than 2 and 5 can be so written. That misses a lot of them (and no change of base will fix this) - in fact it misses "almost all" of them.
You miss a lot of them if you only operate in base 10. When you say num/den, you're representing num in a base that's a power of den: that power is the scale factor. Problem is, if I am limited in the kind of base I can use, I'm going to have to resort to approximation, but hey, such is life.
>Yes, so (1/2)+(2/3) would be computed as 1.1, or 1.16, or 1.166, or something like that, depending on some choice that somebody makes about how wrong we want to be.
Or 1.1 in base 6. See, it's a representation issue.
>Either that, or we require correctness, and this would require that we choose an appropriate base every time we need to add fractions... but this is at its core the same as the mathematical way of adding fractions (only there's all that extra complication about choosing bases, converting decimal representations from one base to another, and the great convenience of having to write down the base we are using every time we do anything).
We do not have infinite precision in real life, so, correctness is always up to an error. That's why in engineering we say that there's a difference between 0.5 and 0.50 ! In the second case, we know that the rightmost digit is zero; in the first case, however, we don't know what actually follows the 5. We should rather write 0.5x if what we want is to work with two digits of precision.
>Yet you argue that we should not be teaching algebraic manipulation skills or understanding of the process, instead relying on number sense and intuition. Algebraic manipulation skills and "the process" include the things that I have been saying we should be teaching: logic and properties of operators. If we don't teach these until we "need" them, then we will be wasting time teaching them rather than using them, and probably teaching them many times - the physics class will need them, the math class will need them, the chemistry class will need them.
I'm not arguing we shouldn't teach algebraic manipulation skills, I'm arguing that we should evolve those skills out of an intuitive base rather than establish them through a set of abstract rules. If I need no more than arithmetic number sense to develop most of the math even an EE will need, why should I bother using anything else ? Properties of operators spring out of daily usage to the point of becoming intuitive, making formal rules a superfluous proposition. The physics class won't need the properties of operators; the physics and the chemistry classes will rather need the intuitive number sense and the level of manipulation that only intuitive pattern matching and a strong level of repetition and exercising can deliver.
>I did not suggest that we should. I am suggesting that we lay a solid logical foundation. If we want to teach complex numbers or quaternions to, say, tenth graders, then the foundation will be there. If we want to teach other things, the foundation will be there. The foundation is the familiarity and comfort with manipulations using rules.
A complex number is a pair subject to some specified semantics. A quaternion is a 4-tuple subject to some specified semantics. But I may not care more about the embedded semantics than to know it exists, and to be able to copy it from the definition into a line inside my computer program - and it's going to be way more important to me to know how to associate quaternions to graphics objects than to know what's inside the semantics of a quaternion.
The foundation is rather in the fact that I can create my own objects and attach to them my own semantics. So, I may have an application where I may want to change the bog-standard garden-variety semantics of quaternions, and the real foundation that I feel I need to teach my students is that such semantics is not a holy grail and is not to be taken at face value: rather, I tell them, use it if it suits you, if it doesn't, overload it, redefine it, modify it, whatever. Models are for modelers to handle to their best interest and advantage ! That is, in fact, the foundation I believe we should be interested in.
>I listed people like lawyers and accountants and such last time; let me extend that list to include construction workers, who often use Pythagoras' theorem as well as several tricks for drawing circles or ellipses (none of which can be accomplished using finger-counting).
I don't see any such people using that kind of math, and those "tricks" you mention are now intuitive to most, they've been transmitted by tradition.
>I don't know what you mean by "abstracting" a scale factor, but every rational reduces to two integers - a scale factor (the denominator) and the numerator. That's what I've been saying.
When I write 0.12, decimal, what am I writing ? I'm writing 12 with a scale factor of 100. If I multiply 0.12 by 4, I get 0.48: 48 with a scale factor of 100. If I multiply 0.12 by 0.4, I get 0.048: 48 with a scale factor of 1000. So, whether I multiply 0.12 by 4 or 0.12 by 0.4, what I can rather do in both cases is, just multiply 12 by 4, get 48, and apply the right scale factor. In other words: I can reduce any operation to the realm of integers by operating separately on the integer fractions and on the integer scale factors.
That's how computer floating point works, eh ? And that's how slide rules work too. And that's what calculators resort to, when you blow up their integer precision.
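The same scale-factor game, sketched for multiplication — multiply the integers, multiply the scale factors, never touch a fractional quantity (function names are mine, for illustration):

```python
def scaled(value: str):
    """Split a decimal string into (integer, scale factor)."""
    digits = int(value.replace(".", ""))
    frac = len(value.split(".")[1]) if "." in value else 0
    return digits, 10 ** frac

def mul(a: str, b: str):
    (ia, sa), (ib, sb) = scaled(a), scaled(b)
    # integer multiply; the scale factors combine separately
    return ia * ib, sa * sb

print(mul("0.12", "4"))    # (48, 100)  -> 0.48
print(mul("0.12", "0.4"))  # (48, 1000) -> 0.048
```

In both cases the integer work is the same 12*4; only the bookkeeping on the scale factor differs, which is the slide-rule insight in code.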
>Huh? Perfectly divisible means that you get an integer. What you have done is partitioned 10 into 2+3+2+3 (or if you want to go all the way back to finger-counting, you have partitioned a set of ten objects into four sets of 2, 3, 2, and 3 objects respectively). So 10/4 should be somewhere between 2 and 3.
Musicians ignore that "perfect" division. In other words: the mathematical abstraction does not lead to a very useful model. Many musicians I know abstract playing in fives into playing in twos: one 2-beat, then a 3-beat. Go back to that old Dave Brubeck "Time Out" record, listen to "Take Five": tada-taaaaaaaaaaaa; tata-taaaaaaaaaaaaaaa; tatatata-taaaaaaaaaaaaaaaa; tata-taaaaaaaaaaaaaa; It's really in twos, eh ? The first is a 2-set, the next is a 3-set. Now go listen to the last movement of Prokofiev's Seventh Sonata, it's rather tata-tatata-tata, you divide 7 into three. I will call it "asymmetric division" for the lack of a better word; that movement is in seven beat, but it's rather in three beat if you listen to it carefully: an energetic but slow-tempo asymmetric waltz. So much so that musicians can take pretty heavy liberty with the tempo and most people, even trained ears, will still accept it as a "seven", even though it's going to be twisted more and more into a real "three".
>Nonsense. I don't know how you got 4 out of 1-2, 3-4-5, 6-7, 8-9-10, perhaps it was a typo (or "math-o"), but using the partitioning idea you can easily recover the familiar quotient-with-remainder expression, N=PQ+R; P is 4, Q is 2, and with not too much work you can convert the 2+3+2+3 partition into R=2. Thus N/P=Q+(R/P), and we still need R/P.
This is music, eh ? Not math any longer. Here, 10 is indeed divisible by 4. You have to approach it from the other end: take a 4-beat, now fill two of those beats with 2 notes and the others with 3. Now relax the tempo so that you don't have to play all four beats at the same time, but give the music a sort of "flat tire" feeling - and that doesn't really have to be absolute 2-3-2-3 either, and let me tell you, if you lose the asymmetric-4 feel of the rhythm it doesn't sound right.
>If you cannot see the need for mathematics in CS or EE, then you are probably still defining mathematics far too narrowly. Many of the courses in EE, CS, physics, mathematics, chemistry, and other majors, are really very similar courses but flavoured slightly so as to emphasize various details or applications that are more relevant to that discipline. Just because the course is not named "Math 207" does not mean that it is not mathematics, or even that math majors do not learn the same or very similar material.
Physics courses are anchored in the reality of the universe and its behavior. It can never be pure math, because it has to match problem space. Just like computer graphics involves art and intuition, and my experience with having math majors as students is that they miss a whole lot because they don't have the training to do anything in concrete space.
>Did you, in your EE education, never solve a differential equation, or analyze a differential equation (or "linear system") for stability?
Like I said before, a derivative is a ratio of two numbers, and much EE is not about solving equations but rather about describing physical reality with them. And then we often get to the point where it is not tractable to solve equations into closed form, so what people do instead is to use numerical methods - approximations. Which leads to the other approach, simulate it: no matter how much we compute something, we'd be nuts if we only relied on the math. Engineering is about modeling things with math, sure, but then, we apply all sorts of fudge factors to pad our computations with extra safety, then we build prototypes and hack them to death in our test labs. Then we look for repeatability - we often want to be able to do something over and again, and have processes that are foolproof.
And that goes well beyond math, you know, because if nothing else, fools are so clever.
>Yes, and the point is that we are using the digital tool to approximate the analog thing, so that we can perceive it with our analog processes.
There ain't such a thing as a really analog process. The difference between what I call digital and analog is merely one of granularity.
>I think what you will actually see is that we cannot tell the difference between two things unless the difference is greater than some threshold.
In other words: precision-limited, and hence digital.
>However, this does not imply that our perception is digital: there is no "magic number" at which we perceive the change. This is certainly true in acoustics.
There is no single magic number, but we can safely take a lower limit and say that, up to our knowledge, no human can separate two sounds if the difference of their frequencies is smaller than that threshold. More, the same applies to our instrumentation: we run out of precision at the lab too, meaning, it's digital in the end. The only thing analog is the math model we put on top of it.