
Re: How teaching factors rather than multiplicand & multiplier confuses kids!
Posted:
Nov 13, 2012 2:48 AM


Re: continuity [Joe N.'s call for clarification]
There seems to be much confusion and quibbling about the meaning of "common sense" ... and many disputes, because of differing meanings for the phrase. As always, the only *true* meaning of a word or phrase is whatever is meant by whoever uses it to express their own ideas.
I confess to using "common sense" with an ad hoc meaning. Several projects on which I work call for communicating with the general public about how all of core-curricular mathematics (K-calculus) can be somehow learned through humanly natural, rational thinking (but not always by following the traditional curriculum). The best lay-language phrase that I have found for referring to that kind of thinking/learning is "common sense."
I have no quarrel with someone else's preference for using some other title for such thinking ... nor will I argue about what some others may choose to mean by the same phrase. What is important is whatever imagery is being expressed. Sure, by using more technical vernacular, within a far more elaborate exposition, it would be possible to give a far more precise characterization of such *mathematical learning* ... versus *non-mathematical learning* of the same "math points." But that monograph would run about 50 pages, and be read by only a very few.
Within that context, I often use the phrase, "common sense" to exploit *one* lay meaning for that phrase: "Anyone who knows [such and such] ought to be able to 'see' that [so and so]." In THAT meaning, "common sense" is about the natural processes of human *thinking* ... rather than referring to what they think about or what they already know. It also is about how all humans are heavily dependent on learning, through personal reasoning.
True, effective *use* of THAT kind of "common sense" (in any real world setting) heavily depends on what one already knows, and on one's developmental state of maturity. But that kind of thinking is humanly natural, is essential to personal survival, and is normally exercised in only very rough forms. In highly refined form, it is the crux of mathematical comprehension and mathematical reasoning.
What I call "common sense" is badly violated by the traditional American core curriculum in mathematics. That happens because, for students, THAT kind of "common sense" must take the form: "Any *student* who knows [such and such] ought to be able to *rationally perceive* [so and so]." The traditional curriculum is fraught with points at which very few students can progress, rationally ... and must instead progress, "socially."
The delta-epsilon criterion for a function's limiting value at a point ... or its continuity at a point ... can be learned as a common-sensible *theorem* by all calculus-ready students, but NOT as a common-sensible *definition* of limits/continuity. Anyone who does as "Mr. Johnson" reportedly did badly fails to teach mathematics as an art of rational learning. At the other end of the curriculum, "Miss Johnson" routinely does much the same thing with "place values."
Normally, very young children intuitively and naturally acquire and use internal "schema" that comprehend whole-scalar quantities and scalar operations with them. Their understandings for those schema ... from which they derive those schema ... include their earlier growth in the semantics of *kinds of things* ... including their primitive perceptions of cardinal numbers. "Anyone who knows [those footings]" can use the same as understandings for *rationally concluding* the same kind of 1-dimensional, "apples & apples" algebra that all normal children conclude.
From a mathematically advanced viewpoint, it is easy to recognize that the child is gradually acquiring the essentials of vector algebra ... albeit, at the child's own level of cognitive development. [Vector algebra is not whatever college students experience in linear algebra courses. It is a cognitive theory about its own concepts and conclusions ... regardless of how much of it is taught, how it is taught, to whom it is taught, the degree of formality through which it is taught, the degree of refinement through which it is known, or the developmental or mathematical maturity of the students. Even the title is a variable.]
It is widely recognized that the child so understands, develops and uses schema which amount to personal "theories" about such quantities ... which is why all primary-level teachers rely on quantities of "apples". Unfortunately for all, primary school curricula do not go the further step of guiding children to abbreviate "four apples" to "4A" ... as needed for the child to later make common sense of "four tee" ... as the 4T meaning of "40."
In real life, guided by their linguistic experiences, children quite naturally extend their personal theories of scalar quantities ... into "knives, forks & spoons" theories of "combos" of quantities. Unfortunately, curricula rarely lead very young children to express their table-setting combos as "3K+4F+5S" formulas ... much less, to add/subtract such combos, or to multiply/divide them by numbers. The very young easily write, read, and use such formulas ... even when the "initials" have variable (kinds-of-things) meanings. So, that omission is simply a curricular malady.
It is hardly surprising, then, that children who are deprived of that kind of curricular experience with combo-algebra commonly have much difficulty in interpreting "345" ... in its "place-value" meaning, 3H+4T+5S ... even though kids are trained to *pronounce* "345" as "3(hundred), 4(tee), 5." Without owning such polynameial meanings, children commonly have extreme difficulty in *rationally concluding* the theorems about how to combine such combos. And are they later led to rationally conclude the hows and whys of "simplifying" such vectors as 3(4ths)+5(6ths)?
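The combo-algebra described above can be sketched in a few lines of code. This is only an illustrative sketch of the idea, not anything from the post itself; the names (`scale`, `numeral_to_combo`) are my own, and I write "U" for the units place where the post writes "S".

```python
from collections import Counter

# A "combo" of quantities, e.g. 3 knives + 4 forks + 5 spoons, stored as a
# Counter keyed by kind-of-thing.  Counters already add component-wise,
# which is exactly the child's "apples & apples" algebra.
table = Counter(K=3, F=4, S=5)                    # 3K + 4F + 5S

def scale(combo, n):
    """Multiply every quantity in a combo by the number n."""
    return Counter({kind: n * qty for kind, qty in combo.items()})

two_tables = scale(table, 2)                      # 6K + 8F + 10S

def numeral_to_combo(numeral):
    """Read "345" in its place-value meaning, 3H + 4T + 5U."""
    place_names = ["U", "T", "H", "Th"]           # units, tees, hundreds, ...
    return Counter({place_names[i]: int(d)
                    for i, d in enumerate(reversed(numeral)) if d != "0"})

# Combining such combos is the same kind-of-things addition;
# "345" + "27" lands on 3H + 6T + 12U, whose "12 units" is precisely
# where the carrying theorems become common-sensible questions.
total = numeral_to_combo("345") + numeral_to_combo("27")
```

The design point is that nothing in the combo arithmetic cares whether the "initials" are K/F/S or H/T/U; place value is just one more combo theory.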
In the same vein, the traditional curriculum badly fails to present the delta-epsilon criterion as a rational derivative from what calculus-ready students already know. Despite its technical nicety (brevity), it conceptually is a lousy definition. When/if it actually is needed (e.g. for rationally deriving some formulas for derivatives or integrals), delta-epsilonics can be rationally developed as a fully common-sensible *theorem* ... AFTER the calculus-ready student already owns the limit/continuity concepts as personal common sense. Specifically, "Anyone who knows [the lub-glb concepts of limits/continuity, as understandings] can rationally conclude [the delta-epsilon criterion]."
Recursively, "Anyone who knows [the usual precalculus material ... especially the realness of the real numbers] can rationally conclude [the lub-glb definitions of limits/continuity]." On the basis of that understanding, students then can common-sensibly conclude the delta-epsilon criterion.
Some may ask how one "concludes" a "definition." The answer is that (contrary to our non-common-sensible curriculum) all *sensible* mathematical definitions are derived from yea/nay existence proofs: (1) first, establish that there exists a kind of (generator) things, all of which satisfy some specified conditions [those "postulated" conditions constitute an *abstract* that comprises all of those generators, and perhaps an infinitude of other cases; that abstract *defines* that kind of things]; and (2) establish that there are other things that are somewhat like those of the first kind, but not fully so.
With that information, the role of a definition is simply to distinguish the generator-kind of things from all others. Its yeas are examples which fit that abstract. Its nays are counterexamples satisfying some of the defining postulates, but not all.
Without knowing of some such generators in advance, their encompassing "definition" can have no meaning until a *spanning* family of examples is owned. But the common-sense mode of defining a concept is to derive it from some previously owned generators. Then, too, without prior knowledge of some contrary cases, the definition fails to serve its delimiting purpose.
For purposes of developing understandings for the common-sensibility of the concepts of limits and continuity, there is no better approach than through precalculus comparisons and contrasts of functions that feature various kinds of discontinuities ... as well as continuities ... well before technically defining limits of functions.
Likewise, for imparting the common-sensibility of Hindu-Arabic place values, familiar and useful counterexamples include: the (2-place) dollar-cents pairs; the (2-place and 3-place) digital clocks and timers; some ways of identifying dates; and some commonplace measurement systems [e.g. 5 yd., 2', 7" and 47°42'13"].
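Two of those counterexamples can be made concrete with a small sketch (my own, not from the post): mixed-radix measurements whose "places" carry at 3, 12, and 60 rather than always at 10.

```python
def length_in_inches(yards, feet, inches):
    """5 yd., 2', 7": here a yard is worth 3 feet and a foot 12 inches,
    so the place values are non-uniform -- unlike base-ten numerals."""
    return (yards * 3 + feet) * 12 + inches

def angle_in_seconds(degrees, minutes, seconds):
    """47 degrees 42' 13": each place is worth 60 of the next one."""
    return (degrees * 60 + minutes) * 60 + seconds
```

The contrast with base ten, where every place is worth exactly 10 of the next, is what gives the "definition" of Hindu-Arabic place value its delimiting nays.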
For the benefit of those who doubt that "Anyone who knows [precalculus] can rationally conclude [the limit/continuity concepts]", the lub-glb "window" development is sketched below.
Cordially, Clyde
&&&&&&&&&&&&&&&&&&&&&&&&&&&
[Realness: For every partition of a line of real numbers into two non-empty intervals, either the upper part has a min, or the lower part has a max ... never both, never neither: there are no gaps (the "both" case, as with integers), and there are no infinitesimal holes (the "neither" case, as happens among rationals).]
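A numerical sketch of that realness property (my own illustration, with my own names): bisection locates the single boundary number of any such partition.

```python
def cut_point(in_lower, lo, hi, tol=1e-12):
    """Bisection for the boundary of a partition of [lo, hi] into a lower
    part and an upper part: in_lower(x) says whether x belongs to the lower
    part.  Realness guarantees exactly one boundary number -- either the
    lower part's max or the upper part's min, never both, never neither."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if in_lower(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Partition [1, 2] into {x : x^2 < 2} and {x : x^2 >= 2}.  Among the
# rationals this cut falls into an infinitesimal "hole"; among the reals
# the upper part has a min, namely sqrt(2).
root2 = cut_point(lambda x: x * x < 2, 1.0, 2.0)
```

The same bisection run over only rational midpoints would never terminate at an exact boundary, which is one way to "see" the density of irrational holes.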
For a real-entries function that has real-number values, consider the following [this CS development generalizes to functions from within R^n into R^m]:
For each element, h, *near* to the domain of such a function, f, there exists a family of delta-neighborhoods about the point, h.
[A delta-neighborhood about a number, h, is any open interval centered at h, but with h deleted ... whence the label, "delta." The center's "nearness" to Dom f means that each d-neighborhood includes points of Dom f.]
That family is a "delta-nest" centered at h ... and that nest *converges* to h, as its center. [That center is not actually within any member of that nest. The reason for deleting h is that, in case h is within Dom f, the (h, f(h)) point might get in the way of defining lim f at h. But for defining continuity of f at h, it makes more sense to leave h in the picture.]
[The term "zoom" is used because all such students know about "zooming" the windows/screens on computers. "Anyone who knows [about zooming], ...."]
Each delta-neighborhood about h identifies its own delta-part of the function, f ... classically called "the restriction of f to [that d-neighborhood of h]." The d-nest about h so identifies a corresponding nest of delta-parts of f. (If h is actually within Dom f, the (h, f(h)) point is not within any of h's delta-parts of f.)
Then consider what happens to the ranges of those delta-parts of f, while the d-nest converges to h.
[The following conditions are CS *theorems* that calculus-capable students can easily be led to conclude as personal *common sense* ... if they earlier grasped the precalculus instruction in realness.]
Over each d-neighborhood, the values of its delta-part of f might: (a) be doubly unbounded: no upper bounds or lower bounds; (b) be upper-bounded ... in which case it has a least upper bound (lub) ... an "upper lid" for the graph of that delta-part of f; (c) be lower-bounded ... in which case it has a greatest lower bound (glb) ... a "lower base" for the graph of that delta-part of f; or (d) be fully bounded ... in which case that delta-part of f has both a glb (a "bottom border" for a "window") and a lub (a "top border" for a "window") ... where the lub of the values of that delta-part of f is always no less than their glb.
On the graphing calculator, h is the midpoint between the Xmin entry and the Xmax entry. Although the two Y-settings might be reset to approximate the lub and glb for that delta-window for that delta-part of the function, doing so would distort the graph. Far better to keep the screen set for a square grid, then zoom in by keeping h as the horizontal center, and use the tracer to estimate the lub and glb. Drawing horizontal lines at those values will show the lub-glb window for that delta-part of f.
If the function is fully bounded over some delta-neighborhood about h, it is fully bounded over every smaller delta-neighborhood about h. So, from there on, all zoom-in window frames fully capture their delta-parts of f. In the "zoom in" squeeze ... done by using ever smaller "diameters" of the d-neighborhoods ... the delta-parts' lubs may vary, but never increase; they converge, downward. Concurrently, the delta-parts' glbs may vary, but never decrease; they converge, upward.
So, when the d-neighborhoods are down-squeezed toward their center, h, the f-window is zoomed in to ever-better reveal how f behaves, *near h*. As the d-neighborhoods *converge* to the point, h, the upper and lower "border values" also converge ... and the window frames converge to an interval. The upper end of that interval is the *upper limit* of f, at h; the lower end of that interval is the *lower limit* of f, at h. If those are equal ... that number is *the limit-number* for f, at h ... and the windows converge to a single point. All else easily follows from there.
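The zoom-in squeeze above can be watched numerically. The sketch below is my own (dense sampling stands in for the exact lub and glb, and x*sin(1/x) is my choice of example, not one from the post): as delta shrinks, the lubs converge downward and the glbs converge upward to the single height 0, the limit-number at h = 0.

```python
import math

def lub_glb_window(f, h, delta, samples=10_001):
    """Estimate the glb and lub of f over the punctured delta-neighborhood
    (h - delta, h + delta) with h deleted, by dense sampling.  A numeric
    stand-in for the exact inf/sup of that delta-part of f."""
    xs = (h - delta + 2 * delta * k / (samples - 1) for k in range(samples))
    ys = [f(x) for x in xs if x != h]
    return min(ys), max(ys)

# f(x) = x*sin(1/x) is undefined at h = 0, yet its lub-glb windows squeeze
# to the single point (0, 0): the "top borders" never increase and the
# "bottom borders" never decrease as the d-nest converges to 0.
f = lambda x: x * math.sin(1 / x)
windows = [lub_glb_window(f, 0.0, d) for d in (1.0, 0.1, 0.01, 0.001)]
glbs = [w[0] for w in windows]   # converge upward toward 0
lubs = [w[1] for w in windows]   # converge downward toward 0
```

Printing the successive (glb, lub) pairs shows the window frames converging to the interval [0, 0], i.e. to the limit-number 0.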
CG
From: "Joe Niederberger" <niederberger@comcast.net>
Sent: Sunday, November 11, 2012 9:16 AM
To: <mathteach@mathforum.org>
Subject: Re: How teaching factors rather than multiplicand & multiplier confuses kids!
> Excuse me, but I feel the need to clarify the post copied below. Clyde was
> saying his description of "zooming" and "convergence" (see below) was the
> "common sense view" of a continuous function, to which I say WHAT?
>
> The common sense view of a continuous function is as I state further
> below ... the graph of which I can draw without lifting the pencil. I think
> the conversion of that intuitive view into a definition involving limits
> was a great achievement for mathematics, and took some time to hit upon,
> nothing common sense about it. One must first struggle with Zeno's paradox
> to appreciate it.
>
> Joe N
>
> Clyde Greeno says:
>> But the essence is that: (1) students first must grasp that the line of
>> rational numbers is dense ... but also having a density of "irrational"
>> holes, and (2) also perceive how the "zoom in" squeeze on a function, at
>> each point within or outside its domain, converges to a (sometimes empty,
>> sometimes single-point, sometimes otherwise) "vertical" interval.
>
> I would say the common sense view is that a continuous function is one
> whose graph can be drawn without lifting the pencil off the page. What you
> describe is a mental picture to go with the more advanced understanding.
> All well and good and I wouldn't be surprised if someone has created a
> nice interactive computer animation to illustrate.
>
> Joe N

