There seems to be much confusion and quibbling about the meaning of "common sense" ... and many disputes, because of differing meanings for the phrase. As always, the only *true* meaning of a word or phrase is whatever is meant by whoever uses it to express their own ideas.
I confess to using "common sense" with an ad hoc meaning. Several projects on which I work call for communicating with the general public about how all of core-curricula mathematics (K-calculus) can be somehow learned through humanly natural, rational thinking (but not always by following the traditional curriculum). The best lay-language phrase that I have found for referring to that kind of thinking/learning is "common sense."
I have no quarrel with someone else's preference for using some other title for such thinking ... nor will I argue about what some others may choose to mean by the same phrase. What is important is whatever imagery is being expressed. Sure, by using more technical vernacular, within a far more elaborate exposition, it would be possible to give a far more precise characterization of such *mathematical learning* ... versus *non-mathematical learning* of the same "math points." But that monograph would run about 50 pages, and be read by only a very few.
Within that context, I often use the phrase, "common sense" to exploit *one* lay meaning for that phrase: "Anyone who knows [such and such] ought to be able to 'see' that [so and so]." In THAT meaning, "common sense" is about the natural processes of human *thinking* ... rather than referring to what people think about or what they already know. It is also about how heavily all humans depend on learning through personal reasoning.
True, effective *use* of THAT kind of "common sense" (in any real world setting) heavily depends on what one already knows, and on one's developmental state of maturity. But that kind of thinking is humanly natural, is essential to personal survival, and is normally exercised in only very rough forms. In highly refined form, it is the crux of mathematical comprehension and mathematical reasoning.
What I call "common sense" is badly violated by the traditional American core-curriculum in mathematics. That happens because, for students, THAT kind of "common sense" must take the form: "Any *student* who knows [such and such] ought to be able to *rationally perceive* [so and so]." The traditional curriculum is fraught with points at which very few students can progress, rationally ... and must instead progress, "socially."
The delta-epsilon criterion for a function's limiting-value at a point ... or its continuity at a point ... can be learned as a commonsensible *theorem* by all calculus-ready students, but NOT as a commonsensible *definition* of limits/continuity. Anyone who does as "Mr. Johnson" reportedly did, badly fails to teach mathematics as an art of rational learning. At the other end of the curriculum, "Miss Johnson" routinely does much the same thing with "place values."
Normally, very young children intuitively and naturally acquire and use internal "schema" that comprehend whole-scalar quantities and scalar operations with them. Their under-standings for those schema ... from which they derive those schema ... include their earlier growth in the semantics of *kinds of things* ... including their primitive perceptions of cardinal numbers. "Anyone who knows [those footings]" can use the same as under-standings for *rationally concluding* the same kind of 1-dimensional, "apples & apples" algebra that all normal children conclude.
From a mathematically advanced viewpoint, it is easy to recognize that the child is gradually acquiring the essentials of vector-algebra ... albeit, at the child's own level of cognitive development. [Vector algebra is not whatever college students experience in linear algebra courses. It is a cognitive theory about its own concepts and conclusions ... regardless of how much of it is taught, how it is taught, to whom it is taught, the degree of formality through which it is taught, the degree of refinement through which it is known, or the developmental or mathematical maturity of the students. Even the title is a variable.]
It is widely recognized that the child so under-stands, develops and uses schema which amount to personal "theories" about such quantities ... which is why all primary-level teachers rely on quantities of "apples". Unfortunately for all, primary school curricula do not go the further step of guiding children to abbreviate "four apples" to "4A" ... as needed for the child to later make common sense of "four tee" ... as the 4T meaning of "40."
In real life, guided by their linguistic experiences, children quite naturally extend their personal theories of scalar quantities ... into "knives, forks & spoons" theories of "combos" of quantities. Unfortunately, curricula rarely lead very young children to express their table-setting combos as "3K+4F+5S" formulas ... much less, to add/subtract such combos, or to multiply/divide them by numbers. The very young easily write, read, and use such formulas ... even when the "initials" have variable (kind-of-things) meanings. So, that omission is simply a curricular malady.
It is hardly surprising, then, that children who are deprived of that kind of curricular experience with combo-algebra commonly have much difficulty in interpreting "345" ... in its "place-value" meaning, 3H+4T+5S ... even though kids are trained to *pronounce* "345" as "3(hundred),4(tee),5." Without owning such poly-name-ial meanings, children commonly have extreme difficulty in *rationally concluding* the theorems about how to combine such combos. Nor are they later led to rationally conclude the hows and whys of "simplifying" such vectors as 3(4ths)+5(6ths).
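For readers who think in code, that kind of combo-algebra is easy to sketch. The following Python fragment is purely illustrative (the kind-initials K, F, S, H, T and the helper names are my own labels, not anything from a curriculum): combos are counters of kinds-of-things, they add kind-by-kind, and "345" evaluates as the combo 3H+4T+5S.

```python
from collections import Counter

# A "combo" is a multi-set of kinds of things: 3 knives + 4 forks + 5 spoons.
table_setting = Counter({"K": 3, "F": 4, "S": 5})
extra_place = Counter({"K": 1, "F": 1, "S": 1})

# Combos add kind-by-kind, exactly as children add "apples & apples".
both = table_setting + extra_place          # 4 knives + 5 forks + 6 spoons

# Multiplying a combo by a number scales every kind.
def scale(combo, n):
    return Counter({kind: n * count for kind, count in combo.items()})

two_tables = scale(table_setting, 2)        # 6K + 8F + 10S

# "345" read as the combo 3H + 4T + 5S (hundreds, tees, singles):
place_values = {"H": 100, "T": 10, "S": 1}
combo_345 = Counter({"H": 3, "T": 4, "S": 5})

def evaluate(combo, values):
    return sum(values[kind] * count for kind, count in combo.items())

print(evaluate(combo_345, place_values))    # 345
```

The point of the sketch is only that the "knives, forks & spoons" arithmetic and the place-value arithmetic are the *same* combo-arithmetic, with different kind-of-thing meanings for the initials.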
In the same vein, the traditional curriculum badly fails to present the delta-epsilon criterion as a rational derivative from what calculus-ready students already know. Despite its technical nicety (brevity), it conceptually is a lousy definition. When/if it actually is needed (e.g. for rationally deriving some formulas for derivatives or integrals), delta-epsilonics can be rationally developed as a fully commonsensible *theorem* ... AFTER the calculus-ready student already owns the limit/continuity concepts as personal common sense. Specifically, "Anyone who knows [the lub-glb concepts of limits/continuity, as under-standings] can rationally conclude [the delta-epsilon criterion]."
Recursively, "Anyone who knows [the usual pre-calculus material ... especially the real-ness of the real numbers] can rationally conclude [the lub-glb definitions of limits/continuity]." On the basis of that under-standing, students then can common-sensibly conclude the delta-epsilon criterion.
Some may ask how one "concludes" a "definition." The answer is that (contrary to our non-commonsensible curriculum) all *sensible* mathematical definitions are derived from yea/nay existence proofs: (1) First, establish that there exists a kind of (generator) things, all of which satisfy some specified conditions [Those "postulated" conditions constitute an *abstract* that comprises all of those generators, and perhaps an infinitude of other cases. That abstract *defines* that kind of things]; and (2) establish that there are other things that are somewhat like those of the first kind, but not fully so.
With that information, the role of definitions is simply to distinguish the generator-kind of things from all others. Its yeas are examples which fit that abstract. Its nays are counter-examples satisfying some of the defining postulates, but not all.
Without knowing of some such generators, in advance, their encompassing "definition" can have no meaning until a *spanning* family of examples is owned. But the commonsense mode of defining a concept is to derive it from some previously owned generators. Then, too, without prior knowledge of some contrary cases, the de-finit-ion fails to serve its delimiting purpose.
For purposes of developing under-standings for the common-sensibility of the concepts of limits and continuity, there is no better approach than through the pre-calculus comparisons and contrasts of functions that feature various kinds of discontinuities ... as well as continuities ... well before technically defining limits of functions.
Likewise, for imparting the common-sensibility of Hindu-Arabic place-values, familiar and useful counter-examples (which serve as under-standings) include: the (2-place) dollars-cents pairs; the (2-place and 3-place) digital clocks and timers; some ways of identifying dates; and some commonplace measurement systems [e.g. 5 yd., 2', 7" and 47°, 42', 13"].
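Those counter-examples share one mechanism: mixed-radix "place values," whose multipliers are not uniform powers of a single base. A small illustrative sketch (the function name is my own invention) that converts such mixed-radix quantities into their smallest unit:

```python
def to_smallest_unit(parts, factors):
    """Convert a mixed-radix quantity to a count of its smallest unit.
    parts:   the place-values, most-significant first,
             e.g. (5, 2, 7) for 5 yd, 2 ft, 7 in
    factors: the conversion factors between adjacent places,
             e.g. (3, 12) for 3 ft per yd and 12 in per ft."""
    total = parts[0]
    for part, factor in zip(parts[1:], factors):
        total = total * factor + part
    return total

# 5 yd, 2 ft, 7 in ... expressed in inches:
print(to_smallest_unit((5, 2, 7), (3, 12)))      # 211
# 47 degrees, 42 minutes, 13 seconds ... in arc-seconds:
print(to_smallest_unit((47, 42, 13), (60, 60)))  # 171733
# Hindu-Arabic "345" is the special case where every factor is 10:
print(to_smallest_unit((3, 4, 5), (10, 10)))     # 345
```

Seen that way, the yards-feet-inches and degrees-minutes-seconds systems are the "nays" that delimit the Hindu-Arabic system: same combo-structure, but without the uniform base-ten factors.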
For the benefit of those who doubt that "Anyone who knows [pre-calculus] can rationally conclude [the limit/continuity concepts]", the lub-glb "window" development is sketched, below.
&&&&&&&&&&&&&&&&&&&&&&&&&&& [Real-ness: For every partition of a line of real numbers, into two non-empty intervals, either the upper-part has a min, or the lower-part has a max ... never both, never neither: there are no gaps (the "both" case: as with integers), and there are no infinitesimal holes (the "neither" case: as happens among rationals).]
For a real-entries function that has real-number values, consider the following [this CS development generalizes to functions from within R^n into R^m]:
For each element, h, *near* to the domain of such a function, f, there exists a family of delta-neighborhoods about the point, h.
[A delta-neighborhood about a number, h, is any open interval centered at h, but with h deleted ... whence the label, "delta." The center's "nearness" to Dom f means that each d-neighborhood includes points of Dom f.]
That family is a "delta-nest" centered at h ... and that nest *converges* to h, as its center. [That center is not actually within any member of that nest. The reason for deleting the h is that, in case h is within Dom f, the (h, f(h)) point might get in the way of defining lim f at h. But for defining continuity of f at h, it makes more sense to leave h in the picture.]
[The term, "zoom" is used because all such students know about "zooming" the window/screens on computers. "Anyone who knows [about zooming], ...."]
Each delta-neighborhood about h identifies its own delta-part of the function, f ... classically called "the restriction of f to [that d-neighborhood of h]." The d-nest about h so identifies a corresponding nest of delta-parts of f. (If h is actually within Dom f, the (h, f(h)) point is not within any of h's delta-parts of f.)
Then consider what happens to the ranges of those delta-parts of f, while the d-nest converges to h.
[The following conditions are CS *theorems* that calculus-capable students can easily be led to conclude as personal *common sense* ... if they earlier grasped the pre-calculus instruction in real-ness.]
Over each d-neighborhood, the values of its delta-part of f might: (a) be doubly unbounded: no upper-bounds or lower-bounds; (b) be upper-bounded ... in which case it has a lowest upper-bound (lub) ... an "upper lid" for the graph of that delta-part of f; (c) be lower-bounded ... in which case it has a greatest lower-bound (glb) ... a "lower base" for the graph of that delta-part of f; or (d) be fully bounded ... in which case, that delta-part of f has both a glb (a "bottom border" for a "window") and a lub (a "top border" for a "window") ... where the lub of the values of that delta-part of f is never less than their glb.
On the graphing calculator, the h is the midpoint between the X Min entry and the X Max entry. Although the two Y-settings might be reset to approximate the lub and glb for that delta-window for that delta-part of the function, doing so would distort the graph. Far better to keep the screen set for a square grid, then in-zoom by keeping h as the horizontal center, and use the tracer to estimate the lub and glb. Drawing horizontal lines at those values will show the lub-glb window for that delta part of f.
If the function is fully bounded over some delta-neighborhood about h, it is fully bounded over every smaller delta-neighborhood about h. So, from there on, all zoom-in window frames fully capture their delta-parts of f. In the "zoom in" squeeze ... done by using ever smaller "diameters" of the d-neighborhoods ... the delta-parts' lubs may vary, but never increase; they converge, downward. Concurrently, the delta-parts' glbs may vary, but never decrease; they converge, upward.
So, when the d-neighborhoods are down-squeezed toward their center, h, the f-window is in-zoomed to ever-better reveal how f behaves, *near h*. As the d-neighborhoods *converge* to the point, h, the upper and lower "border values" also converge ... and the window-frames converge to an interval. The upper end of that interval is the *upper limit* of f, at h; the lower end of that interval is the *lower limit* of f, at h. If those are equal ... that number is *the limit-number for f, at h* ... and the windows converge to a single point. All else easily follows from there.
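The zoom-in squeeze is easy to simulate numerically. The sketch below is only an approximation of the lub-glb windows (finite sampling stands in for the true bounds, and all of the names are my own); it shows the window-borders converging at a point that is *deleted* from the function's domain:

```python
# A numerical stand-in for the lub-glb "windows" of the delta-parts of f.

def deleted_neighborhood(h, delta, n=10001):
    """Sample points of the open interval (h-delta, h+delta), with h deleted."""
    points = [h - delta + 2 * delta * k / n for k in range(1, n)]
    return [x for x in points if x != h]

def window(f, xs):
    """Approximate (glb, lub) of the values of f over the sample points xs."""
    values = [f(x) for x in xs]
    return min(values), max(values)

# f has a "hole" at h = 1, yet its windows converge there:
f = lambda x: (x * x - 1) / (x - 1)   # equals x + 1, except undefined at x = 1

for delta in (0.5, 0.1, 0.01, 0.001):
    glb, lub = window(f, deleted_neighborhood(1.0, delta))
    print(f"delta={delta:g}:  glb ~ {glb:.6f}  lub ~ {lub:.6f}")
# The glbs never decrease and the lubs never increase; both borders
# converge toward 2 ... the limit-number of f, at 1.
```

The printout shows exactly the behavior claimed above: as the d-neighborhoods shrink, the lower border climbs and the upper border descends, squeezing the window toward the single value 2.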
CG
--------------------------------------------------
From: "Joe Niederberger" <email@example.com>
Sent: Sunday, November 11, 2012 9:16 AM
To: <firstname.lastname@example.org>
Subject: Re: How teaching factors rather than multiplicand & multiplier confuses kids!
> Excuse me, but I feel the need to clarify the post copied below. Clyde was
> saying his description of "zooming" and "convergence" (see below) was the
> "common sense view" of a continuous function, to which I say WHAT?
>
> The common sense view of a continuous function is as I state further
> below -- the graph of which I can draw without lifting the pencil. I think
> the conversion of that intuitive view into a definition involving limits
> was a great achievement for mathematics, and took some time to hit upon,
> nothing common sense about it. One must first struggle with Zeno's paradox
> to appreciate it.
>
> Joe N
>
> ---------------------------------------
> Clyde Greeno says:
>> But the essence is that: (1) students first must grasp that the line of
>> rational numbers is dense ... but also having a density of "irrational"
>> holes, and (2) also perceive how the "zoom in" squeeze on a function, at
>> each point within or outside its domain, converges to a (sometimes empty,
>> sometimes single-point, sometimes otherwise) "vertical" interval.
>
> I would say the common sense view is that a continuous function is one
> whose graph can be drawn without lifting the pencil off the page. What you
> describe is a mental picture to go with the more advanced understanding.
> All well and good and I wouldn't be surprised if someone has created a
> nice interactive computer animation to illustrate.
>
> Joe N
> ---------------------------------------