Chipping Away at Mathematics:
A Long-Time Technophile's Worries About
Computers and Calculators in the Classroom*

E. Paul Goldenberg

When computers and hand-held calculators were first gaining wide currency in classrooms, their introduction was accompanied by great hope (and hype) about what could happen and also by dire predictions about what would happen. But because they were new, what would really occur was all a matter of speculation. Research has suggested some of the answers, but by any reasonable standards the technology is still new, and its effects are (quite naturally) still only partly understood. Over the more than 25 years that I’ve been involved with computers in mathematics learning, I’ve shed some old hopes and worries, but I’ve also developed new ones. Because much of the contribution to this conference focuses on the hopes and the research, I will (without being bleak or gloomy) confine myself to listing my new worries, worries that are not, to the best of my knowledge, yet well addressed in the research. I’ll also try to suggest ways, conjectural though they must be, of avoiding what I see as potential pitfalls.


Education is rapidly being suffused with large amounts of ever-newer electronic stuff. It is often said that this technology forces–I’d prefer "invites"–us to rethink what is important in mathematics education, what new opportunities we have and what old burdens we can now shed.

There’s a long history of pro- and con- claims about electronic computational technology. People talk of its power and flexibility, appeal, ubiquity in the "real world," the empowerment kids feel when they conquer it, the access it provides to previously inaccessible topics, and so on. Others counter with dire predictions that we will all forget how to do any thinking beyond pushing buttons. Some embrace the value of interactive software, the reduction of drudgery so that students’ attention can focus on the Big Ideas, and technology’s efficiency at promoting this or that kind of learning. Others issue caveats against autistic relationships with machines, computers replacing teachers, and spoon-fed screen-bites or sound bites replacing the depth of books. And so on.

From time to time, one also reads condemnations of the computer that say something like: "Computers have been in the schools for over a dozen years, they’re now everywhere, and our kids still can’t...." The logic is spurious. Books and blackboards have been around even longer.

Computers are nothing but what they’re used for, and what they’re used for changes constantly, with new capabilities and new tastes that develop in response to those new capabilities. Few teachers have both the technical adaptability to accommodate to the shifting hardware and software, and also the coherent vision necessary to develop a consistently evolving approach. The more intrepid enjoy the adventure and the state of constant innovation, but this multiply uncontrolled experiment is a nightmare for any formal evaluation of the impact of computers on education. The actual "treatments" vary all over the place. The teachers, both those who are enthusiastic about the technology and those who are not, remain constant novices; and even as we narrow our target by looking, for example, only at the use of a single piece of software, students’ tastes, experience, and expectations change right under our noses, as do the perceptions and demands of their parents.

There’s no doubt in my mind–and I suppose this is one of the reasons I was invited to set my thoughts down in writing–that the opportunities afforded us as educators by some of the currently existing computer tools are tremendously exciting. And as the technology becomes more powerful and more widespread, the possibilities will continue to grow. But after the more than twenty-five years that I’ve spent developing, promoting, researching, and writing about computers in mathematics learning, I still feel that what we actually know about students learning with computers is, at best, impressionistic. I’ve seen things (as different as Logo, the Geometric Supposer, and word-processing software) that make great face-value sense to me and that, handled well, are so obviously "right" in class that no fancy statistics are needed to prove the case. I’ve also seen classes that would have been better off had these tools not been used.

In preparing to write this essay, I tried to think what I could possibly say about technology in mathematics education that would be new to a group of readers who have, themselves, given great thought and attention to the subject, who have probably read what I’ve read, and who, furthermore, have invited many experts to distill the relevant collected wisdom and knowledge. I decided not to review research or to advocate for computer use–those bases would be covered–but to list potential pitfalls in technology use in mathematics education from the perspective of a long-time (and current) enthusiast.

On the face of it, the one thing that would seem certain is that having great tools gives us more choices about how to teach. But even in this there is a hitch. When lots of new choices are offered all at once, we are not necessarily prepared to choose wisely, and certain choices preclude others.

In this essay, I hope to show that some ways of thinking about technology may have the effect of narrowing our view of our choices instead of expanding it. In fact, there is a sense in which this essay is not about the technology at all, but about issues that were always present and not adequately recognized until technological advances rubbed our noses in them. It is also about seduction, and about directions taken without conscious decisions having been made.

The News in Brief

Capsule versions of this paper’s main points appear in the abstracts below. Details will follow later.

Mathematics, not technology, must lead

Discussions about computational technologies in the schools often stress the rapidity of change and how very hard it is for schools to scramble to keep up with the changes: new software makes old hardware obsolete, new hardware makes old software obsolete, new capabilities change people’s expectations. But this is a cart-before-horse perspective. The new computational technologies make certain things easier and other things harder. It is easy to get seduced by the possibilities, constrained by the limitations, and driven by the momentum. These forces are poor guides for educational change. Good decision making must keep technology the servant and not the master.

Ideas have more than one role

Without technology, certain computational techniques were indispensable in order to find answers. But here’s what’s often overlooked: Some of these techniques had, in addition to their basic computational function, other important benefits, benefits that remained largely invisible because they could be taken for granted. One didn’t need to think about the side benefits because they came "for free" with the required technique. Chucking a technique because technology has rendered its computing function obsolete may also mean chucking these "side benefits," resulting in troublesome gaps in students’ mathematical knowledge and understanding. The long division algorithm–often used as the example par excellence of foolish post-calculator teaching–is a case in point.

Empowerment requires control

With the old pedagogies, although most students passed their courses, many of them–even very smart ones–learned just enough to get by. Only a very small number developed what we sometimes call "mathematical understanding." Technology offers the lure of an alternative, by which students can gain access to important mathematical ideas without the protracted skills-acquisition period that used to be the only route and that, by many accounts, failed anyway. But are we making sure that the students whose parents couldn’t (or at least didn’t) master algebra will become true masters of their spreadsheets, dynamic geometry, and other computational technologies? Or will their electronic tool skills remain just barely passable, as were the algebraic skills of their parents, effectively replacing one set of barriers with another? What will actually happen, of course, is an empirical question we must wait to answer, but what we’d like to have happen involves a principled decision that we must actively make now.

Computational technology for learning vs. computations for work

Students and professional people bring different backgrounds to their use of technology, and they also bring different questions and needs. For engineers and business managers, it is often the particular answer to a particular question that is of primary importance. For students, the opposite is most often the case. Likewise, scientists and mathematicians interpret their calculation or technology-based experiment in the light of their background knowledge; students typically are building that background knowledge from the experiences they have with the technology. These differences suggest ways that the technology should and should not be used in learning.

Access or excess?

Technology makes some mathematical activities and problem domains much more accessible by removing the drudgery or supplying the computational brawn. Because of this, technology makes it possible for certain important mathematical ideas to take root in intuitive forms earlier than would otherwise have been possible, potentially laying a valuable foundation for the later formalization of these ideas. But sometimes the ideas are genuinely more subtle than they appear, and early access trivializes or distorts them.

Fallout of the information age

Computers can influence culture just by streamlining and amplifying what prevails. The greatest influence electronics has had on education is not anything done in a classroom, but rather the shift toward machine-scorable multiple-choice tests. This supports a school epistemology that gives very heavy weight to knowledge of facts. From this perspective, the SAT is the intellectual parent of "200 things your second grader needs to know," and computers may be abetting the forces against the stated goals of curriculum reform. The rush to the Internet and the push to get data analysis into the curriculum–both made possible only by the new computational technologies–further suggest that the current Zeitgeist focuses on gathered information, rather than on the systematization of ideas or the fostering and formalization of reasoning. Is this what we want?

Mathematics, not technology, must lead

"To add technology" is not a worthy goal.

It is not only in the K-12 arena that we hear the call to use technology and to integrate it throughout the curriculum. In a special session on reform in undergraduate mathematics education at the 1997 national Joint Meetings of the MAA, AMS, SIAM, and AMATYC, most of the presenters I heard listed "increased use of technology" as one of the principal goals of the reform their particular institution was making.

Certain uses of certain technologies clearly have tremendous potential for promoting mathematical learning, and so it makes perfect sense that efforts to improve mathematics education would give this new option a serious try. That involves making a commitment to the technology itself, putting effort into rethinking and redesigning pedagogy and curriculum, and performing an "experiment" (namely, the use of the new approaches and materials) long enough to see whether: (1) in practice, the idea does achieve the results we’d hoped for; (2) it continues to show high potential, but does not yet seem properly implemented and needs some debugging; or (3) it is, as near as we can tell, not such a good idea after all. It can therefore quite reasonably be the goal of a materials development project (e.g., Concepts in Algebra (Fey, et al., 1995), SIMMS (xxx cite), and MMAP (xxx cite)) or of a mathematics department to give technology a major, central role for the sake of performing the experiment. And the presenters at the Joint Meetings might have had much the same idea in mind when they called "increased use of technology" a "goal" of their reform efforts. Their language may well have been a kind of shorthand for the point I just blathered on about with much greater verbosity.

But I’m not so sure. To me, many conversations about education (not just about mathematics education, and not just about technology use) have a ring about them that leaves me unconvinced that the discussants have clearly articulated their basic values (goals) and distinguished them from the technique(s) that they theorize will best help them to achieve those goals. In mathematics courses, mathematical learning, not technology use, is the goal. Technology use is a means by which we theorize this goal might be well achieved, and though we have completely compelling case examples that show that certain technologies used in certain ways by certain teachers have certain salutary effects, broad claims that Use Of Technology improves (mathematics) education make no sense and are poor platforms upon which to build reform.

Discussions about how difficult it is to get teachers to use technology make it apparent enough that it is not pressure from them that brings technology into the classroom. Neither is it reasonable to conclude that the technology has been adopted by schools in response to education research. The low cost of the early ’80s’ micros made it easier for schools to buy computers, but what made it possible was not what made it happen. The great flood of chips into the schools happened because industry recognized schools as a vast new market, and sold the public and the schools through astute advertising, creating a widespread perception of need where none had generally been felt. And if government policy–the various mandates and funds for research and implementation–happens to be leading the schools, it nevertheless appears to be following the lead of economic interests.

Schools are put in the position of "coping" with changing mandates and changing technology. This is not a position of strength from which we might expect the best teaching to occur. Teachers who deeply understand the new technologies and their subject matter, and who were already good teachers, will perceive the new technologies as new opportunities. Good teachers who do not yet understand the technology so well but are "game" enough may be able to turn their new learner role to advantage in the classroom. But other teachers–even very good ones–who are uncomfortable with technology they don’t understand may lose some of their flexibility when forced to use it.

Opinion-summary 1: Good thinking needs all the help it can get. The goal is to help all students develop mathematical ways of looking at things and to provide the background that allows any students to develop advanced mathematical skill and understanding. Technology that supports this is desirable; technology with which we must "cope" is not.

Opinion-summary 2: It may be important for guidelines to state the amounts or nature of technology that have been empirically found to be useful in classrooms. Nevertheless, the focus of the guidelines must be on what is done with that technology, and must make clear how the technology serves goals that are independently well understood and accepted (not generated because of the existence of the technology).

SpaceDemon wipes Supposer!

There is another sense in which technology is driving change, perhaps in ways that we don’t want and certainly in ways that we, as educators, have not deliberately chosen. Computer capabilities grow fast–from UPPER CASE TYPEWRITER PRINT ONLY (not so long ago!), to varied print and line graphics, to fancy 3D graphics, to multimedia with full speed animation and sound, to the Internet–and people’s expectations grow accordingly. Tools like the original Geometric Supposer or Logo just plain look obsolete, even if their educational potential might be judged to remain unsurpassed. In a sense, technological advances have become the enemy of some of the original advantages of technology. For example, students who, back in the print and line graphics days, could have enjoyed the intellectual task of solving a problem and would have felt smart and powerful designing an algorithm to make the computer do what they want it to do–e.g., draw a complex figure–can now do these tasks without specifying an algorithm. The problem is solved, so they no longer get to solve it. Moreover, programs they could have been proud of look crude by today’s visual standards, making it hard for kids to buy into their own programming creativity. (On the other hand, it seems to be true, at least at the moment, that if you can let kids publish their program as a Web applet, it’s exciting again.)

And, though technology can be a strong motivator, that argument for technology use seems spurious to me. Money and candy are also strong motivators. Sex, too. It’s not unreasonable to think that the motivators are constantly upping the ante on what’s considered "cool," and tend, on the whole, to increase the need for still more motivation.

Ideas have more than one role

Formerly just for the "answer."

At the 1998 national meeting of the NCTM in Washington, D.C., a panel convened to discuss the role of algorithms in mathematics education. The moderator presented, at the beginning, a set of questions that asked the panel to define "algorithm," decide which ones (if any) were important for students to learn and when, and to discuss the compatibility of "algorithms" and "understanding." Despite some differences in perspective and style, the panelists all took the stand that it was important for students "to learn" (whatever that meant, but it remained undefined) "some algorithms" (whichever they might be, but they remained unspecified). It was the learning of some algorithms, and not what the algorithms were, that was of mathematical value. For the most part the audience seemed to remain skeptical of the importance of "learning algorithms," which were portrayed (sometimes even by the panel) as if "algorithm" were a synonym for "rote incantation," and as if learning algorithms was necessarily boring, uncreative, and antithetical to understanding. The long division algorithm, in particular, was given as an example of a useless piece of drudgery that it makes no sense to inflict upon students in an age when absolutely nobody performs such division calculations on paper any more. In a slightly different context, this argument dates at least as far back as the 1985 NCTM special conference on the impact of computing technology on school mathematics. That conference concluded, among other things, that because of computers and calculators, "proficiency in many familiar computational processes is of little value" (Corbitt, 1985, p. 246).

I began to feel out of place. Was my esteem for the long-loathed division algorithm an inappropriate conservatism merely defended by my practiced academic ability to find a rationale for almost anything that needed one, or was there really something to it? The way I saw it, the audience participant who said that there’s no point in learning a tedious algorithm for finding quotients that everyone uses a little black box to find was indisputably correct, but finding quotients was not the purpose I saw the algorithm serving. In fact, once the long division algorithm has been stripped of its role in finding quotients, the obvious role left for it is the antithesis of being the antithesis to understanding: it explains a process that is otherwise just a black box.

Repeating decimal expansion.

When might that matter? If one cares that students know that rational numbers have repeating decimal expansions–a matter that reasonable people could disagree about–then the most straightforward way to see why that is true may be by using the long division algorithm. At any step, division by an integer either terminates (the divisor "goes in exactly") or leaves a (non-zero) remainder. The number of different remainders is smaller than the divisor, and so if the division does not terminate, the remainders must eventually repeat. But since, in the lingo of the algorithm, we are eventually just "bringing down zeros," a remainder that has been seen before will create a dividend that has been seen before: the process gets caught in a loop, and so the sequence of quotient digits repeats. It takes only a clear sense of the "rhythm" of the division algorithm to show that in a division by q, it will take at most q-1 steps for the algorithm to repeat: the repeating part of the quotient’s decimal expansion will be at most q-1 digits long. The algorithm doesn’t guarantee that all q-1 steps will be needed (for example, all are required in the case of division by 7, but not in division by 3, 11, or 13), but it does guarantee that the repeating must occur. Moreover, this analysis is accessible to middle school students, and if we care that students know how the decimal expansions of rational and irrational numbers differ, then some analysis is essential if the idea is to be understood and not merely memorized.
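The remainder-cycle argument above can be made concrete in a few lines of code. The following is a minimal Python sketch–my own illustration, not part of the original discussion–that runs the long division digit by digit, watching for a remainder that has been seen before:

```python
def decimal_period(p, q):
    """Long division of p by q, one digit at a time.

    Returns (non_repeating_digits, repeating_digits) of the fractional
    part of p/q. The repeat is detected exactly as in the text: a
    remainder that recurs starts the loop.
    """
    seen = {}                   # remainder -> position where it first appeared
    digits = []
    r = p % q
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10                 # "bring down a zero" = times 10
        digits.append(r // q)   # the next quotient digit
        r = r % q               # remainder feeding the next step
    if r == 0:                  # the division terminated
        return digits, []
    start = seen[r]             # the loop begins where r first appeared
    return digits[:start], digits[start:]

decimal_period(1, 7)   # → ([], [1, 4, 2, 8, 5, 7]): all 6 = 7-1 steps needed
decimal_period(1, 3)   # → ([], [3]): far fewer than 3-1 steps
decimal_period(1, 4)   # → ([2, 5], []): terminates, no repeat
```

Since the nonzero remainders on division by q all lie between 1 and q-1, the dictionary can hold at most q-1 entries before a repeat is forced–which is exactly the bound argued above.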

Knowing the long division algorithm well enough to make this analysis doesn’t require great skill and speed. To that extent, it does not dispute a literal interpretation of Corbitt’s statement, but what most stands out in Corbitt’s statement are the words "of little value." In fact, understanding why things are as they are is not a matter of little value. The only thing that diminishes about the division algorithm in a world with calculators is its value for finding quotients.

Approximation and successive refinement.

Furthermore, explaining the repeating decimal is not the algorithm’s only residual value. Understanding how the process works–knowing the rationale for each step and being skilled enough to feel its rhythm without getting bogged down in each step–supports other goals still recognized in the mathematics curriculum. For one thing, the importance of ballpark estimates is increased in a world in which numbers can fly past us at great speeds. In data analysis, it is also often our goal to determine the accuracy of the estimates. The long division algorithm is, precisely, a cycle of estimating, determining the error in the estimate, and refining the estimate.

Connections with other mathematical ideas

By itself, this would be a poor reason to hang on to the long division algorithm, as there are certainly other opportunities to see the successive refinement cycle in action and one might reasonably argue that it should be seen where it arises in practice. But this value is not by itself, nor is an understanding of repeating decimals its only accompaniment. Minor tinkering with the long division algorithm leads to a variety of other processes of explanatory value.

To see how this can be, we start with a simplified schematization of the long division algorithm. Using the Function Machine software, this schematization can actually be implemented and run. To highlight some details, the algorithm shown here suppresses others. (It doesn’t show how a remainder is found by estimating a quotient, multiplying, and subtracting; instead, it uses a "machine" that just provides the integer quotient and remainder, "free.") In the illustration, the DIVIDE machine takes two inputs, a dividend (taken to be 11) and a divisor (7), and produces an integer quotient (1) which is recorded, and a remainder (4). The next step in the paper-and-pencil algorithm is to "bring down a zero," converting the 4 to a 40. On paper, this is symbol pushing in its most literal sense. In this live and enactable algorithm, I interpret "bringing down a zero" as "times 10." The resulting 40 must then be processed in exactly the way that we processed the initial 11. Dividing by 7 leaves a quotient of 5, which we record, and a remainder of 5, which we multiply by 10 and reprocess. And so on, as long as we want. This process records a string of digits–1, 5, 7, 1, 4, 2, 8, 5...–which, except for the absence of a decimal point, is the correct decimal expansion of 11/7.
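Readers without the Function Machine software can emulate the factory in a short Python sketch (my own illustration; the three "machines"–divide, times-10, and recorder–appear as the three lines of the loop):

```python
def run_factory(dividend, divisor, steps):
    """Run the schematized division factory for a fixed number of steps,
    recording the quotient digit and recycling 10 times the remainder."""
    recorded = []
    for _ in range(steps):
        q, r = divmod(dividend, divisor)  # the DIVIDE machine
        recorded.append(q)                # the recorder keeps the quotient
        dividend = r * 10                 # "bring down a zero" = times 10
    return recorded

run_factory(11, 7, 8)  # → [1, 5, 7, 1, 4, 2, 8, 5]
```

Run with a dividend of 11 and a divisor of 7, the factory records exactly the string of digits described above.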

Peter Braunfeld points out that the division algorithm, on paper or implemented as a function machine process, is also a reasonable place for a child to encounter infinite processes for the first time. The effort to write a simple rational number like 4/3 as a decimal leads via the division algorithm to an infinite (non-terminating) process. This, by itself, is mind-expanding. As children carry out the division for 4/3, they sooner or later recognize that "this will go on forever," and this can be the basis of a deep discussion about the nature of an infinite process, and indeed, the nature of numbers. This can be made concrete by thinking about the problem of sharing $4 equitably among 3 people, in a world where coins are infinitely divisible into submultiples of 10. The idea is lost, or at least obscured, when an algorithm that generates the successive digits one by one is replaced by the instantaneous appearance of a sequence of 3’s on the calculator. And analyzing when the algorithm terminates (or generates only zeros ever after) is as educational as understanding what happens when it doesn’t terminate.

Now let’s tinker with the algorithm. What happens if, instead of recording the quotient, we record the remainder? This new algorithm has the same three machines–an integer divider, a multiplier, and a recorder. As before, the plumbing feeds the output of the multiply machine back into the divide machine’s dividend hopper. The only difference is that it is the remainder that gets recorded while the quotient gets multiplied and recycled. The sequence of digits produced by this factory spells out (in reverse) the base-7 numeral for the input, first recording a remainder of 4, and then recording a remainder of 1. Changing the 7 to a 3 makes it a base-10-to-base-3 converter. In this algorithm, the multiply machine does nothing because of the 1 as its first input. So the factory could be simplified by feeding the quotient directly back to the divide machine’s dividend hopper, without going through any intervening machine at all. If we were to recycle the remainder directly back, but to the divisor hopper (and shift the former divisor to the dividend hopper), we’d have Euclid’s algorithm for finding the greatest common divisor. An analysis of Euclid’s algorithm leads to yet another algorithm, one that lets you write the greatest common divisor of two integers as an integral linear combination of the two integers. This is a valuable tool in problems like the "post office" problem, which asks students to figure out what values of postage can/cannot be composed only of stamps of two given denominations. It is also the "reason why" we have a fundamental theorem of arithmetic, the implicit result that young students learn when they make factor trees, and the secret to the algebraic structure of the integers. This kind of insight, one that establishes the truth of a statement by providing an explicit suite of algorithms for making it happen, is what mathematics calls a "constructive" proof. The similarity between this use of "constructive" and the epistemological one is striking. And analyzing what causes this algorithm to terminate and what initial conditions would keep it going forever is also rich in important mathematical ideas.
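The tinkered factories, too, can be sketched in a few lines of Python (again my own illustration; the function names are mine, not the software’s):

```python
def record_remainders(n, base):
    """Tinkered factory: record the remainder, recycle the quotient.
    Records the base-b digits of n, least significant first."""
    recorded = []
    while n > 0:
        q, r = divmod(n, base)  # the same DIVIDE machine
        recorded.append(r)      # now the remainder is what gets recorded...
        n = q                   # ...and the quotient is what gets recycled
    return recorded

def euclid(a, b):
    """Recycle the remainder into the divisor hopper instead:
    Euclid's algorithm for the greatest common divisor."""
    while b != 0:
        a, b = b, a % b         # old divisor -> dividend, remainder -> divisor
    return a

def ext_gcd(a, b):
    """The further analysis mentioned above: gcd(a, b) written as an
    integral linear combination. Returns (g, x, y) with g = x*a + y*b."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

record_remainders(11, 7)  # → [4, 1]: the remainder 4, then 1 ("14" base 7, reversed)
record_remainders(11, 3)  # → [2, 0, 1]: 11 = "102" in base 3
euclid(12, 18)            # → 6
```

For the "post office" problem, `ext_gcd` exhibits concretely which postage values are combinations of two denominations: any multiple of the GCD can be reached (allowing negative coefficients), and nothing else can.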

Long division’s structure and its focus on the "high order" end of numbers is also a foundation for understanding polynomial division. Indeed, computer algebra systems can be used to make explicit the structural similarities between the integers and the polynomials in one variable over Q by allowing students to implement in each ring exactly the same algorithms for long division, for computing GCD, and for the arithmetic algorithms that come from these (Cuoco, 1997). Once again, technology can be used to help students make mathematical abstractions, and the route to the abstractions is paved with the analysis of algorithms.
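The structural parallel can be seen concretely in a sketch of polynomial long division over Q. This Python illustration is my own (the representation–coefficient lists, highest degree first–is a choice of convenience, not a prescribed one); the point is that its loop is the same estimate-multiply-subtract cycle as the paper-and-pencil integer algorithm:

```python
from fractions import Fraction  # exact rational coefficients

def poly_divmod(num, den):
    """Polynomial long division over Q.
    num, den are coefficient lists, highest degree first.
    Returns (quotient, remainder) in the same representation."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        lead = num[0] / den[0]          # "estimate" the next quotient term
        quot.append(lead)
        # multiply and subtract, exactly as in the written layout;
        # the leading term cancels, so we drop it
        padded = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - lead * b for a, b in zip(num, padded)][1:]
    return quot, num                    # remainder has degree < divisor

poly_divmod([1, 0, -1], [1, -1])       # (x^2 - 1) / (x - 1) → x + 1, remainder 0
```

As with the integer algorithm, dividing x^2 - 1 by x - 1 records the quotient "digits" [1, 1] (that is, x + 1) with zero remainder.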

To reiterate, this does not mean that students must spend their entire elementary school career becoming virtuosos at dividing twelve-digit numbers by five-digit numbers. Nor should the many potential benefits of continuing to teach the algorithm be taken as supporting the attacks on NCTM for de-emphasizing long division. To the extent that the various algorithms constitute nearly 100% of the traditional elementary math curriculum, one could de-emphasize a lot and still leave enough. What my argument does suggest is that "de-emphasize" should not be interpreted as "dump." Dumping the algorithm altogether, while of little consequence to finding quotients, might actually matter elsewhere.

Opinion-summary 3: Many ideas have more than one role. Mathematics is a fabric of complexly interwoven threads; its ideas and topics are hard to tease apart. This is explicitly acknowledged in the calls for making connections, and the complaints about teaching subskills in isolation. We should never be surprised, then, to find that any particular idea or topic is useful in or applies to multiple domains. A narrow view of what some mathematical idea is for makes it easy to eliminate. Dismissing such an idea because one of its functions is no longer of value should be expected to cause mischief until shown otherwise.

By the way, it is certainly natural enough to perceive any fact or technique–in this case, long division–primarily in terms of only one of the functions it serves, especially if that particular function (in this case computation) could not be served in any other way. When technology takes over that function, we are tempted to dump the now apparently obsolete technique. Rethinking the subsidiary roles that a fact or technique may have is hard work because it requires anticipating dependencies that nobody ever needed to notice before.

Of course, not everything does have more than one genuine function. In Maier’s (1983) "Soundoff" in The Mathematics Teacher, he described the computer and calculator "revolution that has already eradicated the slide rule, doomed trig tables, and to borrow a phrase from a president of the Mathematical Association of America, will leave paper-and-pencil long division ‘as dead as the dodo bird’" and spoke of the choice facing educators: to try to resist that revolution, as he preferred, or to "put up with its discomforts and take advantage of its ability to vitalize school mathematics." But the three topics slated for dismantling don’t seem equivalent to me. The division algorithm describes the process that creates the repeating decimal; it is therefore a natural way to look at that phenomenon. Even the slide rule retains a value though the stretch is a bit greater: it is a perfect "visual aid" for showing how one can label things that one adds (lengths) in a way that suggests we are multiplying them. On the other hand, it would be an extraordinary stretch to imagine that teaching how to read trig tables remains the best way of doing anything we care about today. Skill in reading tables remains important, but skill in reading those tables, which used to be essential, is totally useless now. As long as we don’t, by a process of gradual erosion, remove all table-reading from the curriculum, we have lost nothing by eliminating these.

One might be tempted to poke fun at my defense of the division algorithm by pointing to the good mathematics one could learn by reintroducing into the curriculum the algorithm for finding square roots that I was taught as a child. In fact, there is good mathematics in many numerical methods–and numerical methods is a subject of value (to some) in a world that builds computational devices. But exaggeration is not helpful here. At the level of proficiency needed for understanding, and at the age level at which that understanding is needed, the division algorithm is useful and the square root algorithm is probably not.

Opinion-summary 4: This is not about algorithms; it is about technology. Some recommendations in the Standards are based on a particular epistemology or on research on learning. The recommendation to de-emphasize the division algorithm (and other such things) is not one of these: it is, at its core, a reaction to technological change. Just as we should be wary of unthinking conservatism, we should be wary of change that merely "responds to technology"; instead, we should ask ourselves what it is we value and how to use (or not use) technology to support our goals. Technology is not a good excuse to drop ideas.

I believe it’s prudent to be conservative about jettisoning ideas from the curriculum. To be sure, close consideration of alternatives and costs may yet show (despite my claims) that the long division algorithm is a pretty worthless piece of baggage, a burden to be scrapped with no tears shed. The point is that technology, by itself, should not be invoked to make such a case.

Learning numerical algorithms used to be the full-time career of elementary school students, and learning their algebraic counterparts the full-time career of adolescents. Such an expenditure of time and attention is hard to justify and must be reduced. So, if we claim, as did the 1998 panel on the role of algorithms, that there is a legitimate role for (at least some) algorithms in the curriculum, we must help a confused public (including teachers, parents, and students) understand what that role is. In particular, it is not just a hedge against dead calculator batteries, but more akin to a constructive proof. In the case of division, it is an analysis of the process of "constructing" the quotient of two given numbers. For this new purpose, proficiency is not to be measured in speed of execution, but in the ability to use, in this case, division of integers as a prototype for, illustration of, or explanation of other mathematical processes and ideas. This new purpose must be made clear in the Standards and must be made visible in the curriculum. That will, in turn, help clarify when not to use paper and pencil calculations but to turn, instead, to technology.

Opinion-summary 5: We must be clear about "appropriate use." Virtually everyone agrees that using a calculator to multiply by 10 or even to add 25 to 31 is an inappropriate use of technology. And virtually everyone agrees that using it to perform messy particular calculations, like finding what percent 3967.54 is of 4135.97, is fine. But whatever it is that allows us to make such a distinction must eventually be owned by students as well, so that our judgment (if it is sound) is not felt as yet another arbitrary school rule. To do so, the Standards, and eventually the curricula, must be equally clear about both the (intellectual/explanatory) purpose of the algorithms and the (computational) purpose of the machines. We might similarly define inappropriate use in algebraic manipulations. Perhaps expanding (a+b)^2 or graphing y = 2x-1 would be deemed "so easy" that you shouldn’t need technology to do it. We need some principles by which to decide what is and isn’t appropriate use.

If we are convinced that a particular algorithm or idea remains useful even after its (former) computational purpose has been robbed by technology, we might reasonably ask how technology might also help bring it new life.

The Function Machine software provides a wonderful laboratory for tinkering with and analyzing algorithms. Not only can one tinker with the structure of the algorithm (the wiring in the figures above) and begin to investigate relationships between different but apparently related mathematical ideas, one can also play with the parameters. In the division algorithm, I chose the multiplier 10 as a way to represent "bring down a zero," and we understand the meaning of the recorded digits for that multiplier. But what meaning do the recorded digits have when the multiplier is 2 or 7 or 8 and what happens to the notion of "bringing down" in those cases? In a similar vein, the multiplier in the base-conversion routine was an "uninteresting" 1. What meaning, if any, does it have? Is there a meaning that can be given to rational number bases? And, as we’re executing Euclid’s algorithm, what information do we get by recording the quotients or remainders along the way?
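The last of these questions can be explored directly with a runnable algorithm: Euclid's algorithm, instrumented to record the quotient at each step. (The sketch below is my own illustration, not the Function Machines representation; the recorded quotients turn out to be the continued-fraction terms of the ratio of the inputs.)

```python
def euclid_with_trace(a, b):
    """Euclid's algorithm, recording the quotient at each step."""
    quotients = []
    while b != 0:
        q, r = divmod(a, b)            # a = q*b + r
        quotients.append(q)
        a, b = b, r
    return a, quotients                # the gcd, plus the recorded quotients

print(euclid_with_trace(1071, 462))    # (21, [2, 3, 7])
# The recorded quotients are the continued-fraction terms of the ratio:
# 1071/462 = 2 + 1/(3 + 1/7)
```

Recording the remainders instead of the quotients yields a different, equally investigable trace; tinkering of exactly this kind is what the runnable form makes cheap.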

Truly important mathematical connections lurk in some of these questions, which are much more easily investigated by tinkering with runnable algorithms than with ones that just sit still on paper. While mathematicians can run these experiments without electronic supports, young students, in general, cannot.

Opinion-summary 6: Analyzing and exploring algorithms is an important part of mathematical thinking. Tools like Function Machines and programming languages let students construct models of the mathematics they’re learning; they let students build, apply, analyze, and tinker with the algorithms they are studying, and explore their interconnections, all "in vivo." Spreadsheets can also be used in this way, though more limitedly. Geometry tools like Cabri and Geometer’s Sketchpad can be seen in this way, too: as constructors of algorithms defined by geometric rather than algebraic or arithmetic operations. (Notably, calculators are not yet good technology for this purpose. Programmable calculators generally have such wretched interfaces and such severe limitations on one’s ability to structure and edit one’s programs that they are far better suited to the engineer, who already knows the algorithm and just wants to build and execute it, than to the student, who needs to develop and debug the algorithm, and then modify, explore, and understand it.) Such uses of technology can help bridge between abstract and concrete, and help students move fluently in both directions along that path, as needed.

"Algebra is Dead"

The two claims–(1) that ideas have more than one role and (2) that mathematics, not technology, must lead–have an interesting intersection in the case of algebra.

When a member of a meeting of curriculum developers claimed that "algebra is dead," the statement drew applause. "Algebra," in this context, seemed to refer broadly to the teaching of algebraic manipulation skill, and not merely to one or another method for doing so (e.g., some version of the traditional high school course). Perhaps it should be "dead," but, if so, nothing made that so obvious to me that I was ready to rejoice at its death. While school algebra seems to have been moribund for many years, it was not at all clear to me that pulling the plug rather than resuscitating was the right course of action. In fact, I wasn’t even so sure what was being declared dead. Was the body that had been buried and that we’d all seen dying for so many years really the one we’d thought it was? The real algebra, I began to think, had long ago slipped quietly out of our schools to some foreign country, where it is still quite alive and well, while whatever had been done in our schools was just a stand-in for algebra–an imposter.

Calls for changes to high school algebra in response to newly available computing technology date back at least as far as 1985, when an NCTM special conference on the impact of computing technology on school mathematics stated:

The skill objectives of algebra must be reassessed to identify those procedures more easily done by computer ... or calculator.... The properties of elementary functions are still important for modeling quantitative relationships, but proficiency in many familiar computational processes is of little value. (Corbitt, 1985, p. 246)

That sketchy statement was made while computational technology in schools was still crude by today’s standards, but the sentiment apparently remains quite current. The full argument will probably seem quite familiar to the reader.

If algebra is only for finding roots of equations, slopes, tangents, intercepts, maxima, minima, solutions to systems of equations in two variables–or any of the other numerical answers we’d want to know for applications outside of (and sometimes even within) mathematics–then cheap hand-held graphing calculators have made the subject totally obsolete. Dead. Not worth valuable school time that might instead be devoted to art, music, Shakespeare, or science. For all the numerical answers listed above, we do not need exact solutions (as if exact solutions could be found much of the time, anyway!), and we need no more algebra than is required for encoding the "real world" situation in a way that our black box can understand. At the push of a button, the box then calculates at a precision far beyond what is required for most practical purposes. We don’t need the distributive law; we don’t need to know how to factor; we don’t need to complete the square; we don’t need the quadratic formula. We do need to understand how to encode situations into suitable symbols for the machine, and we do need to know what question we will ask the machine to solve: what statistical tool to invoke, what part of a graph to inspect closely, and things like that. We do need some plausibility checks: crude estimation to account for miskeying, and some sense for the nature of an answer to avoid, for example, making reservations for 2.8409091 school buses. And, of course, we do need to know what the various new computational tools are, how to use them, and how to choose sensibly among them when we are trying to solve a problem.

The conclusions in the previous paragraph appear to be held by very many people–one encounters them frequently–but if the argument seems unassailable, note carefully that it rests entirely on its first word: If.

The finding of roots, intercepts, extrema, and the like does not (these days) require more than a small box and knowledge of how to use it, but that is not all algebra is for. How, then, did we apparently come to let ourselves act (or argue) as if it were? As with the algorithms, the mistake really seems pretty natural.

School algebra used to serve many masters, helping people model and solve practical problems in science, engineering, business, and so on, and also helping to describe structure, in particular, the structure of calculations. The kinds of problems that algebra was expected to solve outside of mathematics typically needed numerical answers, and so the primary function of algebra was to enable people to encode the real-world situation in a formal language and then manipulate the form of that encoding (an equation) until it revealed some hidden meaning that they needed (a solution, an extremum, a rate, etc.). Algebra was also asked at times to help people sketch graphs, but again by providing numerical information.

Some ten years ago, when graphing technology was being introduced in a major way in schools, my colleagues and I had done research (Goldenberg, 1988) showing that learners and experts saw different things in a graph, presumably because they brought different backgrounds and different fundamental questions to their observations. If graphing technology were going to be used to good advantage in mathematics classes, we concluded, devices designed for engineers could not simply be dropped into classes with curricula that were designed for a pre-calculator era. Instead, optimal use of graphing technology required changes in curriculum, pedagogy, and even the graphing technology itself. Our work (Goldenberg, 1992) listed thirteen specific "lessons" drawn from that research.

The kinds of changes to the curriculum that we recommended emphasized the importance of what we would later come to call certain "habits of mind" (see Cuoco, Goldenberg, and Mark, 1996; Goldenberg, 1996), but made the conservative assumption that the overall goal was to increase understanding of the mathematics we were then teaching, not to change the mathematical content of the courses. It also focused heavily on the aspect of algebra that I characterized above as "helping to describe structure." This is apparent enough from statements such as the following:

Our students faced the task of determining the three parameters a, b, and c, in the [function f(x) =] ax^2+bx+c. They discovered that a fixed the shape of the parabola. All that was left for the others to do was fix the position. Because they knew that c moved the parabola vertically, it was quite reasonable to assume that b moved it horizontally. This assumption got our students into trouble, but it would have been perfectly appropriate if the form in which they were to express their results had been a(x-b)^2+c. There is no best form, nor can we exhaust the set of essential forms. Curriculum must therefore draw attention to the issue: the form of a problem influences how we think about it. Altering the form of a problem can help us solve it. We should design curricula to make it very apparent to students that the purpose of an algebraic manipulation on some expression is to create a tautology: a new expression representing the same underlying object, but changed in form so as to reveal some property of interest. (Goldenberg, 1992)

That research focused squarely on students’ interaction with the graphing tools. None of us (at least not I) had ever anticipated the way that the graphing calculator was to affect the content of the curriculum. Here’s a hindsight conjecture about what happened.

When graphing technology was first introduced, it had been seen, by at least some, as a route into algebra, one of the "multiple representations" that would help students get a sense for what the symbols are all about. Schwartz (xxx cite) and Yerushalmy (xxx cite), for example, use the graphical approach to help students make sense of arithmetic operations on functions, function composition, and the various algebraic manipulations used for solving equations. Over time–about ten years–graphs became, for many, what algebra was about. Part of this may well have been the result of the success of the graphical approach, but that success was abetted by at least one other factor.

In order to use graphing tools at all, the functions must be expressible in an algebra that the tools understand. For most of the graphers, especially as they first began to emerge, that already restricted the domain seriously. Further, to use graphs to find numerical answers, the graphs (at least in the neighborhood of interest) must be continuous and well behaved. Both limitations accorded well with many uses outside of mathematics (e.g., in simple physics or economics), in which the imaginably accessible applications are modeled with continuous "simple" functions. Thus, the tool favored one kind of mathematical problem domain over another, elevating the extra-mathematical modeling role above the structure-of-calculation role. And in that favored domain, the tool also replaced algebra’s role as a method for solving equations (via suitable manipulations) and left only its role as a language in which to express equations. Algebra (as manipulation) is dead.

Opinion-summary 7: Technology can, without our necessarily noticing, change our focus. We may ultimately decide to shift schooling toward analytic functions and away from algebraic form but if so, we should be conscious of making a principled decision, and not merely accommodating the newest technology we have.

Remarkably–at least in these same extra-mathematical domains that the graphing calculator favors–we not only need little symbolic manipulation skill ourselves, but we mostly don’t even need those symbolic manipulations to be performed by the machines. That is, symbolic calculators are of less obvious utility (at least to students) than simple numeric calculators. The purpose of symbol manipulation is to change the form of an algebraic expression to reveal some previously hidden information. In the favored domains, that previously hidden information is generally numerical, but the technology enables us to get numerical solutions without attending to form at all. We have no need to manipulate the symbols, by hand or by machine!

Algebra’s applications inside mathematics, on the other hand, remain intact, as they focus less on numeric solution than on the expression, explanation, proof, investigation, and extension of mathematical phenomena. The graphing calculator cannot show, for example, that all odd numbers are the difference of two perfect squares, or even that the product of two odd numbers is odd. The symbolic calculator can help one perform the manipulations, but even that is of little value until one has figured out how to formulate the problem clearly and has an idea of what patterns to seek. That knowledge comes from studying the structure itself. Seeing that 9 divides 99 or 99999 or 9999999 is easy; showing a polynomial analogue, namely, that x-1 divides x^n-1 for very many values of n, is easy with a symbolic calculator; but proving that x-1 divides x^n-1 for all values of n requires seeing how the calculation works and arguing either through mathematical induction or informally from the structure of the calculation.
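The distinction between checking very many cases and proving all cases is itself easy to demonstrate: a machine can verify the divisibility for as many values as we like, and can even exhibit the geometric-sum quotient, but the finite check is still not the inductive argument. A sketch (the ranges are chosen arbitrarily):

```python
# A finite check that x - 1 divides x**n - 1, together with its quotient.
# (x = 10 gives the 9-divides-99...9 examples.) However many cases pass,
# the proof for all n still needs induction or the telescoping identity
# (x - 1)(x**(n-1) + ... + x + 1) = x**n - 1.
for x in range(2, 20):
    for n in range(1, 15):
        assert (x**n - 1) % (x - 1) == 0
        # the quotient is the geometric sum x**(n-1) + ... + x + 1
        assert (x**n - 1) // (x - 1) == sum(x**k for k in range(n))
print("every case checks; none of this is the proof")
```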

Are these intra-mathematical purposes–showing that the product of odd numbers is odd, for example–worth our attention and our students’ time? That is, of course, a matter of opinion, something about which reasonable people could disagree. But my own position is that without an understanding of how mathematics works–not merely how it applies–it becomes yet another incomprehensible tool, despite all our efforts to teach number, symbol, spatial, and data "sense." Without an understanding of mechanism, there is no real sense.

In fact, a focus on the pragmatic utility of mathematical results may work quite generally against the development of mathematical sensibility. If the value of some mathematical result rests in its utility, one hardly needs to understand why or how it works, or pursue the thinking to prove that it works. This supports an epistemology of facts, not reason, despite our intent to do otherwise. Technology similarly permits a how-to, button-pressing view of the discipline, already too deeply ingrained in popular culture.

Opinion-summary 8: The combination of certain computational technologies and an applications orientation may tend to elevate "how to" above "why" despite our best intentions. Without great care, this combination can complicate our job of teaching for understanding, meaning-making, and reasoning about why things work.

Opinion-summary 9: Mathematics is about abstraction. Its value in "real life" derives from the fact that it is not about real life, and therefore applies across wildly different (real life) domains. On the other hand, mathematical reasoning is not an alien thought-form. Mathematics is a part of real life, a refinement of everyday thinking. Mathematics extends real life. Applications of mathematics to mathematics are important.

Empowerment requires control

Teachers of writing rebelled long ago against a tradition of teaching the technical details of writing devoid of any communicative purpose. They held that focusing solely on the mechanics was stultifying, and it discouraged expression, which was, after all, the purpose of writing. The realization was important, and corrective, but the first response was, in many cases, a pendulum swing that worked its own mischief. Having important things to say and never being allowed to say them because of all the attention to minutia is defeating, but it is equally defeating to have important things to say and fail to communicate them for lack of skill.

I feel slightly "stupid" when I’m speaking French, not because I cannot hold a conversation–I can–but because what I can say is not as sophisticated as what I can think, and I can’t be confident that I’m understanding the subtle nuances in what others say. I suspect that one of the reasons that smart people feel stupid in math is quite similar. We who champion the use of technology frequently point out that students can think the big thoughts, even before they’ve learned the language, and that it is the traditional language (e.g., the traditional algebraic machinery) that leaves them apparently unable to function mathematically.

As evidence, we point to the many Big Ideas (e.g., the intermediate value theorem and the fundamental theorem of calculus) that are quite "natural" and can be grasped and honed even by relatively young students as long as The Language (algebra) does not get in the way. Technological alternatives–from programming languages to graphing calculators and from spreadsheets to elaborate special purpose software–have been proposed as alternative access routes to these ideas. But the hitch, as I see it, is not at the idea level. If it were, then switching languages would not so readily cure the problem.

The subtlety here is, in ways, like the issues about bilingual education; both sides of the debate have reasonable arguments. No way should students’ intellectual development be impeded by their lack of facility with one particular language when another language would allow it to flourish: software that gives access to important ideas before the machinery of algebra is fully developed is great. And no way should students be denied full fluency with the language(s) they need for discourse in their domain: in mathematics, this continues to include algebra.

What technology does bring us is new power. Spreadsheets, programming languages, dynamic geometry, and other tools join algebra as ways to model phenomena, describe and analyze mathematical structures, and represent and solve problems. As we provide these newer tools–and especially to the extent that we allow any of them to diminish students’ proficiency with the old tool–we must be certain that we are not presenting them at such a level of superficiality that they do not, in fact, empower the students to solve problems. This creates a real tension for me. If we teach students how to use a particular tool to solve a particular problem, but students do not learn enough about the tool to solve another problem that is equally (intellectually) accessible but (technically) very different, and for which the tool is again the most appropriate, we’ve not been very empowering. On the other hand, I’ve already lobbied for keeping the focus on the mathematics and not on the technology.

For me, the parallels to "learning algebra" are very strong. Algebra is of little or no intellectual value to those who can use it only to solve pre-formulated routine exercises, but it is of great intellectual value to those for whom it is a fluent and expressive medium. The same, I believe, goes for electronic tools. If programming or spreadsheets or dynamic geometry software or symbolic algebra systems are to be of intellectual value to students, the students must be able to use these media competently to represent and manipulate novel mathematical problems. As current curricula and frameworks go, I see no decision to provide students with such facility. Students see programs and may copy them and run them, but cannot themselves construct one to represent an algorithm they’re studying. With their geometry software, they cannot build their own models of interesting mathematical phenomena without our scaffolding the steps (or even the button-presses)–beyond the most elementary constructions, it is we who make the models we think the students should see. Students use spreadsheets to enter tables and make graphs, but typically learn very little of the incredible power in these tools.

Likewise, algebra can be approached exclusively as a long list of mechanical tricks–analogous to the menus to pull down and buttons to click–or as a collection of important ideas expressed in an orderly and relatively consistent way. The first perspective relegates it to a kind of "overhead"–little merit in its own right, but a necessary skill to have before moving on to the "real stuff," like calculus. Similarly, geometry software can be learned entirely as overhead–not part of the mathematics but part of the tool acquisition en route to worthy mathematics–or as itself an embodiment of a valuable set of ideas. For a simple example, constructing a parallel with geometry software requires three things: identifying a point, identifying a line, and specifying that a parallel is desired. Though the method by which these things are specified is idiosyncratic to the piece of software, the three things themselves are mathematically necessary. Depending on the goals of the class, the teacher might connect this, for example, with point-slope specifications of a line or with Euclid’s parallel postulate.

To the extent that we regard these new media of mathematical expression as adjuncts to algebra, we should be as eager to get students good at them as we used to be about getting students good at algebra. If we would strive to teach algebraic technique only through intellectually worthy exercises, and use it in service of intellectually worthy problems, and if we would strive to use worthy problems and important mathematical ideas in service of learning more algebraic technique, perhaps the same could be said about learning the technical aspects of spreadsheets, geometry software, or programming.

Opinion-summary 10: Students were not masters of the old tools (algebra). That lack of competence hampered them and left them without power. It is no favor to give them new tools that they do not master either. For a limited set of the most powerful tools, then, we might treat the technology as a goal as well as a technique, just as algebra is a goal as well as a technique. If we approach it in ways that are not intellectually empty, it becomes its own worthy content. We should then find systematic ways to develop competence with the technology over the grades–competence that builds from grade to grade and allows students, by high school, to use the tools easily and appropriately to solve non-routine problems that match their intellectual and mathematical development.

If we take this approach–deciding to take the new technologies quite seriously and treat spreadsheets, programming, and so on the way we’d formerly treated algebraic and geometric technique, and build moderately advanced competency with them–we then have to decide exactly which new technologies to teach. In a rapidly shifting world that already contains zillions of tools, how could we make such a decision? I find it helpful to think of certain kinds of electronic tools–word processors, programming languages, drawing tools, CAD, spreadsheets, symbolic algebra systems, dynamic geometry software, and so on–as "idea editors." Each of these tools is specific to a particular idea domain–prose writing, algorithm development, artwork or design, computations on arrays of numbers, and so on–but is moderately broad and flexible in that domain. And as a class, these tools provide their own domain’s equivalent of a blank canvas, and provide no agenda of their own. It is up to the user to enter content ideas in accord with their agendas, and then tinker with that content–manipulating, experimenting, editing, debugging, and adjusting it–until it "works" for them, meeting criteria they themselves impose. The tools may have a great deal of domain-specific knowledge–for example, spelling and grammar in the word processors, algorithms for generating closed forms of sums in a symbolic algebra package, or debugging aids in a programming system–but that knowledge is intended as a tool for the user to apply, not as content for the user to learn.

Unlike tools developed primarily for teaching the content, these idea-editors do not become obsolete after the content is learned. They remain expressive media for all problems in their domain. Within reasonable limits, learning how to use these tools is learning about the writing, mathematics, art, or other domain of the tools.

Computational technology for learning vs. computations for work

Different questions

When engineers and scientists use graphers, they are often interested primarily in the behavior of a particular function. Although students, too, must deal with particular functions, most of the educational value is in the generalizations they abstract from the particulars. The shape of -2x^2+30x-108 is of no educational consequence, but it may serve as a data point about any of several broad classes: a particular family of quadratics (e.g., ones that differ only in the constant term); more generally all quadratics; still more generally all polynomials or even all functions. (Goldenberg, 1992, pp. xxx)

For learners, any particular problem tends to be just an illustration of something more general that is the real object of their learning. The "answer" at the end of that problem is, at best, a check on their handling of the ideas or techniques and, otherwise, is of no importance at all: understanding a structure, process, generalization, way of thinking, or other organizing idea is the true goal. If technology provides answers too quickly and with the process and intermediate results suppressed, students may never see the structure that lies behind the computations, and thus may miss the important part. The production of 3s, one by one, in the decimal expansion of 4/3 is one example. The way in which (x-1)(x^7+x^6+x^5+x^4+x^3+x^2+x+1) collapses to produce x^8-1 is another example. Just seeing the "answer" doesn’t provide much insight, but seeing the intermediate results does.
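The collapse of that polynomial product is exactly the kind of intermediate structure worth making visible: written as rows of coefficients, the two partial products cancel everywhere except at the ends. A sketch in Python (coefficient lists in ascending powers; the variable names are mine):

```python
# Multiply (x - 1) by (x^7 + x^6 + ... + x + 1) as coefficient lists,
# keeping the two intermediate rows visible so the cancellation can be seen.
ones = [1] * 8                             # 1 + x + x^2 + ... + x^7
row_x = [0] + ones                         # x * (1 + ... + x^7): degrees 1..8
row_minus1 = [-c for c in ones] + [0]      # -1 * (1 + ... + x^7), padded
product = [a + b for a, b in zip(row_x, row_minus1)]
print(row_x)                               # [0, 1, 1, 1, 1, 1, 1, 1, 1]
print(row_minus1)                          # [-1, -1, -1, -1, -1, -1, -1, -1, 0]
print(product)                             # [-1, 0, 0, 0, 0, 0, 0, 0, 1], i.e. x^8 - 1
```

Only the final row is "the answer"; the insight lives in the two rows above it.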

Opinion-summary 11: In deciding when to use technology to reduce the work involved in a computation, one reasonable test is whether the particular computation is a distracting step in the midst of a process that is being studied, or whether it is, itself, the process to be studied.

Experimenting requires background.

Scientists and engineers often use software simulations in order to perform a virtual experiment that cannot readily be performed in physical reality. Software for students is sometimes justified in the same way, but I worry about such uses of software simulations in place of reality for students.

For one thing, when scientists, mathematicians, and engineers use simulations to investigate some phenomenon, they bring considerable background knowledge and a robust understanding of the model behind the simulation. They build their models and see how they play out, much as we might see what theorems follow from a set of postulates, and they need the computer only because the complexity of the model makes it computationally intractable.

This is not to say they may not be deeply surprised at the outcome: emergent phenomena, unpredicted global or local behaviors, and so on. In fact, it is evidence of their knowledge and understanding that they can be surprised, and if their knowledge-based expectations are sufficiently jolted, they may go back and check their model for bugs. And when they are surprised, they trust the outcomes not because The Computer says so, but because their model–one which they understand deeply–says so. The authority is in themselves, and the computer merely augments their calculation ability.

The situation is quite different for students. For one thing, students often lack the knowledge to be surprised. (I’m reminded of the time I’d had some elementary students investigate the pattern 1, 1+3, 1+3+5, 1+3+5+7, etc., and found that they were not at all impressed to find 1, 4, 9, and 16 coming out as answers. After all, these sums had to have some answers, and so why not 1, 4, 9, and 16? The latter are only surprising if they seem like old friends found in an unexpected place. But if you aren’t yet sure where your friends hang out, you can hardly consider their occasional appearance unexpected.)

Moreover, today’s students have grown up in a world in which computers can simulate, in convincing 3D, invasions by bizarre extraterrestrial creatures. That is, they can "simulate" anything the programmer wanted them to. The models the students investigate are rarely their own, and they are just acquiring the background knowledge for the first time, not bringing it to a critical evaluation of some structure that was assembled from that knowledge. Together this suggests that the student should never be surprised!

To me, the value of performing an experiment (as opposed to looking up what happens when such an experiment is performed) is that one’s own reason is arbiter of the facts. The facts one derives are facts not because an authority claimed they were, but because one’s own mind interprets reality in that way. A naive student’s "experiment" with a simulation is therefore not obviously much different from looking up the answer: it may be more "convincing," just as the alien invasion is "convincing," and it may be more fun but, without understanding the model, it is simply a case of Truth Because The Computer Says So. For this purpose, I’d rather have a book and a teacher! Only when students are in full control of their models–as when they program, or when they build a geometric construction from scratch–are they able to investigate the models as does the scientist.

The interplay between reason and experiment

There can be no question but that learning to interpret data is of great practical utility, and that the information age has vastly increased the amount of data we regularly face. The computer has also simplified the calculations involved in the analysis, and it provides numerous options for experimenting with and visualizing the data. But the underlying ideas remain remarkably subtle.

For example, analyzing data often involves looking at what did occur and assessing the probability of such a thing occurring by chance. But even some of the most elementary ideas in probability are hard to present without an element of Just-Believe-Me that reform is, in general, trying to reduce in instruction. Consider the ubiquitous penny-toss in elementary school. Kids are asked to toss a penny, say, a hundred times and count the heads. They either do or don’t have expectations about the outcome. If they do have expectations (a theory), those expectations probably (pardon the pun) are that the outcome should be about 50 heads. So what do they do with the experimental results? If the observed number of heads is not 50, is that disconfirmation of the theory? And, in fact, wouldn’t we be a bit surprised if most students in the class did get 50 heads and, if it would surprise us, should we not want it to surprise them? And if one should be surprised by getting what one expects, then what does "expect" mean? But we use words like this all the time with students, and with no clarity at all behind them.
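The tension can be made precise. A back-of-the-envelope computation (my addition, certainly not something the penny-tossing lesson asks of children) shows that the "expected" outcome of exactly 50 heads is itself fairly rare:

```python
import math

# Exact probability of exactly k heads in n fair tosses: C(n, k) / 2^n
def p_exactly(k, n):
    return math.comb(n, k) / 2 ** n

# Even though 50 is the single most likely count, it occurs in
# only about 8% of hundred-toss experiments.
print(round(p_exactly(50, 100), 3))  # 0.08
```

So a class in which most students did report exactly 50 heads should indeed arouse suspicion, even though 50 is exactly what each of them "expects."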

And what is the meaning of this experiment, anyway? Is it to determine the probability as is the case in, say, tossing a shell 100 times in order to determine the probability of "open side up," or is it to accept or reject a theory? If we do have expectations, as with the penny, we let the theory dominate and excuse minor deviations. If we do not have expectations (as with the shell), we must somehow let the data dominate, but how do we interpret those data? Suppose the penny falls heads up 46 times out of the hundred: do we then conclude that the probability of heads is 0.46? Not in the case of the penny: we let the theory dominate and accept 46 as confirmation of the hypothesis that the penny is a 50-50 object. But in the case of the shell, we might feel less certain. Why? And what if we’ve thrown the penny 10,000 times and get 46% heads? At that point we suspect something’s wrong with the penny, that it does not behave in accord with our hypothesis–we let the data dominate and change the hypothesis (for this penny). At precisely what number does the balance shift? That is, how does the student decide when theory dominates over observation and when observation should be used to construct a new theory? There is wonderful mathematics behind these questions, but it is subtle and deep and is not taught, even in high school. Yet these experiments are performed, even in elementary school. What mathematical sense can children make of this?
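Where the balance shifts can in fact be quantified. The sketch below (my illustration, using the standard normal approximation to the binomial, not anything the text proposes for children) shows why 46% heads is unremarkable in 100 tosses but damning in 10,000:

```python
import math

def z_score(heads, n):
    """How many standard deviations the observed count lies from the
    fair-coin expectation of n/2 (normal approximation to the binomial)."""
    return (heads - n / 2) / (math.sqrt(n) / 2)

def two_sided_p(heads, n):
    """Probability, under the fair-coin model, of a deviation at least this large."""
    return math.erfc(abs(z_score(heads, n)) / math.sqrt(2))

# 46 heads in 100 tosses: less than one standard deviation off -- ordinary.
print(round(z_score(46, 100), 1), round(two_sided_p(46, 100), 2))  # -0.8 0.42

# 4600 heads in 10,000 tosses: eight standard deviations off -- reject the model.
print(round(z_score(4600, 10000), 1))  # -8.0
```

The same 46% that confirms the 50-50 hypothesis at n = 100 demolishes it at n = 10,000; the "balance" is not a magic number of tosses but a function of how the spread of chance outcomes shrinks as n grows.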

Of course, the subtleties of probability are not the fault of computers or calculators. Understanding probability, even with pennies, involves either accepting the book’s claims on faith, or making sense of a complex interaction between a theory that involves limits and experiments that are finite. But technology (or, anyway, some of the choices that have been made about using it) has allowed us to engage students in yet more mathematical activities whose foundations remain obscure or mystifying. For example, the use of statistical modeling tools in curricula–curve fitting is a particularly common one–is increasing as these tools become more widely available on calculators. At the press of a single button, calculators allow students easily to perform the curve-fitting computations (but not see how they’re done) and to use the resulting curves to make predictions, but the subtleties are swept under the rug. This may well be working against the goals of reform.
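What the one-button fit conceals is not even very long. For the simplest case, the ordinary least-squares line comes from two short formulas; a sketch (my illustration of the textbook formulas, not any particular calculator’s code):

```python
def fit_line(xs, ys):
    """Ordinary least-squares line y = slope * x + intercept.
    This is the computation hidden behind a calculator's regression button."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Made-up data lying near y = 2x + 1:
slope, intercept = fit_line([0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8])
print(round(slope, 2), round(intercept, 2))  # 1.94 1.09
```

Nothing here is beyond high-school algebra; what is subtle is why these particular formulas, what "best fit" means, and when fitting a line (or any curve) to the data is a sensible thing to do at all.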

More generally, if computational technologies do not actually increase the importance of the interplay between "experiment" and reason, they certainly increase the frequency and visibility of that interplay. For example, students multiplying 51.2 times 32.4 on a calculator see the answer, but must know enough about the nature of the computation (including, but not limited to, "number sense" and approximation skills) to check the reasonableness of the calculation, if only to be sure they’d pressed the right buttons. Similarly, they may see a particular fact as an experimental result (like the concurrence of perpendicular bisectors of the sides of a triangle) but their mathematical gains result not so much from their acquisition of that fact as from their explanation of it (a point that is equidistant from A and B, and also from B and C, must be equidistant from A and C) or the connections they make to it (e.g., A, B, and C must therefore lie on a circle whose center is at the concurrence).
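The reasonableness check in the first example amounts to bracketing the product with easy round numbers, the kind of move one hopes becomes automatic (a trivial sketch; the bounds are mine, chosen for mental convenience):

```python
exact = 51.2 * 32.4
low, high = 50 * 32, 52 * 33   # easy mental bounds on the product
print(low, round(exact, 2), high)  # 1600 1658.88 1716

# A wildly wrong keypress (say, 512 * 32.4) would fall outside the bracket.
assert low < exact < high
```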

The computed results in the two examples above are of different types. The concurrence of bisectors is a fact of almost no importance by itself, but it has genuine importance as part of a system, and is perhaps of greatest value as "grist" for the reasoning that builds that system–a phenomenon that needs logical explanation. The product of 51.2 and 32.4 has no worth at all beyond the immediate context: it is not a fact to remember and isn’t even worth explaining, unless the current object of study happens to be multiplication. But in both cases we’d want students to develop the habit of mind of asking themselves if they believe the result, and pursuing the computation far enough to be sure.
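The geometric fact, too, can be "pursued far enough to be sure." A sketch (my construction, using the standard coordinate formula, not taken from any dynamic geometry package) computes the point of concurrence directly and checks the equidistance argument:

```python
import math

def circumcenter(A, B, C):
    """Intersection of the perpendicular bisectors of the triangle's sides,
    via the standard determinant formula."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

A, B, C = (0, 0), (4, 0), (0, 3)
O = circumcenter(A, B, C)
# Equidistant from all three vertices: A, B, and C lie on one circle about O.
radii = [math.dist(O, P) for P in (A, B, C)]
print(O, radii)  # (2.0, 1.5) [2.5, 2.5, 2.5]
```

The dynamic-geometry screen shows the concurrence; the equal radii are what the deduction (equidistant from A and B, and from B and C, hence from A and C) predicts.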


Emphasize "idea editors"–word-processors, programming languages, dynamic geometry environments, spreadsheets, symbolic algebra systems, and the like–over content-providers. Drill and practice are useful (if done well) but are not the unique contribution of technology and do not represent a genuinely new opportunity.

Throughout the schools, attempts to "round out" students work against the development of depth or specialized interest and expertise: there’s no time with all that’s going on. The TIMSS report tells us that even within mathematics, there’s too much dabbling and not enough focus. As we narrow and deepen the mathematical content, let us consider doing the same for the mathematical tools students use. Let students become experts at the tools, not just limping dabblers.

Be wary of uses in which technology supplants rather than supplements or helps us develop skills we used to value. Perhaps sometimes it should, but the case must be made on grounds other than technological capability alone.

Despite all my calls to be cautious–which I fully mean–it is clear that curriculum should change. I recall a time when we were excitedly showing the then-new Geometer’s Sketchpad to a colleague who had been using the Geometric Supposer happily and successfully for several years in her teaching. Her reaction was most disappointing: "If I want my students to study properties of squares, this thing is terrible. I can’t even get a square without first going through a lot of trouble." The old content is easier with one technology than with the other. On the "don’t lightly scrap" principle, we might reasonably choose the Supposer. On the "this technology has tremendous potential if we tailor the curriculum to what it does best" principle, we might follow a different path. Advantages (and pitfalls) lie in each direction. Fortunately, it seems quite reasonable to suppose that the depth and coherence and thought (habits of mind) with which a piece of content is pursued matter far more than the particular choice of that content. New technologies, because they’re different, get us to rethink things, sometimes to good effect.

References (not all used in the paper to date)

Corbitt, M.K. (compiler and editor) and NCTM Conference Steering Committee. 1985. The Impact of Computing Technology on School Mathematics: Report of an NCTM Conference. Mathematics Teacher, 78(4):243–250.

Cuoco, A. 1997. Constructing the Complex Numbers. Int. J. Computers in Mathematics Educ., 2:155–186.

Cuoco, A. and E. P. Goldenberg. 1996. A role for technology in mathematics education. Journal of Education, 178(2):15-32.

Cuoco, A., Goldenberg, E.P., and Mark, J. 1996. Habits of mind: an organizing principle for mathematics curriculum. Journal of Mathematical Behavior, 15(4):375–402.

Fey, J. T., Heid, M. K., Good, R. A., Sheets, C., Blume, G. W., and R. M. Zbiek. 1995. Concepts in Algebra. Chicago: Everyday Learning Corporation.

Goldenberg, E. P. 1988. Mathematics, Metaphors, and Human Factors. Journal of Mathematical Behavior, 7:135–173.

Goldenberg, E. P. 1991. A Mathematical Conversation With Fourth Graders. Arithmetic Teacher, 38(8):38-43.

Goldenberg, E. P. 1992. The difference between graphing software and educational graphing software. In Demana, F., and B. Waits (eds.), Proceedings of the Second Annual Conference on Technology in Collegiate Mathematics. Addison-Wesley, 1991; co-published in Zimmerman, W., and S. Cunningham (eds.), Visualization in Mathematics, Math. Assoc. of America.

Goldenberg, E. P. 1996. ‘Habits of mind’ as an organizer for the curriculum. Journal of Education,178(1):13-34.

Maier, G. 1983. We have a choice. Mathematics Teacher, 76(6):386–387.