In any case, the idea seems to be that the mathematical
underpinnings and/or pedigree and/or resume of computer
programming languages must first be securely established, as a justification or litmus test for introducing
them in a mathematics classroom. I consider that whole
line of thinking somewhat bogus.
Robert was saying (to paraphrase): just because algebra books write f(x, y) = x + y doesn't mean a computer
language's def f(x, y): return x + y is relevant, because
the former represents some high-minded mathematical
entity while the latter is "just a subroutine", really about talking to the guts of a computer, and so not
appropriately "mathematics" at all.
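For concreteness, here is the parallel in question, sketched in Python (the names are just the ones from the paraphrase above):

```python
# The algebra book writes:  f(x, y) = x + y
# The Python version, line for line:
def f(x, y):
    return x + y

print(f(3, 4))  # 7, exactly as the textbook definition predicts
```

Whatever one thinks of the "just a subroutine" objection, the two notations are doing the same work here.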
I pointed out then, and point out again, that we long ago accepted that the "function keys" on a calculator are close enough to what's in a math book to allow cos, sin,
tan, exp, log and many more, without all these justifications and caveats, or proofs that calculators had a right to advertise the "mathematical" nature of those keys.
As it happens, on a computer we might want to compute (1 + 1/n) to the nth power to more
significant digits than a Casio or Sharp or TI allows, say to 300 decimal places or so, just for
the fun of it. Let's set our computation to 500 significant decimal digits. Is such innocent play to be haughtily discouraged by snooty "math purists" because (1 + 1/n) to the n is only "math" if done on paper or with a
calculator, but "not math" if done in some bastard-child-of-mathematics "computer language"? I think not.
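A minimal sketch of that innocent play, using Python's standard decimal module (the choice n = 10**100 is just an illustration; any large n brings the result close to e):

```python
from decimal import Decimal, getcontext

# Work to 500 significant decimal digits, far beyond any
# handheld calculator's display.
getcontext().prec = 500

n = 10 ** 100
approx_e = (1 + Decimal(1) / n) ** n  # (1 + 1/n)**n at high precision

print(str(approx_e)[:40])  # the leading digits of e emerge
```

No BNF or lambda calculus required: a few lines, and the calculator's ten-digit ceiling is gone.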
On a computer, we might program a Taylor expansion of something and actually do some computations with it, out to 30 or so terms in a looping construct, whereas on a calculator we would find this tedious,
unless, drum roll, it were a *programmable* calculator. Tiny screen, tiny language, but hey, we're really computer programming now (a calculator with a
programming language is just a special purpose
computer in my book).
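The sort of exercise described above might look like this, summing the Taylor series for cos(x) out to 30 terms in a plain loop (the function name and term count are just illustrative choices):

```python
import math

def taylor_cos(x, terms=30):
    """Sum the Taylor series for cos(x): sum of (-1)**k * x**(2k) / (2k)!"""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
    return total

print(taylor_cos(1.0))  # agrees with math.cos(1.0) to many places
```

Tedious on a non-programmable calculator; a few readable lines in a general purpose language.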
So why not allow (nay, encourage) a bigger screen and a more capable general-purpose language?
Because the latter might not be "mathy" enough?
That just sounds like a lame excuse to my ears, and I see no need to justify introducing a more-capable-than-calculators technology on the basis of BNF or the lambda calculus or whatever. I see no need for so abstruse and esoteric
a rationale to defend the relevance of computing
technology to the math classroom. Computing
technologies go way back in mathematics, to at least the abacus (circa 3000 BC). I think it's a bit
late in the game, here in 2014, to be erecting
artificial walls against computer languages as
"not mathematical enough".
Which is why I never lent much credence to these "history of computer science" arguments as a justification for such a move. Robert turned out to be supportive provided we just finished Algebra first,
before turning to such texts as the Litvins' "Mathematics for the Digital Age" (which includes dot notation).
So it was never really about "subroutines" not being
"mathematical" enough. That turned out to be a red herring, and therefore so is much of this thread,
in terms of the argument it proposes to be making.
We don't really need a lot of esoteric "history of computer science" justifications of this nature, this late in the game. Common sense will do for the most part, in this context.