In article <email@example.com>, "R. Srinivasan" <firstname.lastname@example.org> wrote:
> > You can do this without thinking about the set of all irrational
> > numbers, just as you can prove the square root of two is irrational
> > without thinking about the set of all rational numbers (heck, Euclid
> > pulled that one off).
>
> No, he didn't. You are talking about Euclid's proof by contradiction
> where he starts with the hypothesis "Let the square root of two be a
> rational number r = m/n, where m and n are integers that are mutually
> prime" and then arrives at a contradiction. Note that r is an
> *arbitrary* rational number (which can equal any member of the infinite
> class of rationals). What has been demonstrated is that in a given
> enumeration of the rationals, the square root of two is not equal to
> the first rational, and if it is not equal to the n'th rational, it is
> not equal to the (n+1)'th rational. More generally, the square root of
> two does not belong to the class of all rationals.
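For reference, the contradiction the quoted hypothesis leads to can be filled in as follows. This is the standard modern rendering (the proof in the Elements is stated in the older language of magnitudes), using the m and n from the quote:

```latex
\sqrt{2} = \frac{m}{n},\quad \gcd(m,n) = 1
\;\Rightarrow\; m^2 = 2n^2
\;\Rightarrow\; 2 \mid m,\ \text{say } m = 2k
\;\Rightarrow\; 4k^2 = 2n^2
\;\Rightarrow\; n^2 = 2k^2
\;\Rightarrow\; 2 \mid n,
```

contradicting gcd(m, n) = 1. Note that no enumeration of the rationals appears at any step; the argument only manipulates the one hypothesized pair (m, n).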
That's the way you and I and everyone since Cantor interpret it, but it is most certainly not the way Euclid saw it, and not the way Euclid wrote it. Elsewhere in this thread I mentioned what we now call Euclid's proof of the infinitude of primes. We say there are infinitely many primes, but Euclid deliberately avoided saying any such thing - he just said that given any prime, there's a bigger one. To you and to me, that's the same thing as saying there are infinitely many primes, but to Euclid, there was a world of difference.
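To make the contrast concrete, here is the finitary statement Euclid proved (Elements IX.20), put in modern notation rather than his own language:

```latex
\text{Given primes } p_1, p_2, \dots, p_n,\ \text{let } N = p_1 p_2 \cdots p_n + 1.
\text{Some prime } q \text{ divides } N,\ \text{and } q \neq p_i \text{ for each } i,
\text{since } p_i \mid N \text{ together with } p_i \mid p_1 \cdots p_n \text{ would force } p_i \mid 1.
```

The conclusion is "here is a prime not in the given list", never "the set of primes is infinite" - exactly the distinction being drawn.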
> > You can (Norm says) work with functions by knowing what kind of thing
> > is an input, what kind of thing is an output, and what the rule is,
> > without ever thinking of the set of all inputs. If you can show that
> > any rule that takes an integer as an input and gives a computable
> > number as an output must omit some computable number, then you have
> > a proof that computable numbers are not countable, and you've done it
> > without using set-theoretic constructions.
> [...]
>
> "What kind of thing is an input", "what kind of thing is an output",
> "takes an integer as input" all have to be precisely defined. For
> example, what do you mean by "takes an integer as input"? Takes some
> specific integer? Or do you mean "zero is taken as input and if the
> n'th integer is taken as input (in some enumeration of the integers)
> then the (n+1)'th integer is also taken as input"? The latter clearly
> defines an infinite class of integers even if we don't explicitly name
> this class.
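As a gloss on the quoted sketch, the diagonal rule it appeals to can be written out; this is Cantor-style machinery supplied here for illustration, not something either poster spelled out:

```latex
\text{Given a rule } f \text{ assigning to each integer } n \geq 1 \text{ a computable real } f(n) \in [0,1),
\text{let } d \text{ have } n\text{th decimal digit } 5 \text{ if the } n\text{th digit of } f(n) \text{ is not } 5,\ \text{else } 6.
\text{Then } d \text{ differs from } f(n) \text{ in the } n\text{th digit, so } d \neq f(n) \text{ for every } n.
```

(Digits 5 and 6 are chosen to dodge the 0.999... = 1.000... ambiguity.) Whether d itself counts as computable depends on whether the rule f is given effectively - which is one place the precise definitions the quote demands genuinely matter.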
It defines an infinite set of integers to you and to me because we are conditioned to see it that way. Before Cantor, no one had any trouble seeing n^2 + 1 as something that took an integer input and gave an integer output, without the thought of an infinite set ever occurring to them. They had no problem working with and proving theorems about n^2 + 1 without explicitly considering the set of all values of n^2 + 1.
-- Gerry Myerson (email@example.com) (i -> u for email)