
§ 422 What is a definition of a number?
Posted:
Jan 22, 2014 4:21 AM


The definition of a number must allow the transmitter and the receiver in mathematical discourse, dialogue, and monologue to identify this one number uniquely.
If the question were: "How can *we* define a number?", then the answer could only be: "A number can be identified by a finite string of symbols taken from a countable alphabet." There are many ways to do so. Every definable number has infinitely many finite definitions. Most are known for the number zero or 0 or 0.000..., because in addition to the three finite definitions just given there are plenty of sequences with improper limit oo, each of which has a sequence of reciprocals with limit 0.
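The countability of the finite strings invoked here can be illustrated by a short sketch: a shortlex enumeration pairs every finite string with a natural number. The concrete alphabet below is an assumption chosen only for illustration.

```python
from itertools import count, product

# Illustrative alphabet (an assumption, not part of the original post).
ALPHABET = "01+-*/().e"

def finite_strings():
    """Yield every finite string over ALPHABET in shortlex order,
    i.e. shortest strings first; this is a bijection with the naturals."""
    yield ""  # the empty string gets index 0
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# The first few entries of the enumeration.
gen = finite_strings()
first = [next(gen) for _ in range(5)]
print(first)  # ['', '0', '1', '+', '-']
```

Every finite string appears at some finite index of this enumeration, which is exactly what it means for the set of finite strings to be countable.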
The set of finite strings, however, is countable. In order to get a set of uncountably many numbers, infinite strings of symbols are required (a countable alphabet admits only countably many finite strings, and unlistable alphabets cannot be learned or applied, i.e., uncountable alphabets are not alphabets). Such an infinite string must be capable of uniquely defining a number. It is not enough to distinguish the number from all of its finite approximations, as 1/9 = 0.111... can be distinguished from all of its finite approximations 0.1, 0.11, and so on. That would require that 1/9 is already "given", of course by a finite definition. Infinite definitions cannot "be given". A distinction by digits is impossible, because every digit belongs to the set of finite approximations. It is only the property of being "nonterminating" that distinguishes 1/9 uniquely from all of its approximations, but this property cannot be obtained by checking any digits, only from the finite definition. (An always negative result, with always infinitely many further digits remaining to check, can be accepted as a final exclusion only in an experimental science like physics. Mathematics requires final proofs!)
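The point about 1/9 and its truncations can be sketched as follows: any finite inspection of the digits of 1/9 = 0.111... agrees with some finite truncation, so finitely many digits never separate 1/9 from all of its approximations. The helper `decimal_digits` is a hypothetical name introduced here for illustration.

```python
from fractions import Fraction

def decimal_digits(q, n):
    """First n decimal digits after the point of a rational q in [0, 1)."""
    digits = []
    for _ in range(n):
        q *= 10
        d = int(q)       # next digit
        digits.append(d)
        q -= d           # remaining fractional part
    return digits

one_ninth = Fraction(1, 9)
n = 6
# The truncation 0.111111 (n ones) agrees with 1/9 on its first n digits.
truncation = Fraction(int("1" * n), 10 ** n)
assert decimal_digits(one_ninth, n) == decimal_digits(truncation, n)
print(decimal_digits(one_ninth, n))  # [1, 1, 1, 1, 1, 1]
```

However large n is chosen, the same agreement holds for the corresponding truncation, so the "nonterminating" property is never witnessed by a finite digit check.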
Although it is clear from the above argument that a number cannot be defined by an infinite sequence of digits, it can be proved in addition that the set of all infinite sequences of digits is countable. For this proof, consider the set of all infinite sequences of symbols or, without loss of generality, the Binary Tree, with the uncountable set of real numbers of the unit interval given as its paths. Every infinite bit sequence that uniquely defines a real number of the interval (some numbers even have two such sequences) is a path in the Binary Tree. All paths that can be defined by nodes alone are covered when each node is covered by at least one infinite path containing it. Since the number of nodes is countable, this is accomplished by countably many paths. More paths cannot be defined by nodes. But even covering every node by countably many paths would not result in using more than countably many paths.
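The covering construction described above can be sketched with one path per node: the nodes of the Binary Tree are the finite bit strings, enumerated level by level, and each node is covered by the eventually-constant path that passes through the node and then continues with 0s. The helper names `nodes` and `covering_path` are hypothetical, introduced only for this sketch.

```python
from itertools import count, islice

def nodes():
    """Yield every node of the Binary Tree as a finite bit string,
    level by level: '', '0', '1', '00', '01', '10', '11', ..."""
    yield ""  # the root
    for level in count(1):
        for k in range(2 ** level):
            yield format(k, "b").zfill(level)

def covering_path(node, n_bits):
    """First n_bits of the path that goes through `node`
    and then continues with 0s forever."""
    return (node + "0" * n_bits)[:n_bits]

# One covering path per node: a countable family of paths
# under which every node lies on at least one path.
for node in islice(nodes(), 7):
    print(node or "(root)", "->", covering_path(node, 8))
```

Since the enumeration assigns each node a finite index, the family of covering paths produced this way is countable, which is the counting step the argument relies on.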
This excludes acceptance of an uncountable set of numbers in any mathematical theory that is free of contradictions.
Regards, WM

