email@example.com (Matthew P. Wiener) says...
>>Not if a machine capable of PM (physical mathematics) and no more were
>>*more* complicated than one capable of beyond-PM reasoning. I think that
>>is very likely to be the case.
>
>Why is that?
As I said, because more general tools are often simpler than special-purpose tools.
>>Goedelian arguments *don't* apply, as far as I can see, in either the TM
>>or non-TM case.
>
>Uh, you asked me how *my* arguments escape from applying to a non-TM-mind
>model. That you would prefer some *other* proof is your business--it has
>absolutely nothing to do with whether my argument has the subtle flaw that
>you insinuated. Sheesh.
Well, you have two arguments. The first is that beyond-Peano-Arithmetic mathematical ability has no survival advantage, so it is puzzling that it would evolve. But that is just as puzzling for non-TM minds as it is for TM minds.
The other argument was the Goedelian argument. In my opinion, it serves no purpose---if the first argument is correct, then the second argument is redundant. If the first argument is incorrect, then the second argument is inapplicable. While it is true that Goedelian incompleteness doesn't apply to non-TM minds, it is also irrelevant to your conclusions about why TM minds should not evolve to be capable of ZFC.
The Goedelian argument seems to be this: (once again letting A be the starting theory, and C the limit theory)
1. C must be consistent, because inconsistencies are weeded out.
2. A can formalize this reasoning, so A can prove C consistent.
3. Therefore, if A knows an index for C, then A must (by Goedel's theorem)
   be at least as powerful as C, since it can prove C's consistency.
4. But why should A know an index for C? Because A already contains
   everything that is physically relevant, so C will be the same as A
   (because evolution won't add anything new that is not physically
   relevant), so C's index is the same as A's.
5. Therefore, A is as powerful as C.
Step 3 is only possible if step 4 works. But if step 4 works, then we can just skip steps 2 and 3. Conclusion 5 *still* follows.
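Schematically (my notation, and only a sketch of the dependency structure): write Con(C) for C's arithmetized consistency statement, and say A "has an index for C" when A can name a machine enumerating C's axioms. Then the redundancy looks like this:

```latex
% Steps 2--3: from a proof of C's consistency plus an index for C,
% conclude A is at least as strong as C (Goedel's second incompleteness
% theorem says C itself cannot prove Con(C), so A must exceed any weaker
% consistent theory that C extends).
A \vdash \mathrm{Con}(C) \;\wedge\; \text{(A has an index for C)}
  \;\Longrightarrow\; A \supseteq C

% Step 4 justifies the index only by asserting that C = A outright.
C = A

% But C = A yields the conclusion directly, with no detour through
% consistency proofs:
C = A \;\Longrightarrow\; A \supseteq C
```

So whatever force the argument has comes entirely from step 4; the Goedelian machinery in steps 2 and 3 does no work.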
The Goedelian argument seems a complete irrelevancy.
>>What is relevant is this: was there a plausible alternative
>>design for human brains (TM or otherwise) that would have resulted in
>>completely satisfactory survival abilities but would not have allowed the
>>contemplation of ZFC? [...]
>
>My argument is not about all possible mechanisms.
Sorry for not being clear. What I meant was this: if there is a design for a TM mind that is (a) adequate for survival, (b) simple enough to have evolved in the first place, and (c) incapable of doing set theory, then I expect that is what would have evolved, rather than a TM mind that is capable of doing set theory. However, I don't know of such a design. As I said in another post, if the TMs have to compete with each other, I don't think you can give a definite upper bound on how much reasoning ability confers a survival advantage.
We can say the same thing about non-TM minds. If there were a less-powerful mind that was easier to evolve and just as good at survival, then that would have likely evolved instead of us.
In either the TM or the non-TM case, whether evolution stopped at something simpler depends on whether anything simpler was available and adequate for survival.
>>But you didn't offer a plausible alternative TM that would be less
>>powerful than our brains. You suggested Peano Arithmetic, but that isn't
>>really plausible; Peano arithmetic is both too powerful for survival
>>purposes and not powerful enough.
>
>It's a first approximation. You can use some other theory if you like.
I think it likely that the intelligence that humans actually have is about what is needed for survival in competition with the elements and other humans. I think that ability to do ZFC is a side-effect, just like the ability to juggle. No survival advantage, but it follows from other abilities that do have a survival advantage.
>>I don't
>>know, for sure, but mathematics is not a direct concern. It is possible
>>(even quite likely) that mathematical ability would be a side-effect
>>of whatever program allows our TMs to survive, but the selected-for
>>TM would not be principally a mathematician.
>
>You mean you consider it "quite likely" that pattern recognition and
>other nasty AI problems are going to be solved using the transfinite?
No. I mean that the ability to do transfinite mathematics is like being able to juggle or play the piano---it has no survival advantage, but it is a side-effect of skills that *do* have survival advantage.