Date: Nov 24, 2012 3:38 PM
Author: Graham Cooper
Subject: Re: A HARD FLAW in Godel's Proof
On Nov 25, 1:13 am, George Greene <gree...@email.unc.edu> wrote:
> On Nov 18, 9:45 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
> > On Nov 19, 11:32 am, forbisga...@gmail.com wrote:
> > > On Sunday, November 18, 2012 1:08:52 PM UTC-8, Graham Cooper wrote:
> > > > On Nov 19, 1:14 am, forbisga...@gmail.com wrote:
> > > > > On Sunday, November 18, 2012 12:46:17 AM UTC-8, Graham Cooper wrote:
> > > > > > > On Nov 17, 10:10 pm, "INFINITY POWER" <infin...@limited.com> wrote:
> > > > > > > > STEP 1: DEFINE a 2 parameter predicate DERIVE(THEOREM, DERIVATION)
> > > > > > > > DERIVE(T,D) is TRUE IFF
> > > > > > OK so the T/F PREDICATE
> > > > > > DERIVES(T, <t1, t2, t3, t4, ..., T>)
> > > > > > is easy to program!
> > > > > > ...As long as D is a given argument, for now.
> > > > > And always,
> > > > D is a finite-length string; all the terms in D are from a fixed
> > > > alphabet, or at least a countable one.
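The point both sides concede here is that DERIVES(T, D) is trivially programmable once D is supplied. A minimal sketch in Python, for a hypothetical toy proof system (two toy axioms and modus ponens, not any system from the thread): a derivation D is a finite list of formulas, and DERIVES(T, D) holds iff every line is an axiom or follows from earlier lines, with T as the last line.

```python
# Hypothetical toy proof system (NOT any system from the thread): two
# axioms and modus ponens over tagged tuples. A derivation D is a finite
# list of formulas; derives(T, D) is True iff every line of D is an
# axiom or follows from earlier lines by modus ponens, and D ends in T.
AXIOMS = {"p", ("->", "p", "q")}   # toy axioms: p, and p -> q

def derives(theorem, derivation):
    proved = []
    for line in derivation:
        by_axiom = line in AXIOMS
        # modus ponens: some earlier premise together with (premise -> line)
        by_mp = any(("->", premise, line) in proved for premise in proved)
        if not (by_axiom or by_mp):
            return False
        proved.append(line)
    return bool(proved) and proved[-1] == theorem

assert derives("q", ["p", ("->", "p", "q"), "q"])   # valid 3-line derivation
assert derives("q", ["q"]) is False                  # q is not an axiom here
```

The checker runs in time linear in the number of lines times the number of earlier lines, which is the sense in which the predicate is "easy to program" so long as D is a given argument.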
> > > > The HYPOTHESIS which goes against "G=!proof(G)" being significant
> > > > is that:
> > > > for some suitably rich set of Axioms,
> > > > for every well formed formula F
> > > > there exists <t1,t2,t3,...,F>
> > > > or there exists <t1,t2,t3,...,~F>
> > > or neither, that is the Formula is well formed but neither
> > > it nor its negation is derivable from the axioms.
> > > > which would imply the existence of a halting theorem decider,
> > > > though its complexity might be exponential anyway.
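The hypothesis above, if it held, really would yield a decision procedure: enumerate all finite strings over the alphabet, shortest first, and check each as a derivation of F or of ~F. A hedged sketch of that dovetailing, using a deliberately degenerate stand-in for the derivation checker (every line must be an axiom), just to show the structure; the catch is that if the hypothesis fails for some F, the search never terminates.

```python
from itertools import count, product

ALPHABET = ["p", "q", "~p", "~q"]   # toy, finite token set (assumption)
AXIOMS = {"p"}                       # degenerate toy system: p is an axiom

def derives(theorem, derivation):
    # Degenerate stand-in checker: every line must be an axiom and the
    # last line must be the theorem. A real checker would also allow
    # inference rules; only mechanical checkability matters here.
    return (bool(derivation)
            and all(step in AXIOMS for step in derivation)
            and derivation[-1] == theorem)

def decide(f, neg_f):
    # Dovetail through all finite derivations, shortest first. Under the
    # completeness hypothesis one of the two searches must succeed; if
    # the hypothesis fails for f, this loops forever -- no decider.
    for length in count(1):
        for d in product(ALPHABET, repeat=length):
            if derives(f, list(d)):
                return True    # f is a theorem
            if derives(neg_f, list(d)):
                return False   # ~f is a theorem

assert decide("p", "~p") is True
```

This is exactly why "G = !proof(G)" matters: Godel's G is a well-formed formula for which neither search ever succeeds, so the enumeration above is a semi-decision procedure at best.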
> > > Most computer programs can easily be proven to halt or not
> > > given well defined input.
> > > But what you are trying to do can't be done. You might want
> > > to read
> > >http://en.wikipedia.org/wiki/Busy_beaver
> > I know about Busy Beaver.
> No, you don't.
> > I PROGRAMMED bb(n)
> No, you didn't, because if you had, you would be smart enough to do
> the following, which would create a program that can't logically exist.
> This is taken directly from the Wikipedia article:
> Suppose that bb(n) is a computable function and let BB denote a TM
> evaluating bb(n)
> (given a tape with n 1s as input, BB will produce bb(n) 1s on the tape
> and then halt).
> Let Increment denote a Turing machine that increments its argument in
> unary (given a tape with n 1s as input, Increment will find the first 0,
> replace it with a 1, and then halt).
> Let Double denote a Turing machine evaluating the function n + n
> (given a tape with n 1s as input, Double will produce 2n 1s on the
> tape and then halt).
> If all these TMs exist then THERE MUST NECESSARILY ALSO exist a TM
> composing them in the order Double | BB | Increment. Call this machine GC
> and let gc be the number of states of GC.
> Let Create_gc denote a Turing machine creating gc 1s on an initially
> blank tape. This machine may be constructed in a trivial manner to
> have gc states (state i writes 1, moves the head right and
> switches to state i + 1, except state gc, which halts). Let N
> denote the sum gc + gc.
> Let BadB denote the composition Create_gc | Double | BB | Increment
> (or BadB = Create_gc | GC ). Notice that this machine has N states
> (gc from create_gc and gc more from GC).
> Starting with an initially blank tape it first creates a sequence of
> gc 1s and then doubles it, producing a sequence of N 1s.
> Then BadB calls BB to produce bb(N) 1s on the tape, and at last
> Increment will add 1 more 1 to the end of this string and then halt.
> But the incrementing phase alone takes at least bb(N) steps (it must
> scan past bb(N) 1s to find the first 0), so the running time of BadB
> is strictly greater than bb(N), which contradicts the definition of
> the function bb(n).
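The arithmetic at the core of that contradiction is simple enough to sketch. Assuming (for contradiction) that bb were computable, BadB has N = 2*gc states yet writes bb(N) + 1 ones, exceeding the alleged maximum; the same counting applies to the step count. The stand-in function below is hypothetical, purely to exercise the inequality, since no real bb can be supplied.

```python
# The counting step of the BadB argument. `bb` here is a HYPOTHETICAL
# stand-in: the whole point is that no computable bb can exist.
def badb_output_len(bb, gc):
    n = gc + gc        # Create_gc writes gc 1s, Double doubles them: N = 2*gc
    return bb(n) + 1   # BB writes bb(N) 1s, Increment appends one more

# Exercise the inequality with an arbitrary monotone stand-in for bb:
fake_bb = lambda n: 2 ** n
gc = 5
# An N-state machine (N = 2*gc) would emit more than bb(N) 1s -- the
# contradiction, since bb(N) is by definition the maximum for N states.
assert badb_output_len(fake_bb, gc) > fake_bb(2 * gc)
```

Whatever total computable function is proposed as bb, this composition beats it at its own argument, which is why bb cannot be computable.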
> > Same Bullshit that George spouts all day here.
> None of the constructions above is bullshit. It is a fact about TMs
> that they can call other TMs without needing a whole lot
> of extra states to get that done. It has been known since 2001 that
> there is, if you restrict yourself to the 2-symbol alphabet 0,1 used
> here, a universal TM needing only 19 states. If you are willing to
> add a 3rd symbol (not for any of the TMs being simulated but for the
> simulator, stringing them together) then you can have a universal TM
> in only 10 states.
> > S: if stops(S) gosub S
> > G. GREENE: this proves stops() must be un-computable!
> Well, yes, of course it does. And I repeat, that IS NOT MY result.
> LOTS of people can see this. The fact that you are not one of them
> means you need to pick a different field.
> We don't just SEE it, by the way. WE DERIVE A CONTRADICTION from the
> assumption that Stops() IS computable.
> The mere fact that YOU WROTE THIS PROGRAM means that you just PROVED
> TO YOURSELF the following conditional:
> If Stops() is computable then S is computable.
> But if S is computable and Stops() REALLY MEANS "the argument program
> stops", then YOU have a contradiction.
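The two-line program being argued over ("S: if stops(S) gosub S") can be settled by plain case analysis, and that case analysis is itself programmable. A sketch, under the assumption that Stops() is supposed to be total and correct: whichever value Stops(S) returns, S's actual behavior contradicts it.

```python
# Case analysis for the program  S: if stops(S) gosub S.
# `answer` models what a claimed total Stops() returns for S.
def consistent(answer):
    if answer:
        # Stops says S halts -> the gosub fires and S calls itself
        # forever -> S does NOT halt.
        s_actually_halts = False
    else:
        # Stops says S loops -> the gosub is skipped and S returns
        # immediately -> S DOES halt.
        s_actually_halts = True
    return s_actually_halts == answer

# Neither possible answer is consistent, so no such Stops() exists.
assert consistent(True) is False
assert consistent(False) is False
```

This is the derived contradiction George describes: writing S at all proves the conditional "if Stops() is computable then S is computable", and both branches of the case analysis then fail.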
> So why are you denying a contradiction that you yourself have just
> derived?
What's your point?
You argue logically until you're shown wrong, then
quote scripture as backup.
You have never supported one single assertion in 10 years on your own.
Keep staring out that window, and give other people a chance to
contribute before you trash every single thread in sci.logic with
scripture. None of it holds up, PERIOD.