Date: Mar 15, 2013 11:25 PM
Author: Graham Cooper
Subject: Re: Number of variables needed to axiomatize first order predicate calculus?

On Mar 16, 10:55 am, me <me154...@gmail.com> wrote:

> On Mar 15, 1:46 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> > On Mar 13, 5:54 pm, George Greene <gree...@email.unc.edu> wrote:
> > > On Mar 8, 6:02 pm, Frederick Williams <freddywilli...@btinternet.com> wrote:

>

> > > > It is well known that at least three variables are needed to
> > > > axiomatize sentential calculus, by which I mean that at least three
> > > > letters are needed to write down the axioms and rules.
>
> > > Come ON! You can't take your own question seriously until and UNLESS
> > > YOU FIRST *specify* a *specification*LANGUAGE*!!

>

> > > The specification-language you used to specify the axioms of
> > > propositional calculus is one that NO ONE HAS *EVER* seen used IN ANY
> > > other context! It basically just copies the grammar of the underlying
> > > language AS ASSUMED and then INTRODUCES A CONCEPT from a COMPLETELY
> > > HIGHER ORDER, namely first-order, without ANY PRETEXT of explication!
> > > It universally quantifies over the things that it is trying to
> > > "specify", even though universal quantification is EXPLICITLY
> > > PROHIBITED in the object-language and even though this specification
> > > language itself does NOT explicitly contain any quantifier-symbols!
> > > It is EASY to "get small" if you just LEAVE OUT WHOLE CHUNKS of
> > > explication of what you are doing!

>

> > > The use of ","(comma) and "/"(slash) for clarifying that from some
> > > finite collection of input sentences, you can infer some other finite
> > > collection of output sentences, IS (thank god) used elsewhere, and I
> > > suppose that could be tolerated as being a paradigm-wide linguistic
> > > convention for the specification of
> > > logics-as-collections-of-rules-in-general ANYway. You could certainly
> > > begin this quest by saying that you would like to state all the
> > > inference rules of first-order logic using this comma/slash framework.

>

> > > But in general it is just going to be MUCH HARDER, because arguably
> > > THE MOST important inference rule in FOL is NOT an inference rule
> > > simpliciter, but RATHER a META-inference rule, or, literally, a
> > > SECOND-ORDER *FIRST-ORDER* inference-rule, namely UNIVERSAL
> > > GENERALIZATION, in which you infer a sentence in FOL (a universally
> > > quantified sentence) NOT from some prior collection of sentences but
> > > rather FROM A *PROOF*! Basically, from [ B |- phi(x) ], where B is a
> > > finite set of sentences with x not occurring in them, you infer
> > > Ax[phi(x)]. The mere DARING of this NESTING of "|-" -- not to
> > > mention, HOW MANY VARIABLES does it take TO EVEN STATE the
> > > non-occurrence clause?? -- the rule could basically be written
> > > [ B |- phi(x) ] |- Ax[phi(x)] -- oh, and HOW MANY VARIABLES does it
> > > take to explain what x *as*parameter*of*phi(.)* means, as OPPOSED to
> > > what variables mean in other contexts -- remember, in your
> > > propositional/0th-order treatment, variables ONLY meant PROPOSITIONS
> > > -- is certainly BEYOND the pale of ANY specification language that
> > > you might have been trying to use FOR ANY OTHER purpose!

> > > If the language in which you are doing this is not CONSTANT (or at
> > > least constant in paradigm or parametric profile) across the various
> > > languages&levels-of-logics that you are investigating, then "the
> > > number of variables it takes" is just not going to be meaningful as a
> > > metric. Oh, yeah, and then you also have to ask how many variables it
> > > takes to explain what kind of thing "phi(.)" is. Phi(.) is after all
> > > ITSELF a universally quantified VARIABLE.

>

> > Eventually you should arrive at an n-OL logic with syntax complex
> > enough to encapsulate n-OL.
>
> > But 2OL needn't be MORE complex SYNTAX, just fewer restrictions.

>

> > e.g. lowercase = terms, uppercase = variables
>
> > 1OL
> > add( M, 0, M ).
> > add( M, s(N), s(S) ) <- add( M, N, S ).
>
> > 2OL
> > P(s(0)) <- ( ( P(X) -> P(s(X)) ) & P(0) )
>
> > this is HIGH ORDER PROLOG, where predicates may be VARIABLES and match
> > to generic prolog clauses. e.g. P could match with notsmallest(X),
> > where notsmallest(X) -> notsmallest(X+1).

>

> > -------------------------------
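
A toy illustration of that quoted 2OL idea, as a Python sketch of my own
(the encoding and names are illustrative only, not actual BLoCKPROLOG or
Prolog code): the predicate P is itself a variable, so the induction rule
P(s(0)) <- ( (P(X) -> P(s(X))) & P(0) ) can be applied with any predicate
plugged in for P, such as notsmallest(X).

```python
# Predicates as first-class values, so "P" can range over them.
# This is my own finite stand-in, not a proof system.

def infer_at_s0(P, step_holds):
    """From P(0) and the step P(X) -> P(s(X)), conclude P(s(0))."""
    return P(0) and step_holds

def notsmallest(x):
    # notsmallest(x): some number is smaller than x (false only at 0).
    return x > 0

# The step notsmallest(X) -> notsmallest(X+1) holds; here it is checked
# over a finite range as a stand-in for an actual proof of the step.
step = all((not notsmallest(x)) or notsmallest(x + 1) for x in range(100))

print(infer_at_s0(notsmallest, step))        # False: the base notsmallest(0) fails
print(infer_at_s0(lambda x: x >= 0, step))   # True: base and step both hold
```
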

>

> > Now all the AXIOMS of the theory may be written in a SUBSET of the
> > full 1OL.
>
> > e.g. HORN CLAUSES using only
>
> > p1(...) & p2(...) & p3(...) -> pa(...)

>

> > -----------------------------
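
Horn clauses of that p1 & p2 & ... -> pa shape can be evaluated by plain
forward chaining. A minimal propositional sketch of my own (illustrative
only, no variables or unification yet):

```python
# Forward chaining over propositional Horn clauses.
# Each rule is (body, head): once every body atom is derived, derive head.

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

rules = [
    (["p1", "p2"], "p3"),   # p1 & p2 -> p3
    (["p3"], "pa"),         # p3 -> pa
]
print(forward_chain({"p1", "p2"}, rules))   # derives p3, then pa
```
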

>

> > What you want to look for is a SUBSET of the (1OL) language that can
> > perform some computational feat - e.g. be Turing Machine Equivalent.
>
> > Horn Clauses have this property, since
>
> > UNIFY( f(a,b,c,...) , f2(z,y,x,...) )
>
> > which finds a COMMON instance of 2 predicates,
> > is Turing Equivalent.
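
That UNIFY step is standard Robinson-style unification. A rough Python
sketch of my own (term encoding is illustrative: tuples for compound
terms, uppercase strings for variables, as in the convention above; the
occurs check is omitted for brevity):

```python
# Terms: ("functor", arg, ...) tuples; variables are capitalised strings;
# constants are lowercase strings. unify returns a substitution or None.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    subst = {} if subst is None else subst
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None   # clash: different functors or arities

# f(a, B) unifies with f(A, b) under {A: a, B: b}:
print(unify(("f", "a", "B"), ("f", "A", "b")))
```

Note that two terms with different head functors (f vs f2) simply fail to
unify; resolution over Horn clauses built from this operation is what is
Turing equivalent.
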

>

> > Then this computation method must be utilised to EXPAND on the
> > limited 2OL syntax.

>

> > -------------------------

>

> > The way I do this is using Naive Set Definition to specify which
> > variable in each predicate a SET is referencing.
>
> > e( A, evens ) :- even(A).
>
> > *this example is simple; even() only has one argument.
>
> > Then I can process ALL() quantifiers via Set References, rather than
> > explicitly stating which term to Quantify over in Quantified Logic
> > style.. A(x), p(...x...) >=...
> > which is rather messy, with partial formula insides hanging out like
> > that!

>

> > ---------------------------
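
A finite Python stand-in of my own for that set-reference scheme (the set
names and domain are illustrative, not BLoCKPROLOG code): each set name
maps to a membership predicate, mirroring e( A, evens ) :- even(A), and
ALL() then ranges over a set name instead of a formula position.

```python
# Set references: a set name maps to its membership predicate.

sets = {
    "evens": lambda a: a % 2 == 0,
    "nats":  lambda a: a >= 0,
}

def e(a, set_name):
    """e(A, S): A is a member of the set named S."""
    return sets[set_name](a)

def all_in(s1, s2, domain):
    """ALL(X): X e s1 -> X e s2, checked over a finite stand-in domain."""
    return all(e(x, s2) for x in domain if e(x, s1))

domain = range(100)
print(all_in("evens", "nats", domain))   # every even is a nat: True
print(all_in("nats", "evens", domain))   # not every nat is even: False
```
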

>

> > So yes, you can INCREASE the complexity of 2OL syntax, but it's going
> > to get convoluted with string processing and looking inside predicate
> > types; or you can use a subset of 1OL syntax to write the axioms,
> > which runs on a simpler processing system that can still do the
> > necessary computing work.

>

> > e.g. Not All Numbers Are Evens
>
> > ~A(X) e( X, evens )
>
> > I bypass referencing formula internals by using Set Referencing:
>
> > !(ss( nats, evens ))
>
> > Not all members of nats are members of evens.

>

> > -------------------------

>

> > which is in Horn Clause format, same as the 2OL axioms, and computable!
>
> > !(ss( S1, S2 )) <- intersect( S1, !(S2) ).
>
> > A 2OL Axiom.
>
> > (which will return the 1st ODD when invoked)
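
That axiom reads: S1 is not a subset of S2 if S1 intersects the complement
of S2, and invoking it searches for a witness. A Python sketch of my own
(sets as predicates over a finite stand-in domain, names illustrative):

```python
# !(ss( S1, S2 )) <- intersect( S1, !(S2) ): "S1 is not a subset of S2"
# holds when S1 meets the complement of S2; the query returns the witness.

def complement(s):
    return lambda x: not s(x)

def intersect_witness(s1, s2, domain):
    """First element of the domain lying in both s1 and s2, else None."""
    for x in domain:
        if s1(x) and s2(x):
            return x
    return None

nats  = lambda x: x >= 0
evens = lambda x: x % 2 == 0

# not ss(nats, evens): search for a nat in the complement of evens.
witness = intersect_witness(nats, complement(evens), range(100))
print(witness)   # 1 -- the first odd, so not all nats are evens
```
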

>

> > Herc
> > -- www.BLoCKPROLOG.com

>

> Oix

IOW, it's quite possible to embed a more complicated grammar in a
simpler grammar!

Then you can use a computer to evaluate the language:

LOGIC SYSTEM (wff - 1OL)
--------------------------------------
HORN CLAUSE (reverse inference rules - 2OL)
======================
3GL INTERPRETER (+ Database)
________________________
COMPUTER

Then you can merely TRANSLATE the wff to any Logic Language you like.

INPUT> ALL(N):EVENS N e NATS ?
  V
<PRE-PARSER>
  V
PROLOG> ss( EVENS, NATS )
PROLOG> e(@A, EVENS) , e(A, NATS)
PROLOG> YES
  V
<POST-PARSER>
  V
RESULT> ALL(N):EVENS N e NATS

Parsing into QUANTIFIER LINGO is not required for problem solving,
though.
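
A rough sketch of my own of the PRE-PARSER step above (a regex guess at
the translation, handling only this one ALL(...) pattern; the real
pre-parser would cover more forms):

```python
# Rewrite  ALL(N):EVENS N e NATS ?  into the set-reference query
# ss( EVENS, NATS ).  Illustrative only: one pattern, no error recovery.
import re

def pre_parse(wff):
    m = re.fullmatch(r"ALL\((\w+)\):(\w+) (\w+) e (\w+) \?", wff.strip())
    if m and m.group(1) == m.group(3):
        quantified_set, target_set = m.group(2), m.group(4)
        return f"ss( {quantified_set}, {target_set} )"
    return None   # not a recognised quantified wff

print(pre_parse("ALL(N):EVENS N e NATS ?"))   # ss( EVENS, NATS )
```
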

Herc
--
www.BLoCKPROLOG.com