
Math Forum » Discussions » sci.math.* » sci.math

Topic: Number of variables needed to axiomatize first order predicate calculus?
Replies: 6   Last Post: Mar 15, 2013 11:25 PM

Graham Cooper

Posts: 4,351
Registered: 5/20/10
Re: Number of variables needed to axiomatize first order predicate calculus?
Posted: Mar 15, 2013 4:46 AM

On Mar 13, 5:54 pm, George Greene <gree...@email.unc.edu> wrote:
> On Mar 8, 6:02 pm, Frederick Williams <freddywilli...@btinternet.com>
> wrote:

> > It is well known that at least three variables are needed to axiomatize
> > sentential calculus, by which I mean that at least three letters are
> > needed to write down the axioms and rules.

> Come ON!  You can't take your own question seriously until and UNLESS
> YOU FIRST *specify* a *specification*LANGUAGE*!!
> The specification language you used to specify the axioms of
> propositional calculus is one that NO ONE HAS *EVER* seen used IN ANY
> other context!  It basically just copies the grammar of the underlying
> language, namely first-order, without ANY PRETEXT of explication!  It
> universally quantifies over the things that it is trying to "specify",
> even though universal quantification is EXPLICITLY PROHIBITED in the
> object-language and even though this specification language itself
> does NOT explicitly contain any quantifier-symbols!  It is EASY to
> "get small" if you just LEAVE OUT WHOLE CHUNKS of explication of what
> you are doing!
> The use of ","(comma) and "/"(slash) for clarifying that from some
> finite collection
> of input sentences, you can infer some other finite collection of
> output sentences,
> IS (thank god) used elsewhere and I suppose that could be tolerated as
> being a
> paradigm-wide linguisitic convention for the specification of logics-
> as-collections-
> of-rules-in-general ANYway.  You could certainly begin this quest by
> saying that you
> would like to state all the inference rules of first-order logic using
> this comma/slash
> framework.
> But in general it is just going to be MUCH HARDER, because arguably
> THE important inference rule in FOL is NOT an inference rule
> simpliciter, but RATHER a META-inference rule, or, literally, a
> SECOND-ORDER-*FIRST-ORDER*-inference-rule, namely, UNIVERSAL
> GENERALIZATION, in which you infer a sentence in FOL (a universally
> quantified sentence) NOT from some prior collection of sentences but
> rather FROM A *PROOF*!
> Basically, from [ B |- phi(x) ], where B is a finite set of sentences
> with x not occurring in them, you infer Ax[phi(x)].  The mere DARING
> of this NESTING of "|-" -- not to mention, HOW MANY VARIABLES does it
> take TO EVEN STATE the non-occurrence clause?? -- the rule could
> basically be written [ B |- phi(x) ] |- Ax[phi(x)] -- oh, and HOW
> MANY VARIABLES does it take to explain what x *as*parameter*of*phi(.)*
> means, as OPPOSED to what variables mean in other contexts --
> remember, in your propositional/0th-order treatment, variables ONLY
> meant PROPOSITIONS -- is certainly BEYOND the pale of ANY
> specification language that you might have been trying to use FOR ANY
> OTHER purpose!
> If the language in which you are doing this is not CONSTANT (or at
> least constant in paradigm or parametric profile) across the various
> languages & levels-of-logics that you are investigating, then "the
> number of variables it takes" is just not going to be meaningful as a
> metric.  Oh, yeah, and then you also have to ask how many variables
> it takes to explain what kind of thing "phi(.)" is.  Phi(.) is after
> all ITSELF a universally quantified VARIABLE.

Eventually you should arrive at an n-OL logic with syntax complex
enough to encapsulate n-OL.

But 2OL needn't have MORE complex SYNTAX, just fewer restrictions.

e.g. lowercase = terms, uppercase = variables

add( M, 0, M ).
add( M, s(N), s(S) ) <- add( M, N, S ).

P(s(0)) <- ( P(X) -> P(s(X)) & P(0) )

this is HIGHER ORDER PROLOG, where predicates may be VARIABLES and
match generic Prolog clauses, e.g. P could match with notsmallest(X),
where notsmallest(X) -> notsmallest(X+1).
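The two add/3 clauses above can be sketched directly in Python (a hedged illustration, not the poster's system), reading the recursive clause as add(M, s(N), s(S)) <- add(M, N, S) and encoding Peano numerals as nested tuples:

```python
# Peano numerals: 0 is the atom "z", s(N) is the tuple ("s", N).

Z = "z"  # zero

def s(n):
    return ("s", n)

def add(m, n):
    # add( M, 0, M ).
    if n == Z:
        return m
    # add( M, s(N), s(S) ) <- add( M, N, S ).
    _tag, n_inner = n
    return s(add(m, n_inner))

def to_int(n):
    # Convert a Peano numeral back to a Python int, for readability.
    count = 0
    while n != Z:
        n = n[1]
        count += 1
    return count
```

So add(s(s(Z)), s(Z)) builds s(s(s(Z))), i.e. 2 + 1 = 3.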


Now all the AXIOMS of the theory may be written in a SUBSET of the
full 1OL.

e.g. HORN CLAUSES using only

p1(...) & p2(...) & p3(...) -> pa(...)
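Horn clauses of this p1 & p2 & ... -> pa shape can be executed by plain forward chaining. A minimal propositional sketch (atom and rule names are illustrative, not from the post):

```python
def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (body_atoms, head_atom).

    Repeatedly fire any rule whose whole body is already derived,
    until no new heads can be added (a fixed point).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived
```

For example, with facts {p1, p2} and rules p1 & p2 -> pa and pa -> pb, the fixed point is {p1, p2, pa, pb}.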


What you want to look for is a SUBSET of the (1OL) language that can
perform some computational feat - e.g. be Turing Machine Equivalent.

Horn Clauses have this property, since

UNIFY( f(a,b,c...) , f2(z,y,x,...) )

which finds a COMMON instance of the 2 predicates,
is Turing Equivalent.
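As a sketch of what UNIFY does, here is a self-contained syntactic unifier (illustrative Python; like standard Prolog it omits the occurs check). Variables are strings starting with an uppercase letter, compound terms are tuples (functor, arg1, ...), atoms are lowercase strings:

```python
def is_var(t):
    # Prolog convention: an uppercase-initial name is a variable.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Chase variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution (dict) unifying a and b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        # Same functor and arity: unify argument lists pairwise.
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying add(M, s(N)) with add(s(z), s(z)) binds M to s(z) and N to z; unifying terms with different functors fails (returns None).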

Then this computation method must be utilised to EXPAND on the limited
2OL syntax.


The way I do this is to use Naive Set Definition to specify which
variable in each predicate a SET is referencing.

e( A, evens ) :- even(A).

*This example is simple: even() only has one argument.

Then I can process ALL() quantifiers via Set References, rather than
explicitly stating which term to Quantify over in Quantified Logic
style, A(X) p(...X...), which is rather messy, with partial formula
insides hanging out.
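One hedged reading of the e(A, Set) set-reference idea in Python: each set name maps to the characteristic predicate it references (all names here are illustrative, not the poster's implementation):

```python
def even(a):
    return a % 2 == 0

# Set name -> characteristic predicate it references.
SETS = {
    "evens": even,
    "nats": lambda a: isinstance(a, int) and a >= 0,
}

def e(a, set_name):
    # e( A, evens ) :- even(A).
    return SETS[set_name](a)
```

So e(4, "evens") holds and e(3, "evens") does not, without the quantifier ever naming a position inside even().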


So yes, you can INCREASE the complexity of 2OL syntax, but it's going
to get convoluted with string processing and looking inside predicate
types; or you can use a subset of 1OL syntax to write the axioms,
which works on a simpler processing system that can still do the
computing work necessary.

e.g. Not All Numbers Are Evens

~A(X) e( X, evens )

I bypass referencing internals by using Set Referencing

!(ss( nats, evens ))

Not all members of nats are members of evens.


which is in Horn Clause format, the same as the 2OL axioms, and
computable!

!(ss( S1, S2 )) <- intersect( S1, !(S2) ).

A 2OL Axiom.

(which will return the 1st ODD when invoked)
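Under that reading, the not-subset axiom can be computed by searching for a counterexample: a member of S1 in the complement of S2. A hedged sketch over an enumeration of the naturals, with even() standing in for the evens set (names are illustrative):

```python
def even(a):
    return a % 2 == 0

def not_subset_witness(s1, s2, limit=100):
    """!(ss(S1, S2)) <- intersect(S1, !(S2)): return the first member
    of s1 (a characteristic predicate) that is NOT in s2, searching the
    naturals up to limit; None if no witness is found in range."""
    for x in range(limit):
        if s1(x) and not s2(x):
            return x
    return None
```

Invoked with S1 = nats and S2 = evens, the first witness found is 1, the 1st odd, matching the remark above.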




© The Math Forum 1994-2015. All Rights Reserved.