Date: Feb 18, 2013 12:45 AM
Author: fom
Subject: Re: distinguishability - in context, according to definitions

On 2/17/2013 8:40 PM, Barb Knox wrote:

> In article <x_-dnZNsYePggrzMnZ2dnUVZ_rydnZ2d@giganews.com>,
> fom <fomJUNK@nyms.net> wrote:
>
>> On 2/17/2013 9:10 AM, Shmuel (Seymour J.) Metz wrote:
>>> In <WvKdnStB4bTi9YDMnZ2dnUVZ_oWdnZ2d@giganews.com>, on 02/14/2013
>>> at 04:42 PM, fom <fomJUNK@nyms.net> said:
>>>
>>>> Here are descriptions of the received paradigm
>>>> for use of the sign of equality
>>>
>>> They don't clarify the sentence I asked about. How are two distinct
>>> sequences ontologically the same, even if both are eventually
>>> constant? They can certainly have the same limit, but that is a
>>> different matter.
>>>
>>

>> I am sorry. Your objection to the statement is
>> clear to me now. My statement badly expressed
>> what was intended.
>>
>> There is a distinction in identity statements
>> between
>>
>> trivial, or formal, identity
>>
>> x=x
>>
>> and informative identity
>>
>> x=y
>>
>> In the latter case, there is a distinction between
>> when it is stipulative and when it is licensing
>> epistemic warrant.
>>
>> The algebraic proof licenses the epistemic
>> warrant for the substitutivity of the
>> symbols.
>>
>> But, in the received paradigm for identity taken
>> from first-order predicate logic, all instances
>> of
>>
>> x=y
>>
>> are stipulative.
>>
>> This is not how I understand mathematics. It
>> is something I strive to reconcile with my
>> understanding of matters -- as meager as that
>> may be.
>>
>> Almost every reputable mathematics department is
>> giving courses in "mathematical logic," presumably
>> based on this received paradigm.
>>
>> There is nothing the matter with the deductive
>> calculus. So long as the semantic unit is a proof
>> with quantificationally closed assumptions and
>> quantificationally closed conclusions, one may
>> speak of faithful representation in the algebraic
>> sense.
>>
>> But, in the "logical" sense,
>>
>> 1.000... = 0.999...
>>
>> is merely a stipulation of syntactic equality
>> between distinct inscriptions that is prior
>> to any mathematical discourse.

>
> I don't see how. That equality is *proven* from axioms for real
> numbers, so how can it be a prior stipulation? The prior stipulations
> are that place-value notation represents an infinite series, and that
> "..." indicates all subsequent digits are the same.
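Before responding, your reading of place-value notation as an infinite series can be made concrete. This is a minimal sketch of my own (the function name is invented), using exact rationals so that nothing depends on floating-point rounding:

```python
from fractions import Fraction

def partial_sum(n):
    """Partial sum of the series 0.999... = sum over k >= 1 of 9/10^k."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The remainder after n terms is exactly 10^-n, so the limit is 1.
for n in (1, 5, 10):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
```

The remainder shrinks geometrically, which is the content of the claim that the equality is provable from the axioms for the real numbers.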

Just to be clear. I do not like the received paradigm.

At what point is representation assigned to symbols? That would
be model theory. When we learn about model theory, we never think
about all of the instances of informative identity that must be
satisfied in order for a set of well-formed formulas to be
satisfied.

In the history of the received paradigm, there are several lines
of thought that are interwoven. Whereas Penelope Maddy asks whether
one should believe the axioms, I ask whether one should believe the
pretense of logicism.

You can find "the standard account of identity" at

http://plato.stanford.edu/entries/identity-relative/#1

Personally, I do not like them referring to stipulative informative
identity as Leibniz' law. Leibniz did not express the law this
way. Nor was his logic extensional.

The book from which I learned logic has a simple representation
for deductions. Stroked formulas require discharge at the end
of a deduction. That is what I will use here.

In that book, the reflexive axiom is implemented by simply
writing down the identity. As an axiom of identity, it
will not require discharge. Thus, when one wants to say,

"Let a and b be such that not a=b."

one has in the derivation:

a=a
b=b
|-(a=b)

For the case in question, one has

0.999...=0.999...
1.000...=1.000...
|1.000...=0.999...

since the assertion is that the two symbols are equal.
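Purely as an illustration of the convention (the bookkeeping below is my own toy sketch, not the notation of the book in question), the discharge obligation attached to stroked formulas can be modeled as tracking open assumptions:

```python
class Derivation:
    """Toy bookkeeping for a derivation in which axioms of identity
    need no discharge but stroked (assumed) formulas do."""

    def __init__(self):
        self.lines = []  # pairs (formula, needs_discharge)

    def axiom(self, formula):
        # Reflexive identity axiom: written down freely, no discharge.
        self.lines.append((formula, False))

    def assume(self, formula):
        # A stroked formula: must be discharged before the end.
        self.lines.append((formula, True))

    def discharge(self, formula):
        self.lines = [(f, s and f != formula) for f, s in self.lines]

    def open_assumptions(self):
        return [f for f, s in self.lines if s]

d = Derivation()
d.axiom("a=a")
d.axiom("b=b")
d.assume("-(a=b)")
print(d.open_assumptions())  # the stroked formula is still open
```

The identities contribute nothing to the list of open assumptions; only the stroked formula does, which is why it alone carries the obligation of discharge.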

The first objection to this contrast is that the first example
is using variables while the latter is using constants.

Let me first object to calling the letters of the first
expression "variables". They are, for purposes of derivation,
conveying the definiteness of a denotation without actually
denoting definitely. Russell tried to account for this
difference in the first edition of "Principia Mathematica"
with "real" and "apparent" variables. Apparent variables are
variables within the scope of quantifiers. The literature is
returning to this distinction in its examination of "bare
quantification".

For my part, I prefer to call those variables "parameters."
In this case, they are parameters because of the use of the
reflexive axiom of identity within the context of a derivation.

Before discussing the constants, let me observe that you are
correct about what you are calling prior stipulation relative
to working within the axioms for the real number system. If
you read the link above, you will find that they speak of
"quotient models." These are term models in which instances
of stipulative informative identity generate equivalence
classes. The "individuals" of the quotient models are, in
fact, the classes of terms that correspond with the prior
stipulations.

This had been addressed in the original post:

> So, that we clarify the nature of the received
> paradigm in this matter, we address the issue
> of uniform semantic interpretation of inscriptions
> by invoking Carnap's notion of syntactic equality.
> Hence, what is expressed by
>
> 0.999...=1.000...
>
> is the identity of two equivalence classes
>
> [0.999...]=[1.000...]
>
> relative to which some quotient model must
> be formed to accommodate the fact that an
> ontological assertion is being made using
> an informative identity.
>
> Because the received paradigm does not
> address informative identity directly, it
> is possible that there are interpretations
> in which
>
> [0.999...]=[1.000...]
>
> is false.

Thanks to another thread in sci.math I have now learned how
to be careful about distinguishing stipulative informative
identity from epistemic informative identity.

The statement above should read something more along the
lines of "Because the received paradigm only addresses
informative identity as stipulative,..."

As for your assumption concerning the axioms for real numbers,
the original post does state the context from which those
axioms actually are derived:

> Consider the construction of the real
> numbers in relation to Dedekind cuts. To
> accept this construction is to accept the
> fact that in a hierarchy of definition, there
> is information about the defined system that
> is epistemically prior by virtue of the
> construction.

And, the first paragraph states clearly,

> It is an exercise in "basic" logic.

The real problem with my post is that none of us has really
paid attention to what it means for mathematics to be "logical"
in the sense inherited from the history of foundational research.
I include myself in that statement because of a prior time when
I had an idea but was reamed by people with training in
philosophy. I have taken a little time to educate myself.

The issue of "constants" becomes terribly complex.

Frege introduced the idea of semantic completion for
expressions. So,

x+3=5

has no truth value, but

2+3=5

does have a truth value.
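Frege's point can be restated this way: the unsaturated expression determines only a function from completions to truth values, while the completed expression is a truth-bearer outright. A small sketch of my own:

```python
# "x+3=5" is unsaturated: it is a function of x, not a truth-bearer.
open_formula = lambda x: x + 3 == 5

# Completing it with a denoting term yields a truth value.
assert open_formula(2) is True
assert open_formula(4) is False

# "2+3=5" is already saturated: it has a truth value by itself.
closed_formula = (2 + 3 == 5)
assert closed_formula is True
```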

His analysis of identity statements led to a theory of names
based on descriptions. But, if reports are correct, few people
paid attention to Frege until Russell brought attention to his
work. However, Russell was unsatisfied with the Fregean
analysis. He introduced his own description theory that
circumvented a problem called "presupposition failure". He
then used his ideas from the paper "On Denoting" to formulate
the "no classes" foundation of mathematics in the first
edition of "Principia Mathematica."

His foundational theories took naming to be an extra-logical
function. His notion of an individual is that of a term in
grammatical statements. He represented Leibniz' law
grammatically as in the link above. And, he took seriously
Wittgenstein's rejection of Leibniz' law. Tarski accommodated
all of this in his correspondence theory of truth for classes.

And, the result is that the modern implementations of set
theory do not necessarily reflect the intent of Zermelo.
Zermelo's 1908 paper clearly treats the sign of equality in
the sense of identity between denotations.

There had been no serious challenge to Russellian description
theory until Strawson in the middle of the 20th century.
Since then, there has been a great deal of work on descriptions,
but it has not influenced model theory for mathematics.

The paper to look at that actually does talk about this is
Abraham Robinson's "On Constrained Denotation." His discussion
of the model diagonal with respect to denotations returns to
the idea originally expressed by Frege:

"We still have to clarify the role of
identity. One correct definition of
the identity from the point of view
of first-order model theory is undoubtedly
to conceive of it as the set of diagonal
elements of MxM, i.e., as the set of
ordered pairs from M whose first and
second pairs coincide. The symbol "="
then denotes this relation and it is
correct that (M |= a=b) if "a" and "b"
are constants which denote the same
individual in M, or, more generally,
that (M |= s=t) if "s" and "t" are terms
which denote the same individual in
M. But, the identity may also be
*introduced* by this condition so that
(M |= s=t), *by definition* if "s"
and "t" denote the same individual
under the correspondence C, which is
again assumed implicitly, and this
seems more apposite in connection
with the discussion of sentences which
involve both descriptions and
identity."

So, it would seem that there is a received view, a discarded
view, and a contrarian view on how one should treat identity.
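Robinson's two options can be contrasted in a small sketch of my own (the domain, the terms, and the correspondence C below are invented for illustration): identity as a relation in the model, namely the diagonal of MxM, versus identity introduced by definition through the denotation correspondence itself.

```python
M = {0, 1, 2}

# Option 1: identity is a relation in the model -- the diagonal of MxM.
diagonal = {(a, b) for a in M for b in M if a == b}

# A denotation correspondence C from closed terms to individuals of M.
C = {"a": 1, "b": 1, "c": 2}

def satisfies_eq_via_relation(s, t):
    # M |= s=t because the pair of denotations lies on the diagonal.
    return (C[s], C[t]) in diagonal

def satisfies_eq_by_definition(s, t):
    # M |= s=t *by definition* when s and t denote the same individual.
    return C[s] == C[t]

# On denoting terms the two treatments agree: M |= a=b, not M |= a=c.
assert satisfies_eq_via_relation("a", "b")
assert satisfies_eq_by_definition("a", "b")
assert not satisfies_eq_via_relation("a", "c")
assert not satisfies_eq_by_definition("a", "c")
```

Where the two treatments come apart, on Robinson's account, is in sentences involving descriptions that may fail to denote; there the second option is the more apposite one.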

Here is one thing to consider. If one is interested in proofs
that begin with quantified statements and that end with
quantified statements, then the presumption is that every
constant used in the proof is definable relative to a
description. It is not clear to me that a model theory for
sets (grounding a model theory for mathematics) is at all
served by discarding the Fregean views. The problem is that
mathematical objects are abstract objects. So, one has to
consider a semantics for descriptively-defined names.

>
> Clearly the strings "1.000..." and "0.999..." (or "1.(0)" and "0.(9)")
> are themselves not equal, but when taken to represent real numbers they
> are:
> Real("1.000...") = Real("0.999...")
>
> Just as
> DecimalInteger("4") = RomanInteger("IV") = RomanInteger("IIII")
>
> Regarding ontology, there need not be any Platonic integers that are the
> range of these representation functions; the equivalences, independent
> of any range, suffice for all mathematical purposes. I don't know if
> there is an established philosophy of mathematics that takes this view;
> informally I think of it as "representationalism".
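Your representation functions can be sketched directly. The implementations below are my own guesses at what such functions might look like (the names follow your post); Real handles only the eventually-constant "..." form, under the convention that the last written digit repeats forever:

```python
from fractions import Fraction

def Real(s):
    """Exact value of an eventually-constant decimal such as
    '1.000...' or '0.999...': after '...', the last digit repeats."""
    body = s.removesuffix("...")
    int_part, frac_part = body.split(".")
    value = Fraction(int(int_part))
    for k, digit in enumerate(frac_part, start=1):
        value += Fraction(int(digit), 10**k)
    # Repeating tail: digit d from position len(frac_part)+1 onward
    # contributes (d/9) * 10^-len(frac_part) (a geometric series).
    d = int(frac_part[-1])
    value += Fraction(d, 9) * Fraction(1, 10**len(frac_part))
    return value

def RomanInteger(s):
    # Standard subtractive reading of Roman numerals.
    vals = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(s):
        v = vals[ch]
        total += -v if i + 1 < len(s) and vals[s[i + 1]] > v else v
    return total

assert Real("1.000...") == Real("0.999...") == 1
assert RomanInteger("IV") == RomanInteger("IIII") == 4
```

The distinct inscriptions are mapped to one value, which is exactly the equivalence you describe: no Platonic range is needed for the identities to do their mathematical work.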

In 1971, Tarski and Monk introduced an axiom of informative
identity in their research on cylindric algebras. In another
thread I presented the formulas that follow. It is unclear
to me that mathematics should concern itself over the meaning
of x=x. With the axiom of informative identity, there seems
no need.

1) Ax(x=x)
2) AxAy(x=y <-> Ez(x=z /\ z=y))
3) ExAy(-(yex <-> y=x))
4) Ax(x=V() <-> Ay(-(yex <-> y=x)))
5) AxAy(Az(zex /\ zey) -> x=y)

It is just that the other axioms for a set theory need to
take the universal class into account.
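Those axioms, taken exactly as written, can be brute-force checked over a small intended model. The model below is my own invention: a three-element domain with a universal-class-like element V that contains every element other than itself, with "e" read off a finite membership relation E:

```python
from itertools import product

# Tiny intended model: V contains everything but itself.
D = [0, 1, "V"]
E = {(0, 1), (0, "V"), (1, "V")}   # (a, b) in E means "a e b"
V = "V"

def mem(a, b):
    return (a, b) in E

# 1) Ax(x=x)
ax1 = all(x == x for x in D)
# 2) AxAy(x=y <-> Ez(x=z /\ z=y))  -- the axiom of informative identity
ax2 = all((x == y) == any(x == z and z == y for z in D)
          for x, y in product(D, D))
# 3) ExAy(-(yex <-> y=x))
ax3 = any(all(not (mem(y, x) == (y == x)) for y in D) for x in D)
# 4) Ax(x=V() <-> Ay(-(yex <-> y=x)))
ax4 = all((x == V) == all(not (mem(y, x) == (y == x)) for y in D)
          for x in D)
# 5) AxAy(Az(zex /\ zey) -> x=y)
ax5 = all((x == y) or not all(mem(z, x) and mem(z, y) for z in D)
          for x, y in product(D, D))

print(ax1, ax2, ax3, ax4, ax5)
```

Axioms 3 and 4 force the witness V into the domain, which is the point of the closing remark: the remaining axioms of a set theory have to accommodate the universal class.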