Date: Jun 1, 2013 6:06 AM
Author: Daniel Lichtblau
Subject: Mathematica numerics and... Re: Applying Mathematica to practical problems

Others have commented on most issues raised in this subthread. I wanted to
touch on just a few details.

On May 31, 2:15 am, Richard Fateman <fate...@cs.berkeley.edu> wrote:
> On 5/30/2013 3:09 AM, John Doty wrote:
>
> > Changing the topic here.
>
> > On Tuesday, May 28, 2013 1:49:00 AM UTC-6, Richard Fateman wrote:
>
> >> Learning Mathematica (only) exposes a student to a singularly
> >> erroneous model of computation,

I assume you (as ever) refer to significance arithmetic. If so,
while it is many things, "erroneous" is not one of them. It
operates as designed. You have expressed a couple of reasons
why you find that design not to your liking. The two that most
come to mind: bad behavior in iterations that "should" converge,
and fuzzy equality that you find to be unintuitive (mostly this
arises at low precision).

> > A personal, subjective judgement. However, I would agree that
> > exposing the student to *any* single model of computation, to the
> > exclusion of others, is destructive.

>

> Still, there are ones that are well-recognized as standard, a common
> basis for software libraries, shared development environments, etc.
> Others have been found lacking by refereed published articles
> and have failed to gain adherents outside the originators. (Distrust
> of significance arithmetic a la Mathematica is not a personal
> subjective opinion only.)

No, but nor is it one that appears to be widely shared. As best I can
tell, most people in the field either do not write about it, or else
make the observation, correctly, that it is simply a first-order
approximation to interval arithmetic.
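To make the "first-order approximation" point concrete, here is a minimal
sketch in plain Python (not Mathematica, and not the actual internal
implementation): significance arithmetic propagates an input uncertainty
delta through f linearly, as |f'(x)| * delta, while interval arithmetic
tracks the image of the whole interval [x - delta, x + delta]. For small
delta the two agree closely.

```python
import math

def significance_error(f_prime, x, delta):
    """First-order propagated error: |f'(x)| * delta."""
    return abs(f_prime(x)) * delta

def interval_half_width(f, x, delta, samples=10001):
    """Half-width of the image of [x - delta, x + delta] under f,
    estimated by dense sampling (endpoints included)."""
    lo, hi = x - delta, x + delta
    vals = [f(lo + i * (hi - lo) / (samples - 1)) for i in range(samples)]
    return (max(vals) - min(vals)) / 2

# Example values (chosen for illustration only).
x, delta = 1.0, 1e-6
first_order = significance_error(math.exp, x, delta)   # |exp'(1)| * delta
half_width = interval_half_width(math.exp, x, delta)   # exp(1)*sinh(delta)

# At small delta the linear estimate and the interval width agree
# to many digits.
print(first_order, half_width)
```

The agreement breaks down only when delta is large relative to the scale
on which f is approximately linear, which is the low-precision regime
discussed below.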

As such, it has most of the qualities of interval arithmetic. Among
these is the issue that results with "large" intervals are sometimes
not very useful. Knowing that one ended up with a large interval,
however, can be useful: it tells one that either the problem is not well
conditioned, or the method of solving it was not (and specifically, it
may have been a bad idea to use intervals to assess error bounds).
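A standard illustration of how a wide result interval flags an
ill-conditioned step, sketched in plain Python with made-up values:
subtracting two nearly equal quantities turns a small absolute input
uncertainty into a large relative uncertainty in the result.

```python
# Hypothetical inputs, each known only to within an absolute error delta.
a, b = 1.0000001, 1.0000000
delta = 1e-6

diff = a - b                   # about 1e-7
worst_case_error = 2 * delta   # absolute errors add under subtraction

# The relative error of the result is enormous: the "interval" around
# diff is roughly 20 times wider than diff itself, i.e. the result has
# no reliable digits. The wide interval is the diagnostic.
relative_error = worst_case_error / abs(diff)
print(relative_error)
```

This is exactly the situation where a large interval (or a precision
estimate that has dropped to zero digits) is telling the user something
true about the computation, even if the bound itself is too wide to use.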

Significance arithmetic brings a few advantages. One is that computations
are generally faster than their interval counterparts. Another is that it
is "significantly" easier to extend to functions for which interval
methods are out of reach (reason: one can compute derivatives but cannot
always find extrema for every function on every segment). A third is that
it is much easier to extend to complex values.

A drawback, relative to interval arithmetic, is that as a first-order
approximation, in terms of error propagation, significance arithmetic
breaks down at low precision, where higher-order terms can cause the
estimates to be off. I will add that, at that point, intervals are going
to give results that are also not terribly useful.
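A small numeric check of the low-precision breakdown, again in plain
Python with illustrative values: when the input uncertainty delta is no
longer small, the linear estimate |f'(x)| * delta misses the
higher-order terms, and undershoots the true spread of f over the
interval.

```python
import math

# "Low precision": the uncertainty is comparable to the value itself.
x, delta = 0.0, 1.0

# First-order (significance-style) estimate: |exp'(0)| * 1 = 1.0.
first_order = abs(math.exp(x)) * delta

# True half-width of the image of [x - delta, x + delta] under exp
# (exp is monotone, so the endpoints give the extremes): sinh(1).
true_half_width = (math.exp(x + delta) - math.exp(x - delta)) / 2

# The linear estimate undershoots by roughly 18 percent here; the
# discrepancy grows with delta as higher-order terms dominate.
print(first_order, true_half_width)
```

At the same time, the true interval [exp(-1), exp(1)] spans nearly an
order of magnitude, so neither estimate is of much practical use, which
is the point made above.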

I'm not sure which refereed articles are referred to above. I suspect
they do not disagree in substantial ways with what I wrote, though. That
is to say, error estimates may be too conservative, and at low precision
results may not be useful.

Fixed precision has its own advantages and disadvantages in terms of speed,
error estimation, and the like. We certainly use it in places, even if
significance arithmetic is the default behavior of bignum arithmetic in
top-level Mathematica. For implementation purposes we use both modes.

> > Mathematica applied to real problems is pretty good here.
>
> Maybe, but is "pretty good" the goal, and should the occasional
> identified errors be ignored?

We do try to fix many bugs that are brought to our attention. We have a
better track record in some areas than others. I have no reason to
believe that numerics issues have received short shrift, though.

Daniel Lichtblau
Wolfram Research