
Interval vs default, was Re: Mathematica numerics and... Re: Applying
Posted:
Jun 2, 2013 12:24 AM


On 6/1/2013 3:06 AM, Daniel Lichtblau wrote:
> Others have commented on most issues raised in this subthread. I wanted
> to touch on just a few details.
>
> On May 31, 2:15 am, Richard Fateman <fate...@cs.berkeley.edu> wrote:
>> On 5/30/2013 3:09 AM, John Doty wrote:
>>
>>> Changing the topic here.
>>
>>> On Tuesday, May 28, 2013 1:49:00 AM UTC-6, Richard Fateman wrote:
>>
>>>> Learning Mathematica (only) exposes a student to a singularly
>>>> erroneous model of computation,
>
> I assume you (as ever) refer to significance arithmetic.
Yes.
> If so, while it is many things, "erroneous" is not one of them. It
> operates as designed. You have expressed a couple of reasons why you
> find that design not to your liking. The two that most come to mind:
> bad behavior in iterations that "should" converge, and fuzzy equality
> that you find to be unintuitive (mostly this arises at low precision).
Yes. I consider these to be errors in the design. I am not complaining that the implementation (erroneously) diverges from the design.
>>> A personal, subjective judgement. However, I would agree that
>>> exposing the student to *any* single model of computation, to the
>>> exclusion of others, is destructive.
>>
>> Still, there are ones that are well-recognized as standard, a common
>> basis for software libraries, shared development environments, etc.
>> Others have been found lacking by refereed published articles and
>> have failed to gain adherents outside the originators. (Distrust of
>> significance arithmetic a la Mathematica is not a personal subjective
>> opinion only.)
>
> No, but nor is it one that appears to be widely shared. As best I can
> tell, most people in the field either do not write about it, or else
> make the observation, correctly, that it is simply a first-order
> approximation to interval arithmetic.
There is a Wikipedia entry on the topic that (at least as of this time
and date) is pretty good.

> As such, it has most of the qualities of interval arithmetic. Among
> these is the issue that results with "large" intervals are sometimes
> not very useful. Knowing that one ended up with a large interval,
> however, can be useful: it tells one that either the problem is not
> well conditioned, or the method of solving it was not (and
> specifically, it may have been a bad idea to use intervals to assess
> error bounds).
The important quality of interval arithmetic that is not preserved is
validity: interval arithmetic (done correctly) guarantees that the
correct answer is enclosed in the result.

> Significance arithmetic brings a few advantages. One is that
> computations are generally faster than their interval counterparts.
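For concreteness, here is a minimal Python sketch of the enclosure
property. This is illustrative only: it is not Mathematica's Interval,
and it omits the outward (directed) rounding of endpoints that a real
implementation needs for a rigorous guarantee.

```python
# Toy interval arithmetic: every operation returns an interval that
# contains the exact mathematical result (assuming exact endpoint
# arithmetic; a production version would round endpoints outward).

class Iv:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Iv(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # note the endpoint swap: [a,b] - [c,d] = [a-d, b-c]
        return Iv(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Iv(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Iv(0.9, 1.1)
# x - x is NOT [0, 0]: the "dependency problem" widens the result,
# but the true value 0 is still enclosed -- validity is preserved.
print(x - x)   # roughly [-0.2, 0.2]
```

Note that the result of `x - x` is wide but valid; this is exactly the
trade-off discussed above: enclosures can be pessimistic, but they
never exclude the true answer.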
I wonder about this. There are libraries that do interval arithmetic,
even interval arithmetic in arbitrary precision. It might be possible
to benchmark interval arithmetic against Mathematica arithmetic in some
neutral setting. (I think it would be difficult to compare the two
fairly in Mathematica: an interval's endpoints might themselves be
computed with significance arithmetic, so intervals would naturally be
slower. But perhaps there is another way. See below, though.)
It seems to me that the default arithmetic is fairly complicated. Each extended float number F has some other number E associated with it (guess: it is a machine float representing something like log of relative error), and calculations require mucking about with F and E both, e.g. some calculations involving condition numbers relating to how fast a function changes as the argument changes.
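That guess about the (F, E) pair can be made concrete. The sketch below
is my model of the scheme just described, not Wolfram's actual
implementation: each number carries a value and an absolute error
estimate, propagated to first order. For z = 2*z - z the error estimate
triples each iteration (2E from the doubling plus E from the subtrahend)
even though the value never changes.

```python
# Toy model of significance-style arithmetic (an assumption about the
# design, not Mathematica's internals): value plus first-order error
# estimate.

class Sig:
    def __init__(self, value, err):
        self.value, self.err = value, err

    def __add__(self, other):
        return Sig(self.value + other.value, self.err + other.err)

    def __sub__(self, other):
        # first-order rule: absolute errors add under subtraction
        return Sig(self.value - other.value, self.err + other.err)

    def __rmul__(self, k):
        # exact scalar times uncertain number
        return Sig(k * self.value, abs(k) * self.err)

z = Sig(1.11111111111111111, 1e-17)   # ~17 significant digits
for _ in range(60):
    z = 2*z - z        # err -> 3*err each time; value unchanged
print(z.value, z.err)  # err ~ 1e-17 * 3**60: no digits left
```

At 3600 iterations the estimate would be about 1e-17 * 3^3600, i.e. on
the order of 10^1700, which is consistent with the 0.*10^1700 result in
the appendix below.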
> Another is that it is "significantly" easier to extend to functions
> for which interval methods are out of reach (reason: one can compute
> derivatives but cannot always find extrema for every function on
> every segment). A third is that it is much easier to extend to
> complex values.
I don't understand the derivative argument: intervals can be computed
essentially by composition of interval functions. This may be overly
pessimistic compared to actually finding extrema, but it is, so far as
I know, fairly easy. The extension of intervals to complex values is
another issue, but I don't know that significance arithmetic really
does better. My understanding is that complex intervals can be done
either by enclosures that are 2D regions in the complex plane (e.g.
circles, rectangles, or some other shapes) or by a design in which the
real and imaginary parts are expressed as separate intervals. I expect
that the Mathematica significance version is more like the latter, in
which case it shouldn't make much difference in ease of implementation.
Is there a place where this distinction is explained? Besides which, if
Mathematica allows Interval[{}] data, don't you have to write the
program anyway?
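To illustrate the "separate real and imaginary intervals" design
mentioned above (again a sketch of the idea, not Mathematica's
internals): a complex interval becomes a rectangle in the complex
plane, and multiplication follows (a+bi)(c+di) with each part computed
in interval arithmetic.

```python
# Rectangle representation of a complex interval: independent real and
# imaginary intervals.

class Iv:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(s, o):
        return Iv(s.lo + o.lo, s.hi + o.hi)
    def __sub__(s, o):
        return Iv(s.lo - o.hi, s.hi - o.lo)
    def __mul__(s, o):
        ps = [s.lo * o.lo, s.lo * o.hi, s.hi * o.lo, s.hi * o.hi]
        return Iv(min(ps), max(ps))

class CIv:
    """Complex 'rectangle': a real interval and an imaginary interval."""
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __mul__(s, o):
        # (a+bi)(c+di) = (ac - bd) + (ad + bc) i, each part an interval
        return CIv(s.re * o.re - s.im * o.im,
                   s.re * o.im + s.im * o.re)

i_unit = CIv(Iv(0.0, 0.0), Iv(1.0, 1.0))   # the exact point i
z = CIv(Iv(0.9, 1.1), Iv(-0.1, 0.1))       # a box around 1
w = z * i_unit                             # rotate the box by 90 degrees
print((w.re.lo, w.re.hi), (w.im.lo, w.im.hi))
```

A known weakness of rectangles (and a reason some designs prefer
circular enclosures) is that general rotations do not map boxes to
boxes, so repeated multiplication tends to inflate the enclosure.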
> A drawback, relative to interval arithmetic, is that as a first-order
> approximation, in terms of error propagation, significance arithmetic
> breaks down at low precision, where higher-order terms can cause the
> estimates to be off. I will add that, at that point, intervals are
> going to give results that are also not terribly useful.
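The first-order breakdown is easy to see numerically (illustrative
arithmetic only): for f(x) = x^2 the propagated first-order error is
|f'(x)| * dx = 2|x| dx, which drops the (dx)^2 term. At high precision
that term is negligible; at low precision it is not.

```python
# First-order error propagation vs the true deviation for squaring.

def first_order(x, dx):
    # linearized error estimate for f(x) = x**2
    return 2 * abs(x) * dx

x = 1.0
for dx in (1e-10, 0.5):          # high precision vs very low precision
    exact_upper = (x + dx)**2 - x**2   # true upper deviation, 2x*dx + dx**2
    print(dx, first_order(x, dx), exact_upper)
# at dx = 0.5 the first-order estimate 1.0 misses the true upper
# deviation 1.25 by the dropped dx**2 = 0.25 term
```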
An advocate of interval arithmetic (and at least for the moment I will
take this position :) ) would say that interval arithmetic guarantees a
valid answer: the answer is inside the interval. Sometimes the interval
may be very wide, even infinitely wide, but it contains the answer. Not
so for significance arithmetic. It may be appropriately modest for the
documentation for N[] to say that N[expr,n] *attempts* to give a
result, but that is not really what interval arithmetic does: it
guarantees.
> I'm not sure what are the refereed articles referred to above. I
> suspect they do not disagree in substantial ways with what I wrote,
> though. That is to say, error estimates may be too conservative, and
> at low precision results may not be useful.
There are a few references in the Wikipedia article. There are a few
rants that you can find via Google. I think the interaction of
significance arithmetic and testing for equality / comparison in
Mathematica is potentially hazardous, in large part because it can do
bad things quite invisibly to the user. I do not know of a simple,
consistent way of fixing this. While I haven't thought about this in a
while, the last time I did, I found it necessary to keep backing out of
pieces of the Mathematica arithmetic design quite far (like a
retrenchment of significance arithmetic!).

> Fixed precision has its own advantages and disadvantages in terms of
> speed, error estimation, and the like. We certainly use it in places,
> even if significance arithmetic is the default behavior of bignum
> arithmetic in top-level Mathematica. For implementation purposes we
> use both modes.
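The equality hazard I mean above can be shown with a toy model of
"fuzzy" comparison (my model of the idea, not Mathematica's exact
rule): treat two uncertain numbers as equal when their uncertainty
ranges overlap. Such an equality is not transitive, which is one way
comparisons can misbehave invisibly at low precision.

```python
# Fuzzy equality: a == b when |a - b| is within the combined
# uncertainty. This relation is not transitive.

def fuzzy_eq(a, b, err_a, err_b):
    return abs(a - b) <= err_a + err_b

a, b, c = 1.0, 1.5, 2.0
err = 0.3
print(fuzzy_eq(a, b, err, err))  # True:  |1.0 - 1.5| <= 0.6
print(fuzzy_eq(b, c, err, err))  # True:  |1.5 - 2.0| <= 0.6
print(fuzzy_eq(a, c, err, err))  # False: |1.0 - 2.0| >  0.6
# so a == b and b == c, yet a != c
```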
Yes.

>>> Mathematica applied to real problems is pretty good here.
>>
>> Maybe, but is "pretty good" the goal, and should the occasional
>> identified errors be ignored?
>
> We do try to fix many bugs that are brought to our attention. We have
> a better track record in some areas than others. I have no reason to
> believe that numerics issues have received short shrift, though.
I appreciate that you try to fix errors. I appreciate that you engage
in such discussions!

RJF
Appendix: timing.
Timing[(z = 1.11111111111111111; Do[z = 2*z - z, {3600}]; z)] takes
0.0156 seconds. It gives the answer 0.*10^1700,
Timing[(z = 1.111111111111; Do[z = 2*z - z, {3600}]; z)] takes 0.0156
seconds. It gives the right answer, 1.11111..
Timing[(z = Interval[1]; Do[z = 2*z - z, {3600}]; z)] takes 0.0625
seconds. It gives the correct answer, Interval[{1,1}].
Slower by a factor of 4. If intervals were economically represented and efficiently manipulated, I expect that one could get a factor of 4 back.
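As one rough neutral setting, the same iteration can be run in Python
against a bare-bones interval class. This is a hypothetical re-run, not
the Mathematica timings quoted above; the absolute numbers will differ,
but it shows the interval loop costing only a constant factor while
both representations keep the right value.

```python
# Plain floats vs a minimal interval type on z = 2*z - z, 3600 times.

import time

class Iv:
    __slots__ = ("lo", "hi")
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(s, o):
        return Iv(s.lo - o.hi, s.hi - o.lo)
    def scale(s, k):
        return Iv(k * s.lo, k * s.hi)   # assumes k > 0

def bench(step, z, n=3600):
    t0 = time.perf_counter()
    for _ in range(n):
        z = step(z)
    return time.perf_counter() - t0, z

t_float, zf = bench(lambda z: 2*z - z, 1.111111111111)
t_iv, zi = bench(lambda z: z.scale(2) - z, Iv(1.0, 1.0))
# both give the right value; the interval loop pays a constant factor
print(t_iv / t_float, zf, (zi.lo, zi.hi))
```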

