On 2/6/2013 2:52 AM, David Bailey wrote:
> On 05/02/2013 08:06, Richard Fateman wrote:
>
>> Here are some thoughts, though of course "proving" the correctness of
>> anything, whether a program or a proof of a theorem, continues to be
>> an area for research.
>> I'd feel better about using software which had these characteristics:
>>
>> 1. A formal rigorous definition of syntax and semantics.
>> 2. More than one implementation, perhaps one that is open source.
>> 3. Widely available and widely used by top practitioners of
>>    (for instance) scientific numerical computation.
>> 4. Perhaps standardized by a committee responsive to the
>>    rigors of ANSI or IEEE.
>> 5. Excellent error checking, debugging, profiling tools.
>>
>> I suppose I could think of more.
>>
>> How many of these are lacking in Mathematica?
>
> Well I guess we would all love perfect software, and perfect hardware
> with infinite performance, but I am really quite curious as to what
> practical advice you would give to someone with the sort of
> symbolic/numerical problem that Mathematica is good at - what would you
> tell them to use?
There are a number of competitors to Mathematica, some of them free. This corresponds, more or less, to item 2: there are other implementations of symbolic math, though not of Mathematica per se. Unfortunately, some "bugs" are system-independent -- that is, ALL the authors/programmers fell into the same hole -- so it is quite possible for several systems to agree on the same wrong answer. There are also systems for primarily numerical computation that are far more widespread than Mathematica, corresponding to item 3, at least for numerics.
I personally find the debugging facilities in Mathematica quite difficult to use. I speculate that this is because the "working model" I have in my mind (and I suspect others share it) is that I am writing programs and composing them in various ways. In reality, Mathematica's internal evaluation strategy is to apply rules that transform expressions. So while I'm looking for a clue as to which "program" has the bug, the debugging info shows transformations of expressions. Being jerked back into the reality of Mathematica's actual evaluation model generally prompts me to shy away from using Trace. I don't know if this is inevitable, but Trace in particular, and debugging in general, seem to me to be quite weak in Mathematica. To the extent that some people might like Mathematica for debugging, it may be that they are comparing it to a non-interactive system (like batch FORTRAN) that simply crashes and prints out a hexadecimal core dump.
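To make the mismatch concrete, here is a toy rewriter in Python -- entirely my own illustration, with invented names and rules, not Mathematica's internals -- showing why a trace of a rule-based evaluator is a chain of intermediate expressions rather than the stack of function calls a "program-writing" mental model expects.

```python
# Hypothetical toy rewriter (illustration only, not Mathematica).
# Expressions are tuples like ("plus", a, b); rules are (match, act) pairs.

def rewrite_once(expr, rules):
    """Apply the first matching rule to expr; return None if none match."""
    for matches, act in rules:
        if matches(expr):
            return act(expr)
    return None

def trace_eval(expr, rules):
    """Rewrite expr to a fixed point, recording every intermediate form.

    This list of successive expressions is what a Trace-style tool shows:
    no 'program' boundaries, just one expression turning into the next."""
    steps = [expr]
    while True:
        nxt = rewrite_once(steps[-1], rules)
        if nxt is None:
            return steps
        steps.append(nxt)

rules = [
    # 0 + x  ->  x
    (lambda e: isinstance(e, tuple) and e[0] == "plus" and e[1] == 0,
     lambda e: e[2]),
    # n + m  ->  literal sum, when both arguments are integers
    (lambda e: isinstance(e, tuple) and e[0] == "plus"
               and isinstance(e[1], int) and isinstance(e[2], int),
     lambda e: e[1] + e[2]),
]

steps = trace_eval(("plus", 0, ("plus", 2, 3)), rules)
print(steps)   # [('plus', 0, ('plus', 2, 3)), ('plus', 2, 3), 5]
```

The "trace" is the whole rewrite history; there is no call stack to point at, which is exactly the disorientation described above.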
There is a profiler in the Mathematica Workbench. I have not tried it, but from the documentation it seems to suffer from the same viewpoint as Trace. It also requires running Mathematica in "debug" mode; I don't know how that affects timings.
So in regard to item 5: nearly any alternative system with an appealing debugging framework would do better -- which is to say, almost anything that is not bound up in the rule-based paradigm of Mathematica.
> (I don't quite know if mentioning other products in
> this context is permitted here, but do you even have an existing piece
> of software in mind?)
>
> As regards standardizing committees, I don't think their efforts have
> always been positive.
Standards vary in usefulness. They can be affected by politics, by crazy participants with technical disagreements, etc. The deliberations are sometimes more interesting than the results. For example, there is now an IEEE committee (P1788) working on a standard for interval arithmetic. I do not know if anyone at WRI is on the committee. I expect that Mathematica's Intervals will not conform to that standard. Whether this is a positive or negative consequence depends on your viewpoint.
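As a sketch of what such a standard has to pin down, here is a minimal interval-arithmetic fragment in Python -- my own naive illustration, not the IEEE proposal and not Mathematica's Interval. It ignores outward rounding of endpoints (which a real standard must specify) and exhibits the classic dependency problem, one of the semantic corners on which implementations can disagree.

```python
# Naive closed intervals as (lo, hi) pairs; no directed rounding.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_sub(a, b):
    # Subtract by combining opposite endpoints.
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    # The product's range is bounded by the four endpoint products.
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

x = (0.0, 1.0)
# The "dependency problem": x - x certainly contains 0, but naive
# interval evaluation cannot see that both operands are the *same* x,
# so the result widens to [-1, 1] instead of collapsing to [0, 0].
print(i_sub(x, x))   # (-1.0, 1.0)
```

Whether and how a system is allowed (or required) to tighten such results is exactly the kind of question a standards committee argues about.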
> For example, I would say that Fortran 77 has not
> been improved by the later standards, that added enormous complexity,
> and a certain unpredictability in the performance of the more 'advanced'
> constructs.
I think the later standards (now Fortran 2008?) became complicated in part to deal with eventualities that affect real computing circumstances, including (say) newer standardized floating-point operations [exceptions, etc.], multiprocessing, and so on. I doubt that anyone approaches the revision of a standard with the goal of increasing complexity; the complexity is the consequence of addressing some issue...