On 2/18/2013 3:00 AM, djmpark wrote:
> For critical applications isn't one of the best methods to have Team A that
> writes the code, and Team B that tries to break it by throwing input at it?
> It really helps if they hate each other's guts and Team B has skeptical
> people who will be using the application.

Sometimes two teams don't work the way you might think. Many years ago, when people used punched cards, a high-quality operation would have two data-entry people typing from handwritten input: the first would punch holes in a card, and the second would "verify" the card deck -- essentially type the same thing again, and if the two entries agreed, a little notch would be punched at the card's edge. A "verified" deck of cards, it was believed, would have no data-entry errors.
If the original handwritten source was a FORTRAN program, it might have lines like
10 DO 20 I=100,200
And the keypunchers, who might know nothing about the content, might type the digit zero instead of the letter O in one or more places, or the reverse. The verifying keypuncher might get a mismatch on 0 vs. O, and would then simply type the other character so the decks would match. Verification did not, of course, guarantee correctness -- just agreement, perhaps after an initial mismatch.
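The failure mode is easy to sketch. Below is a toy model in Python (everything here is invented for illustration, not a real keypunch workflow): two transcriptions that agree with each other need not agree with the handwritten original.

```python
# Toy model of keypunch verification; all names and data are invented.
HANDWRITTEN = "10 DO 20 I=100,200"  # the intended FORTRAN line

def keypunch(line: str, confuses_zero_with_oh: bool) -> str:
    """Simulate a typist; one prone to the 0/O confusion types the
    letter O wherever the manuscript has a digit zero."""
    return line.replace("0", "O") if confuses_zero_with_oh else line

first = keypunch(HANDWRITTEN, confuses_zero_with_oh=True)
second = keypunch(HANDWRITTEN, confuses_zero_with_oh=True)

deck_verifies = first == second         # the notch gets punched...
deck_is_correct = first == HANDWRITTEN  # ...yet the cards are wrong

print(deck_verifies, deck_is_correct)   # True False
```

The anecdote's verifier, who resolves a mismatch by retyping whatever the first typist punched, is modeled here as both typists sharing the same confusion; either way, matching is all the notch certifies.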
Punch cards and verifier technology pretty much vaporized when programmers became responsible for typing their own programs.
Similarly: my guess is that the two-team approach is not nearly as foolproof as one might think, because the second team may be bulldozed into believing the first team's code represents the best-informed and most appropriate solution, even when it does not. This may especially be the case when (as is common) the application specification is incomplete or even wrong, and what counts as correct is more a matter of opinion than of absolute truth.
Anyway, getting back to Mathematica and Lisp... Since Lisp programs tend to be short, there are fewer opportunities for bugs. Mathematica programs can be short too, but the irregular syntax makes them harder to read. See djmpark's comment about FullForm below. Lisp is like FullForm all the time.
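For readers who haven't looked: FullForm prints the uniform head[arg1, arg2, ...] tree underneath Mathematica's infix syntax -- e.g. FullForm[a + b c] is Plus[a, Times[b, c]], which is essentially the Lisp (+ a (* b c)) with different brackets. A rough analogue of the same idea, sketched in Python with SymPy's srepr (an analogy for illustration, not Mathematica itself):

```python
import sympy as sp

a, b, c = sp.symbols("a b c")
expr = a + b * c

# Infix syntax hides the expression tree; srepr, like Mathematica's
# FullForm, prints the explicit head(args...) structure.
print(sp.srepr(expr))
# A Lisp programmer writes that same tree directly: (+ a (* b c))
```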
Since the semantics of Lisp programs tend to be understood by (at least moderately experienced) programmers, there is less opportunity for a mismatch between specification and code, compared to Mathematica.
Incidentally, I have an excellent name for the "language of Mathematica" but Steve C (the moderator) won't allow me to mention it.

RJF
> David Park
> email@example.com
> http://home.comcast.net/~djmpark/index.html
>
> From: David Bailey [mailto:firstname.lastname@example.org]
>
> On 15/02/2013 06:56, John Doty wrote:
>>> hard to debug programming language (assembler).
>>
>> There are always bugs in non-trivial software.
>>
>> There are always layers of misunderstanding:
>>
>> 1. The engineers (hardware and software) never fully understand the
>> application, and are usually too stubborn to admit this.
>>
>> 2. The programmers never fully understand the hardware, and are usually
>> too stubborn to admit this.
>>
>> 3. The operators never fully understand the machine, and are usually too
>> stubborn to admit this.
>>
>> Hardware is not perfectly reliable, especially in radiology
>> departments where there's much EMI and possibly other problems like
>> stray neutrons (I know from experience that unhardened
>> microcontrollers misbehave near the MGH cyclotron in Boston, even in
>> "shielded" areas). Operators are often distracted, tired, and
>> pressured. And misspelling of silly made-up words is common, too ;-)
>>
>> One must therefore assume that if the hardware can be put into a fatal
>> configuration, it will be at some point. When it actually happens, the
>> retrospective details of how it happened are misleading. The fundamental
>> engineering issue is that one must design so that the ordinary, routine
>> failures do not cascade to fatality. By removing the hardware interlock,
>> the Therac engineers had designed the system to fail.
>
> I would really like to endorse that. I feel that some people like to scoff
> at software developers and their supposedly inadequate methods without
> proposing a viable alternative. For example, program proving seems an
> impossible dream for serious programs, and would in any case require a
> formal specification that might itself contain bugs.
>
> All the most complex artifacts we have are either software, or contain
> large amounts of software.
> Software engineers are routinely required to deliver a level of complexity
> unheard of say 50 years ago - yet some people like to scoff when they
> sometimes fail.
>
> Anything that is extremely complex is susceptible to mistakes -
> particularly if it can't really be tested until it is finished. Take for
> example, the Mars probe that crashed because of a mixup over physical
> units. Clearly such a trivial mistake would be unthinkable in a simpler
> project - I presume it got overlooked because it was hidden among vast
> amounts of other detail.
>
> Anyone using Mathematica (or any other software) for a serious task has to
> take responsibility for the results he/she uses, and even then, there are
> still some risks involved. So for a very trivial example if you decide to
> check:
>
> In:= Integrate[Exp[ax]x,x]
>
> Out= (E^ax x^2)/2
>
> by doing:
>
> In:= D[%,x]
>
> Out= E^ax x
>
> Your check will return the original expression, and maybe lead you to
> believe you have the answer you wanted! Maybe if you recognise that you
> are prone to make that type of mistake, you should examine anything
> important in FullForm - but ultimately the user has to be responsible.
>
> David Bailey
> http://www.dbaileyconsultancy.co.uk
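The Exp[ax] trap quoted above -- where `ax` parses as a single symbol rather than the product `a x`, so differentiating the antiderivative faithfully reproduces the mistyped integrand -- is not unique to Mathematica. A minimal sketch of the same trap in Python with SymPy (the symbol names are just for illustration):

```python
import sympy as sp

x = sp.symbols("x")
ax = sp.symbols("ax")  # one symbol literally named "ax", mimicking the typo

# exp(ax) is a constant with respect to x, so the integral "succeeds"...
wrong = sp.integrate(sp.exp(ax) * x, x)
print(wrong)  # exp(ax)*x**2/2

# ...and the round-trip check also "succeeds": differentiating returns
# the typo'd integrand, not the intended exp(a*x)*x.
check = sp.diff(wrong, x)
print(sp.simplify(check - sp.exp(ax) * x))  # 0
```

The round trip confirms only self-consistency with what was typed, not with what was meant -- which is exactly why inspecting the expression tree matters.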