In article <firstname.lastname@example.org>, David Bailey <email@example.com> wrote:
> On 15/02/2013 06:56, John Doty wrote:
>>> hard to debug programming language (assembler).
>>
>> There are always bugs in non-trivial software.
>>
>> There are always layers of misunderstanding:
>>
>> 1. The engineers (hardware and software) never fully understand the
>> application, and are usually too stubborn to admit this.
>>
>> 2. The programmers never fully understand the hardware, and are usually
>> too stubborn to admit this.
>>
>> 3. The operators never fully understand the machine, and are usually too
>> stubborn to admit this.
>>
>> Hardware is not perfectly reliable, especially in radiology departments
>> where there's much EMI and possibly other problems like stray neutrons (I
>> know from experience that unhardened microcontrollers misbehave near the
>> MGH cyclotron in Boston, even in "shielded" areas). Operators are often
>> distracted, tired, and pressured. And misspelling of silly made-up words
>> is common, too ;-)
>>
>> One must therefore assume that if the hardware can be put into a fatal
>> configuration, it will be at some point. When it actually happens, the
>> retrospective details of how it happened are misleading. The fundamental
>> engineering issue is that one must design so that the ordinary, routine
>> failures do not cascade to fatality. By removing the hardware interlock,
>> the Therac engineers had designed the system to fail.
>
> I would really like to endorse that. I feel that some people like to
> scoff at software developers and their supposedly inadequate methods
> without proposing a viable alternative. For example, program proving
> seems an impossible dream for serious programs, and would in any case
> require a formal specification that might itself contain bugs.
>
> All the most complex artifacts we have are either software, or contain
> large amounts of software. Software engineers are routinely required to
> deliver a level of complexity unheard of say 50 years ago - yet some
> people like to scoff when they sometimes fail.
>
> Anything that is extremely complex is susceptible to mistakes -
> particularly if it can't really be tested until it is finished. Take for
> example the Mars probe that crashed because of a mixup over physical
> units. Clearly such a trivial mistake would be unthinkable in a simpler
> project - I presume it got overlooked because it was hidden among vast
> amounts of other detail.
I think there was also software reuse involved, if I recall correctly.
Anyway, for comparison, the large radars I work on typically have 10^6 lines of C/C++ in them, and the command and control systems are at least a factor of ten larger still.
Suffice it to say that getting most of the bugs found and fixed takes a very large effort spread out over at least a few years, if all goes well, which is not always the case.
My impression is that Mathematica is tens of millions of lines of their own variant of C. What saves Wolfram is that this codebase accumulated over the years, rather than being done in one big effort.