>Sometimes numerical solutions "blow up" because of instability in the
>numerical scheme. (To see this for yourself, discretize the heat
>equation u_t=u_xx, with say zero boundary data and something like a
>sawtooth initial condition, and play with the sizes of the t-step and
>the x-step; you should very quickly be able to make your solution blow
>up. This heat equation has a "closed form" solution which converges
>to the zero function, and it is known that for t>0, it is an analytic
>function; in particular, the real solution does not blow up.)
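The quoted experiment is easy to reproduce. Here is a minimal sketch (my own illustration, not the poster's code) of the explicit forward-time, centred-space scheme for u_t=u_xx with zero boundary data and a sawtooth initial condition; the scheme is stable only when the mesh ratio r = dt/dx^2 is at most 1/2, and above that threshold the sawtooth mode is amplified at every step:

```python
# Explicit (forward-time, centred-space) scheme for u_t = u_xx on [0, 1],
# zero boundary values, sawtooth initial condition.

def heat_max_after(steps, nx, r):
    """Run `steps` explicit updates and return max |u|.
    r = dt/dx^2 is the mesh ratio that controls stability."""
    # Sawtooth initial condition: alternating +1/-1 at interior points.
    u = [0.0] + [(-1.0) ** i for i in range(1, nx)] + [0.0]
    for _ in range(steps):
        u = ([0.0] +
             [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
              for i in range(1, nx)] +
             [0.0])
    return max(abs(v) for v in u)

print(heat_max_after(50, nx=20, r=0.4))   # r <= 1/2: decays toward zero
print(heat_max_after(50, nx=20, r=0.6))   # r > 1/2: grows without bound
```

Playing with `r` around the 1/2 threshold shows exactly the behaviour described: the true solution decays to zero, but the discretized one can be made to blow up at will.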
One answer is to use simulation rather than computation - which is what I've been talking about all along. The final test for an aircraft design, before the maiden flight, is the wind tunnel; equations alone won't do. Well, we can set up a simulated wind tunnel inside our computers, and we don't necessarily need to put in the Navier-Stokes equations at all, in either their closed or their numerical form.
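One concrete way to simulate a fluid without ever writing down Navier-Stokes is a lattice-gas automaton. The sketch below (my own toy, in the style of the HPP model) moves particles on a grid with a purely local collision rule; real lattice-gas and lattice-Boltzmann models need considerably more care to reproduce actual fluid dynamics, but the point stands that no differential equation appears in the program:

```python
import random

# Toy HPP-style lattice gas: particles hop on a grid and collide by a
# local rule; no Navier-Stokes equations appear anywhere in the program.

W, H = 32, 32
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # E, W, N, S

random.seed(0)
# state[y][x] holds four occupation flags, one per direction.
state = [[[random.random() < 0.3 for _ in range(4)] for _ in range(W)]
         for _ in range(H)]

def step(state):
    # Collision: an exact head-on pair (E+W or N+S) rotates 90 degrees;
    # this conserves particle number and momentum in each cell.
    for row in state:
        for cell in row:
            e, w, n, s = cell
            if e and w and not n and not s:
                cell[:] = [False, False, True, True]
            elif n and s and not e and not w:
                cell[:] = [True, True, False, False]
    # Streaming: each particle moves one cell along its direction,
    # with periodic (wrap-around) boundaries.
    new = [[[False] * 4 for _ in range(W)] for _ in range(H)]
    for y in range(H):
        for x in range(W):
            for d, (dx, dy) in enumerate(DIRS):
                if state[y][x][d]:
                    new[(y + dy) % H][(x + dx) % W][d] = True
    return new

count = sum(sum(cell) for row in state for cell in row)
for _ in range(10):
    state = step(state)
print(count, "particles before and after:",
      sum(sum(cell) for row in state for cell in row))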
>On the other hand, sometimes numerical solutions blow up because the
>real solution blows up (e.g., solve the ordinary differential equation
>dy/dx=y^2, with initial condition y(0)=1).
>
>In the case of N-S, it is unknown whether the observed numerical
>blowups are due to numerical instability or to the mathematics.
>
>Throwing more computer power at this will never tell us whether the
>blowups are real or not, and therefore will not tell us how accurate
>our approximate solutions are. (If the numerics blow up but the real
>solution does not, then your error is infinite, so you have no
>accuracy at all!) Obviously it is of some practical importance to
>know how accurate the numerical solutions are.
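The quoted ODE is a clean example of a genuine blowup: dy/dx=y^2 with y(0)=1 has the closed-form solution y=1/(1-x), which diverges as x approaches 1, so here a numerical blowup mirrors the mathematics rather than the arithmetic. A quick forward-Euler check (my own illustration):

```python
# dy/dx = y^2, y(0) = 1 has the closed-form solution y = 1/(1 - x),
# which genuinely diverges as x -> 1.

def euler(f, y0, x_end, n):
    """Forward-Euler integration of dy/dx = f(y) from x = 0 to x_end."""
    h = x_end / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

print(euler(lambda y: y * y, 1.0, 0.9, 100_000))  # close to the exact value
print(1.0 / (1.0 - 0.9))                          # exact: 10
```

Pushing `x_end` toward 1 makes the numbers grow without bound no matter how small the step is, because the true solution itself does.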
Many blowups due to numerical instability can be negotiated by increasing the precision. Is 64 bits not enough? Build a CPU with 4096 bits, or 16384, or whatever. It may take longer, but it may get a bit further. The question isn't simply whether the computation blows up; it is whether it blows up because we're running it on the hardware we happen to have, and that hardware can be pretty limited depending on the application at hand. After all, these are "general purpose" computers, eh? Numerical instability is a problem of the machine, but it arises from the arithmetic generated by the mathematical model - so if I can set things up "ab ovo" in a way that bypasses the equations altogether, maybe that's worth a try.
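Extra precision does buy distance, at least for blowups driven by rounding error. A standard toy case (my own illustration, using Python's `decimal` module rather than a wide CPU): the recurrence x_{n+1} = 4*x_n - 1 with x_0 = 1/3 has the exact fixed point 1/3, but every rounding error in the starting value is multiplied by 4 per step, so the fewer digits you carry, the sooner the iterates run away:

```python
from decimal import Decimal, getcontext

# x_{n+1} = 4*x_n - 1 with x_0 = 1/3 stays at 1/3 exactly, but any
# rounding error in x_0 is amplified by a factor of 4 at every step.

def iterate(x0, steps):
    x = x0
    for _ in range(steps):
        x = 4 * x - 1
    return x

bad = iterate(1.0 / 3.0, 60)                  # 64-bit float: far from 1/3

getcontext().prec = 60                        # carry ~60 decimal digits
good = iterate(Decimal(1) / Decimal(3), 60)   # still very close to 1/3
print(bad, float(good))
```

With 60 digits the same 60 steps stay on target; run it long enough and the high-precision version drifts too - more bits postpone the blowup, they don't abolish it.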
>This is not directly relevant to a K-12 education discussion, but it
>does show that raw computational power may not be sufficient to obtain
>correct answers from a model.
If we had orders of magnitude more computational power than we have today, we could run the numerical method through an arbitrary-precision numerical package written in Lisp, and the problem would vanish. Our lack of computational power shows clearly in the effort we put into studying and standardizing things such as floating-point representations and arithmetic: we have 64-bit arithmetic because 32-bit is not enough, 80-bit arithmetic because 64-bit is not enough, 128-bit arithmetic because none of the others is enough, and so on. The bottom line is that, with vastly more computational power than we have today, we wouldn't need floating-point representations at all: we could do our computations the way people do them by hand, with the precision varying according to the need of each individual operation at the very spot where it happens.
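That kind of precision-on-demand already exists in software, if you are willing to pay for it in time and memory. A small sketch (my own, using Python's exact `fractions.Fraction` instead of Lisp): Newton's iteration for the square root of 2, done in exact rational arithmetic, never rounds at all - the numerators and denominators simply grow as far as the computation needs them to:

```python
from fractions import Fraction

# Exact rational arithmetic: precision grows on demand instead of being
# fixed at 32/64/80/128 bits. Newton's iteration for sqrt(2) roughly
# doubles both the number of correct digits and the size of the
# numerator/denominator at every step.

x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2            # exact Newton step for x^2 = 2
    print(x, "-", x.denominator.bit_length(), "bits in the denominator")
```

After five steps the iterate agrees with sqrt(2) to more than twenty digits, and no floating-point representation was ever chosen - which is exactly the trade being described: unbounded precision bought with raw computational power.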