On 2013-01-11 16:46:58 -0400, Tom Stockfisch said:
> I need to force all double precision calculations to proceed strictly
> in 64-bit -- no 80-bit intel register arithmetic. Can someone tell me
> the current function calls or command line settings to achieve this on
> both MacOS and linux?
>
> I need this to be able to track down platform-dependent differences in
> numerical code.
This is rather compiler-dependent, and different languages have different conventions. Good old RTFM would seem to be your best source of enlightenment.
Such options tend to have names like "fast" or "safe"/"strict" arithmetic, and they are easily confused with options that merely force all intermediates into memory (gcc's -ffloat-store, for example). There is also the issue of fused multiply-adds. Even when all that is sorted out, you can still get differing evaluation orders as complex expressions are compiled. And check the quality of input/output conversions, since full bit accuracy is not always present there either.
In other words, over-precision of intermediates is not the only source of differences between supposedly similar implementations.
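That said, if the x87 unit really is in play, one direct (Linux/glibc-only) approach is to drop its precision-control field to 53-bit significands. A sketch, assuming glibc's <fpu_control.h> (not available on MacOS, where doubles normally go through SSE2 anyway):

```c
/* Linux/glibc sketch: set the x87 precision-control bits to 53-bit
 * ("double") significands.  Caveats: on x86-64 the compiler normally
 * uses SSE2 for doubles, so this control word may be irrelevant; and
 * the x87 exponent range stays extended, so over/underflow behaviour
 * can still differ from true 64-bit arithmetic. */
#include <fpu_control.h>

void set_x87_double_precision(void) {
    fpu_control_t cw;
    _FPU_GETCW(cw);
    cw &= ~_FPU_EXTENDED;   /* clear the precision-control field */
    cw |= _FPU_DOUBLE;      /* 53-bit significand rounding       */
    _FPU_SETCW(cw);
}
```

Call it once at program start, before any floating-point work, and remember that libraries you call may reset the control word behind your back.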
Happy reading of the arcane back sections of whatever compiler manual you are using.