
Topic: integer Vs floating point efficiency?
Replies: 13   Last Post: Dec 5, 2012 8:41 PM

Gordon Sande

Re: integer Vs floating point efficiency?
Posted: Dec 5, 2012 8:41 PM

On 2012-12-05 19:58:42 -0400, Herman Rubin said:

> On 2012-12-04, Gordon Sande <Gordon.Sande@gmail.com> wrote:
>> On 2012-12-04 09:58:05 -0400, Rui Maciel said:
>
>>> Gordon Sande wrote:
>
>>>> Micro optimization is rarely of great importance, as the effects of
>>>> large-scale algorithm issues dominate in virtually all situations. If
>>>> you had one of the situations where instruction timing was an issue,
>>>> you probably would not have asked the question. It's the old story of
>>>> the price of yachts: if you have to ask, then you probably cannot
>>>> afford one!

>
>>> This isn't a micro optimization issue. The reason why it's necessary to
>>> understand the relative efficiency of certain data types is to be able
>>> to make adequate decisions regarding how certain algorithms are
>>> implemented.

>
>>> Instruction latency, in this context, is only important to get an estimate
>>> of the cost of using specific numerical data types, because if you know
>>> beforehand that implementing an algorithm as algorithm<double>() is
>>> significantly more or less efficient than implementing it instead as
>>> algorithm<long int>(), you will be able to choose the best way to implement
>>> it.

>
>>> So, it isn't a micro optimization issue. It's instead a best practices
>>> issue.
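
For concreteness, here is a minimal sketch of the kind of side-by-side
comparison being described: the same toy kernel instantiated for double and
for long int and timed. The names here (accumulate_squares, time_it) are just
placeholders, not anyone's actual algorithm<>().

// Minimal sketch: one toy kernel, two numeric types, wall-clock timing.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

template <typename T>
T accumulate_squares(const std::vector<T>& v)
{
    T sum = T(0);
    for (const T& x : v)
        sum += x * x;                            // representative arithmetic inner loop
    return sum;
}

template <typename T>
double time_it(std::size_t n)
{
    std::vector<T> v(n, T(3));
    auto t0 = std::chrono::steady_clock::now();
    volatile T result = accumulate_squares(v);   // volatile keeps the work alive
    (void)result;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const std::size_t n = 20'000'000;            // tens of millions of elements
    std::printf("double   : %.3f s\n", time_it<double>(n));
    std::printf("long int : %.3f s\n", time_it<long int>(n));
    // On an LP64 system both types are 8 bytes, so memory traffic matters at
    // least as much as the ALU-versus-FPU distinction.
}

On an LP64 machine both instantiations move the same number of bytes per
element, which is part of the point about memory usage made below.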

>
>> Which is why it is unfortunate that you chose to snip the parts about
>> memory usage and costs. That is often the most important part of modern
>> good practice. It used to be that memory was limited and fast, but now it
>> is abundant and of varying degrees of slowness. That tends to change
>> processor issues into micro optimization issues.

>
> Fast memory is still limited. When Bradford Johnson and I were
> working on our paper on fast generation of exponential and normal
> random variables, we found that the use of 2^k tables got better
> as k went to 8, essentially as expected, but took a jump backward
> going to 9. The larger the k, the more computationally efficient
> the algorithm. The degrees of slowness are important, as are
> transfers, which affect the instruction flow.
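
The table-size effect can be seen with a crude sketch along these lines. This
is not the Rubin/Johnson method, just a plain lookup into a 2^k-entry table of
exponential quantiles, timed for several k; on current hardware the knee shows
up at much larger k than the 8-to-9 jump described above, but the shape is the
same.

// NOT the actual algorithm from the paper -- just a table lookup whose cost
// jumps once the 2^k-entry table no longer fits in a cache level.
#include <chrono>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

// 2^k exponential quantiles -ln(1 - p), at midpoints of equal-probability bins.
std::vector<double> build_table(unsigned k)
{
    std::size_t n = std::size_t(1) << k;
    std::vector<double> t(n);
    for (std::size_t i = 0; i < n; ++i)
        t[i] = -std::log(1.0 - (i + 0.5) / double(n));
    return t;
}

int main()
{
    std::mt19937_64 gen(12345);
    const std::size_t samples = 20'000'000;

    for (unsigned k = 8; k <= 24; k += 2) {
        std::vector<double> table = build_table(k);
        const std::size_t mask = table.size() - 1;   // size is a power of two

        auto t0 = std::chrono::steady_clock::now();
        double sum = 0.0;
        for (std::size_t i = 0; i < samples; ++i)
            sum += table[static_cast<std::size_t>(gen()) & mask];  // random, cache-unfriendly lookups
        auto t1 = std::chrono::steady_clock::now();

        std::printf("k = %2u  (%9zu entries)  %.3f s  (mean %.4f)\n",
                    k, table.size(),
                    std::chrono::duration<double>(t1 - t0).count(),
                    sum / double(samples));
    }
    // Once 2^k * 8 bytes outgrows a cache level the random lookups slow down,
    // even though a larger table costs no extra arithmetic per sample.
}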


The fast memory is a cache for a slower memory. When you exceed the cache size,
the memory becomes slower. On my desktop there is a 32 kB fast cache, a 12 MB
medium-speed cache, and 6 GB of main memory, plus even more paging memory. So I
can pretend to have lots of memory if I ignore the differing speeds, but I have
to pay attention to the 32 kB or 12 MB limits for other purposes. In the bad old
days there was 32 kW (128 kB) and that was it, except for do-it-myself use of
tapes.

In the bad old days the processor never waited for any memory and was not
confused by instruction pipelining. Some complexity models are based on those
bad-old-days designs, with minor simplifications such as counters never
overflowing due to finite word sizes, etc.
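
Those limits can be seen directly with a pointer-chasing sweep: walk a randomly
permuted buffer of increasing size and watch the time per access step up as the
working set passes each cache level. A rough sketch (the 32 kB and 12 MB
figures above are this particular desktop; the cliffs land elsewhere on other
machines):

// Dependent loads through a random cyclic permutation, so each access waits
// for the previous one and prefetching cannot hide the latency.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main()
{
    std::mt19937 gen(1);
    const std::size_t accesses = 20'000'000;

    for (std::size_t bytes = 16 * 1024; bytes <= 64 * 1024 * 1024; bytes *= 2) {
        const std::size_t n = bytes / sizeof(std::size_t);

        // Build a single random cycle over all n slots.
        std::vector<std::size_t> order(n);
        std::iota(order.begin(), order.end(), std::size_t(0));
        std::shuffle(order.begin(), order.end(), gen);
        std::vector<std::size_t> next(n);
        for (std::size_t i = 0; i < n; ++i)
            next[order[i]] = order[(i + 1) % n];

        std::size_t p = order[0];
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < accesses; ++i)
            p = next[p];                             // one outstanding load at a time
        auto t1 = std::chrono::steady_clock::now();
        volatile std::size_t sink = p;               // keep the chase from being optimized out
        (void)sink;

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count()
                    / double(accesses);
        std::printf("%8zu kB   %6.2f ns/access\n", bytes / 1024, ns);
    }
    // Expect plateaus at roughly the L1, last-level-cache, and DRAM latencies,
    // with steps near the cache-size boundaries mentioned above.
}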

> The algorithms compared did not make use of the efficiency of reading
> bytes, so this was not the cause of the problem. I had little
> difficulty in discerning the reason.
>

>>> Rui Maciel




