

Newsgroup: sci.math.num-analysis.independent

Topic: integer Vs floating point efficiency?
Replies: 13   Last Post: Dec 5, 2012 8:41 PM


Herman Rubin

Posts: 364
Registered: 2/4/10
Re: integer Vs floating point efficiency?
Posted: Dec 5, 2012 6:58 PM

On 2012-12-04, Gordon Sande <Gordon.Sande@gmail.com> wrote:
> On 2012-12-04 09:58:05 -0400, Rui Maciel said:

>> Gordon Sande wrote:

>>> Micro optimization is rarely of great importance, as the effects of
>>> large-scale algorithm issues dominate in virtually all situations. If you
>>> had one of the situations where instruction timing was an issue, you
>>> probably would not have asked the question. It's the old story of the
>>> price of yachts: if you have to ask, then you probably cannot afford one!


>> This isn't a micro optimization issue. The reason why it's necessary to
>> understand the relative efficiency of certain data types is to be able to
>> make adequate decisions regarding how certain algorithms are implemented.


>> Instruction latency, in this context, is only important to get an estimate
>> of the cost of using specific numerical data types, because if you know
>> beforehand that implementing an algorithm as algorithm<double>() is
>> significantly more or less efficient than implementing it instead as
>> algorithm<long int>(), you will be able to choose the best way to implement
>> it.


>> So, it isn't a micro optimization issue. It's instead a best practices
>> issue.
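
To make that kind of comparison concrete, a minimal sketch is below. The
kernel() here is only a hypothetical stand-in for whatever algorithm<T>()
would actually be, and the timings mean nothing beyond the particular
machine, compiler, and flags used:

#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical stand-in for the real algorithm<T>(): a trivial reduction.
template <typename T>
T kernel(const std::vector<T>& v)
{
    T acc = T(0);
    for (const T& x : v)
        acc += x * x + T(1);
    return acc;
}

template <typename T>
void time_kernel(const char* label)
{
    std::vector<T> v(1 << 20, T(3));             // about a million elements
    auto t0 = std::chrono::steady_clock::now();
    volatile T sink = kernel(v);                 // volatile: keep the call alive
    auto t1 = std::chrono::steady_clock::now();
    (void)sink;
    std::cout << label << ": "
              << std::chrono::duration<double, std::milli>(t1 - t0).count()
              << " ms\n";
}

int main()
{
    time_kernel<double>("double  ");
    time_kernel<long int>("long int");
}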


> Which is why it is unfortunate that you chose to snip the parts about memory
> usage and costs. That is often the most important part of modern good
> practices.
> It used to be that memory was limited and fast, but now it is abundant and
> of varying degrees of slowness. That tends to change processor issues into
> micro optimization issues.


Fast memory is still limited. When Bradford Johnson and I were
working on our paper on fast generation of exponential and normal
random variables, we found that the use of 2^k tables got better
as k went up to 8, essentially as expected, but took a jump backward
going from 8 to 9. The larger the k, the more computationally
efficient the algorithm, so the regression could not have come from
the arithmetic. The degrees of slowness are important, as are
transfers, which affect the instruction flow.

The algorithms compared did not make use of the efficiency of reading
bytes, so that was not the cause of the problem. I had little
difficulty in discerning the reason.
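
For illustration only, and not the algorithm from the paper: a crude
table-lookup loop like the one below lets you watch the per-sample cost
as a 2^k table of doubles grows and eventually stops fitting in the
fastest level of memory.

#include <chrono>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    std::mt19937_64 rng(12345);
    const std::size_t n_samples = std::size_t(1) << 24;

    for (int k = 6; k <= 16; ++k) {
        const std::size_t size = std::size_t(1) << k;

        // Crude exponential-flavoured lookup table; the contents are not
        // the point, only the footprint of size * sizeof(double) bytes.
        std::vector<double> table(size);
        for (std::size_t i = 0; i < size; ++i)
            table[i] = -std::log((i + 0.5) / size);

        auto t0 = std::chrono::steady_clock::now();
        double acc = 0.0;
        for (std::size_t i = 0; i < n_samples; ++i)
            acc += table[rng() & (size - 1)];    // index from k random bits
        auto t1 = std::chrono::steady_clock::now();

        std::cout << "k = " << k << " ("
                  << size * sizeof(double) / 1024.0 << " KiB): "
                  << std::chrono::duration<double, std::nano>(t1 - t0).count()
                         / n_samples
                  << " ns/sample (checksum " << acc << ")\n";
    }
}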

>> Rui Maciel




--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558


