On Dec 3, 10:10 am, Herman Rubin <hru...@skew.stat.purdue.edu> wrote:
> On 2012-12-01, Rui Maciel <rui.mac...@gmail.com> wrote:
>
> > Gratuitous cross-posting to extend this query to s.m.num-analysis.
>
> > Rui Maciel wrote:
> >> Is there any information on the relative efficiency of modern CPUs with
> >> regards to algebraic operations on integer and floating point data types?
> >> If there is, where can I get my hands on it?
>
> >> Thanks in advance,
> >> Rui Maciel
>
> It depends on the operations. Unless the operations are integer
> operations, there is always conversion of integer to float. This
> conversion is not that efficient; it typically requires putting
> the integer in an integer register, shifting if necessary, placing
> an exponent in that register, putting it back in memory, reading
> it into a floating point register, and subtracting what the result
> would have been if it were 0. Even if memory references were not
> needed, and shifting not needed, and the same registers could be
> used for integers and floats, there are at least two operations
> required if the needed constants were already in memory. This
> is converting "single precision" integers into "double precision"
> floats.
>
> Getting the integer part of a float is even worse if not hardware.
On Intel, instructions are provided to convert directly from integer to float and vice versa (e.g. CVTSI2SD and CVTTSD2SI in SSE2), without referencing memory.
But the OP wanted to know the speed difference between performing operations on integers and performing operations on floats, not the cost of converting between integers and floats.