On 11/24/2010 7:08 AM, Tim Prince wrote:
It depends. On Intel CPUs there is no performance gain for
single-precision scalar operations, other than divide and sqrt, where
SSE performance could be matched only by the (impractical) expedient of
setting 24-bit 387 precision. Current CPUs (and compilers) deliver
vectorization performance gains in a wider variety of situations than
early SSE CPUs did.
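For concreteness, a minimal sketch of that 24-bit precision setting. It
assumes x86 Linux with glibc's <fpu_control.h>; the MSVC equivalent in
the comment is an assumption on my part, not something from the thread:

    /* Sketch only: force the x87 precision-control field to 24-bit
       (single) rounding, the "impractical" setting mentioned above.
       Assumes glibc's <fpu_control.h>; on MSVC the rough equivalent
       would be _controlfp(_PC_24, _MCW_PC). */
    #include <fpu_control.h>

    static void set_387_single_precision(void)
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);                           /* read x87 control word */
        cw = (cw & ~_FPU_EXTENDED) | _FPU_SINGLE; /* PC bits -> 24-bit */
        _FPU_SETCW(cw);                           /* write it back */
    }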
It's not that I don't want the excess precision (it looks like a Good
Thing to me; fewer rounding errors). It's that I'm not sure I can count
on it being there. Would the above routine be a good enough check to
see whether excess precision is present?
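One common shape for such a check (a sketch only, not necessarily the
routine referred to above) is to test whether an intermediate result
keeps bits below double's last place:

    /* Sketch of an excess-precision probe.  1.0 + DBL_EPSILON/2 rounds
       back to 1.0 in 53-bit arithmetic, but the sum is exact in an
       80-bit x87 register, so the difference is nonzero only when
       excess precision is present.  The volatile is an attempt to keep
       the compiler from constant-folding the whole expression. */
    #include <float.h>
    #include <stdio.h>

    static int have_excess_precision(void)
    {
        volatile double half_eps = DBL_EPSILON / 2.0;
        double sum = 1.0 + half_eps;  /* may stay in an x87 register */
        return (sum - 1.0) != 0.0;    /* nonzero only beyond 53 bits */
    }

    int main(void)
    {
        printf("excess precision: %s\n",
               have_excess_precision() ? "yes" : "no");
        return 0;
    }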
The consistently reliable way to use x87 80-bit precision is with long
double data types. This precludes many important optimizations.
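A short illustration of that approach, assuming an x86 target where
long double maps to the 80-bit x87 format (LDBL_MANT_DIG == 64); on
MSVC long double is just 53-bit double, which is part of the
portability problem:

    /* Sketch: request extended precision explicitly rather than
       relying on what the x87 registers happen to hold. */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        long double acc = 0.0L;
        for (int i = 1; i <= 1000000; i++)
            acc += 1.0L / i;          /* 64-bit-significand accumulation */
        printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
        printf("harmonic sum  = %.20Lg\n", acc);
        return 0;
    }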
I'm not a Windows programmer, but I ran across this: http://msdn.microsoft.com/en-us/library/ff545910(VS.85).aspx
which may raise some concerns about the portability of such code.
--Bob