On 11/24/2010 2:07 PM, Bob Plantz wrote:
On 11/24/2010 7:08 AM, Tim Prince wrote:
It depends. For Intel CPUs, there is no performance gain for single-precision
scalar operations, other than divide and sqrt, where SSE performance could be
matched only by the (impractical) setting of 24-bit 387 precision. Current
CPUs (and compilers) deliver vectorization performance gains in a wider
variety of situations than early SSE CPUs did.
It's not that I don't want the excess precision (it looks like a Good
Thing to me; fewer rounding errors). It's that I'm not sure I can
count on it being there. Would the above routine be a good enough
check to see whether excess precision is there?
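The routine in question isn't quoted above, but one common kind of check
looks like this (a sketch only; the volatile qualifiers keep the compiler
from folding the arithmetic away at compile time):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* tiny is a quarter of an ulp of 1.0: it is lost if the sum is
       rounded to 53-bit double, but survives in an 80-bit x87 register. */
    volatile double big = 1.0;
    volatile double tiny = DBL_EPSILON / 4.0;
    double r = (big + tiny) - big;

    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);  /* C99 macro */
    printf(r != 0.0 ? "excess intermediate precision present\n"
                    : "intermediates rounded to double\n");
    return 0;
}

With gcc the outcome depends on the build options (-mfpmath=387 vs.
-mfpmath=sse), so a check like this reflects how the program was compiled
rather than the hardware alone.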
The consistently reliable way to use x87 80-bit precision is with
long double data types. This precludes many important optimizations.
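For illustration, a sketch assuming x86 gcc, where long double is the 80-bit
extended format with a 64-bit mantissa:

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("DBL_MANT_DIG  = %d\n", DBL_MANT_DIG);   /* 53 */
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);  /* 64 on x86 gcc */

    /* 2^-60 is below the last bit of a double near 1.0, but well within
       the 64-bit mantissa of the x87 extended format. */
    long double a = 1.0L;
    long double b = 0x1p-60L;          /* C99 hex float constant */
    long double s = (a + b) - a;

    printf("long double keeps the small term: %s\n",
           s != 0.0L ? "yes" : "no");
    return 0;
}

Written this way the extended precision is part of the types rather than an
accident of register allocation, but, as noted, it shuts out SSE
vectorization and other optimizations.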
I'm not a Windows programmer, but I ran across
this: http://msdn.microsoft.com/en-us/library/ff545910(VS.85).aspx
which may raise some concerns about portability of code.
--Bob
As Windows intends to turn off 80-bit mode entirely, by requiring 32-bit
apps to set 53-bit precision mode and by setting that mode in the OS before
handing control to a 64-bit app, you face a different set of problems there.
The normal expectation for compilers that use the Microsoft libraries
(including mingw gcc) is that you don't attempt 64-bit precision mode.
Perhaps one of your points is that the Microsoft libraries aren't validated
for the case where you change away from the expected 53-bit precision mode.
But that seemed to me outside the original scope of the thread, particularly
since libraries not supported by/for gcc aren't normally discussed here, and
Microsoft certainly doesn't discuss gcc.
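For reference, a sketch of how the precision-control field can be inspected
(and, against the advice above, changed) through the Microsoft CRT; it
assumes a 32-bit Windows build, since as I understand it _MCW_PC isn't
supported in 64-bit builds:

#include <stdio.h>
#include <float.h>      /* _controlfp_s, _MCW_PC, _PC_24/_PC_53/_PC_64 */

int main(void)
{
    unsigned int cw = 0;

    /* mask = 0: just read the current control word. */
    _controlfp_s(&cw, 0, 0);
    printf("x87 precision mode: %s\n",
           (cw & _MCW_PC) == _PC_64 ? "64-bit" :
           (cw & _MCW_PC) == _PC_53 ? "53-bit" : "24-bit");

    /* Requesting 64-bit mode is possible here, but the Microsoft
       libraries are only validated for the default 53-bit setting. */
    _controlfp_s(&cw, _PC_64, _MCW_PC);
    return 0;
}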
--
Tim Prince