Re: floating point precision on gcc-4 differs using variables or arrays

Eljay Love-Jensen wrote:
Hi Asfand,


All the above is well and good, but please could someone answer the following question? Is it easier for the compiler to work with numbers as variables (float a, float b, etc.) or as arrays (float data[4]) ? By easier, I mean is it easier for it to perform optimisations.


I do not know.

I do know that the only way to really be certain is to profile your code with float, and profile it with double and measure the actual performance difference.

I also know that in C, the float data type is a second-class citizen compared to double. That may cause code pessimization, as the compiler promotes float to double for parameter passing. (C++ makes float a first-class data type, but some C legacy aspects of C++ still haunt float.) Passing in float* and/or working with float arrays will curtail that promotion behavior.

Ah! I'm using C++, as it happens, in an expression-template-based, component-wise vector operations framework. I'll use it in my software 3D engine. I want to make several code-compatible vector classes, so I can build several libraries from my code, each using one of them, and then load the appropriate one depending on the processor the user has.


So: one component-wise one for plain 387 operation, for use on Pentium 2s and other stuff that only has a 387 unit; one SSE 1/2 one (using xmmintrin.h, or whatever it's called :-) for SSE 1/2-capable chips; one 3DNow!+ one for older Duron and Athlon XP chips; one SSE 3 one for my new Opteron 4200+ ( :-) ); one AltiVec-using one for when I own IBM; etc.

Why go to all the trouble?  'Cos it's fun, of course.


In the olden days, the rule of thumb was to avoid floating-point data types in performance-driven games. But on these newfangled CPUs, floating-point data types are as fast as (and in some cases faster than!) integer types. So if you read something that recommends avoiding floating point, it may be out of step with current hardware. Just an FYI.

Actually, Quake required a floating point unit, 'cos it used floats. I think. I tried comparing adding integers to adding floats, and on a Pentium 2 at least, they ended up being nearly the same speed. The trouble comes when converting floating-point coordinate values to integers for rasterisation - I hope GCC's built-in routines are quick enough :-)


Anyway, the floats will be stored in classes, and as many inline functions as possible will be used. So there's not really a need to pass many floats around directly - just "here's a pointer to a scene graph, Mr. 3D Renderer, render it!" or something.

As I said, lots of operations need to be performed on floating-point data (e.g. 3D transformation of objects in memory, rasterisation, etc.), so the smaller the number and the simpler the operation, the better.

I think I'll bung them in a 'float data[4]' array, thanks.

