Re: Floating point performance issue

* On Tue Dec 20 11:46:24 +0100 2011, Marc Glisse wrote:
 
> On Tue, 20 Dec 2011, Ico wrote:
> 
> > Hello,
> >
> > I'm running the program below twice with different command line arguments. The
> > argument is used as a floating point scaling factor in the code, but does not
> > change the algorithm in any way. I am baffled by the difference in run time of
> > the two runs, since the program flow is not altered by the argument.
> 
> Hello,
> 
> you are thinking about the program flow in terms of high-level code. Most 
> float operations simply go through the hardware and complete in equal 
> time, but that doesn't include operations on denormals (numbers very close 
> to 0), which are emulated and take forever to complete. Notice that 
> -ffast-math implies "I don't care about that" and makes it fast.

So I could expect 

  gcc -g -ffast-math -O3  test.c

or 

  gcc -g -march=pentium3 -mfpmath=sse -ffast-math -O3  test.c

to solve the issue?

Unfortunately, from what I just tested, it does not.

However, it does when using

 gcc -g -msse -mfpmath=sse -O3 -march=native -ffast-math test.c
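That fits with how -ffast-math works on x86: it arranges for the FTZ (flush-to-zero) and DAZ (denormals-are-zero) bits to be set in the SSE control register MXCSR at startup, which only helps if the math actually goes through SSE (hence -mfpmath=sse; the legacy x87 FPU has no such mode). A sketch of setting those bits directly, assuming an x86 target with SSE:

```c
#include <xmmintrin.h>  /* _mm_getcsr / _mm_setcsr */

/* Set FTZ (bit 15) and DAZ (bit 6) in MXCSR so denormal results and
 * inputs are treated as zero, avoiding the slow microcoded path.
 * This is roughly what -ffast-math arranges at program startup, and
 * it only affects SSE arithmetic, not x87. */
static void enable_ftz_daz(void)
{
    _mm_setcsr(_mm_getcsr() | 0x8040);
}
```

After calling it, a multiplication that would produce a denormal result yields exactly 0.0f instead.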

I will now go read the manual and learn the *exact* meanings of all
these options, to see if I can understand what exactly is going on under
the hood.

Thank you,

Ico


 
-- 
:wq
^X^Cy^K^X^C^C^C^C

