On Tue, 20 Dec 2011, Ico wrote:
Hello,
I'm running the program below twice with different command line arguments. The
argument is used as a floating point scaling factor in the code, but does not
change the algorithm in any way. I am baffled by the difference in run time of
the two runs, since the program flow is not altered by the argument.
Hello,
you are thinking about the program flow in terms of high-level code. Most
float operations simply go through the hardware and complete in equal
time, but that doesn't hold for operations on denormals (numbers very close
to 0), which are handled by slow microcode or software assists and can take
orders of magnitude longer. Notice that -ffast-math effectively says
"I don't care about that" (on x86 it enables flush-to-zero) and makes it fast.
--
Marc Glisse