Investigating why gcc generates different code for two PGO builds

We turned on PGO for Firefox builds on Linux a few weeks ago.
Although PGO was a large performance win, it also dramatically
increased the variance in our automated performance test results.  It
turns out that the extra variance comes from differences between the
binaries themselves rather than from run-to-run noise: separate PGO
builds of identical source perform measurably differently.  I've been
trying to understand why this is happening and what we can do to
mitigate the issue.

I have two builds, generated from the same source, that perform
differently on a benchmark.  Both builds were profiled on the same
machine, and the profiling data was generated by a non-interactive
script.  We're using gcc 4.5.2, and this particular analysis is on
32-bit Linux.
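
For reference, the builds follow the usual two-pass gcc PGO scheme.
The sketch below is not our actual Firefox build invocation; the file
names, the benchmark script, and -O3 are placeholders:

    # pass 1: build an instrumented binary that writes .gcda profile data
    gcc -O3 -fprofile-generate -o app-instrumented $SOURCES
    ./run-benchmark app-instrumented   # non-interactive profiling run

    # pass 2: rebuild from the same source using the collected profile
    gcc -O3 -fprofile-use -o app $SOURCES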

I found that gcc is generating substantially different code for one of
the benchmark's hot functions.  The function is too complex for me to
follow its assembly, so it's not clear to me what the differences
imply about the assumptions the compiler is making.
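
(For the curious: I'm comparing the two builds' generated code roughly
like this; the object file name is a placeholder for wherever the hot
function ends up.)

    # disassemble the hot function's object file from each build and diff
    objdump -d -C build1/hot.o > build1.asm
    objdump -d -C build2/hot.o > build2.asm
    diff -u build1.asm build2.asm | less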

What's strange is that when I examine the file generated by |gcov
-abfu objfile|, the branch probabilities for the two builds are
identical to at least two significant figures.
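
Concretely, that comparison is along these lines (paths are
placeholders; -a/-b/-f/-u are gcov's all-blocks, branch-probabilities,
function-summaries, and unconditional-branches options):

    # annotate each build's source with its recorded branch data
    (cd build1 && gcov -a -b -f -u path/to/hot.o)
    (cd build2 && gcov -a -b -f -u path/to/hot.o)
    diff -u build1/hot.cpp.gcov build2/hot.cpp.gcov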

I'd like to understand how the profiling data I have resulted in these
two different binaries.  It seems most likely that gcov isn't telling
me the whole story; perhaps gcc is using the -fprofile-values data to
make its decisions.  If so, how do I examine that data?
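
In case it helps frame the question, the kind of thing I'm imagining
is dumping the raw counters and the compiler's own view of them,
roughly as below.  I'm not certain gcov-dump ships with every gcc 4.5
package, and the file names here are just placeholders:

    # dump the raw profile records (including value-profile counters)
    gcov-dump -l build1/hot.gcda > build1.gcda.txt
    gcov-dump -l build2/hot.gcda > build2.gcda.txt
    diff -u build1.gcda.txt build2.gcda.txt

    # or recompile the hot file with the profile plus internal dumps,
    # then compare what the optimizers actually saw in each build
    gcc -O3 -fprofile-use -fdump-tree-all -fdump-ipa-all -c hot.cpp

I haven't verified that these dumps actually expose the value-profile
data, though; that's part of what I'm asking.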

Any other ideas are also appreciated.  I'd really like to get this figured out!

Regards,
-Justin

If you want to follow along at home, the Mozilla bug is
https://bugzilla.mozilla.org/show_bug.cgi?id=653961

