Re: Profiling compilation time


Ian Lance Taylor wrote:
Yang Zhang <yanghatespam@xxxxxxxxx> writes:

Ian Lance Taylor wrote:
Use the -ftime-report option.
Neat!  Is there something similar for breaking down the time into time
spent on individual files so that I can see which #included files are
the most costly?  (If you treat the compiler as a template language
interpreter then you can even imagine getting call-graph profiling
results.)

That's a good idea.  Unfortunately, I don't think there is anything like
that at present.  It would be hard, and perhaps meaningless, to do that
for the IPA passes, but it could be done for the frontends and for the
general optimization passes (where I suppose the time for inlined
functions would go to the function into which they are inlined).

Ian

For me, -ftime-report shows that compilation time is by far dominated by parsing, so that's what I'm most interested in. To that end, would a valid poor man's profiling approach be to simply measure the time of #including individual files into an otherwise empty source file? Or would it be necessary to actually "trigger" the parsing somehow by using the headers' contents (particularly for templated entities)?
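The "poor man's" approach above can be sketched as a small shell loop: for each header of interest, generate a translation unit containing only that #include and time a syntax-only compile. This is a minimal sketch, assuming g++ is on the PATH; the header list is illustrative, and the measured time covers only parsing declarations, not template instantiation (to capture instantiation cost, the probe file would also need to use or explicitly instantiate the templates, e.g. `template class std::vector<int>;`).

```shell
# Time the cost of compiling each header in isolation by building a
# probe translation unit that contains only that #include.
for hdr in vector map algorithm; do
    printf '#include <%s>\n' "$hdr" > probe.cpp
    echo "== <$hdr> =="
    # -fsyntax-only stops after parsing/semantic analysis, which is
    # the phase dominating the time here anyway.
    time g++ -fsyntax-only probe.cpp
done
rm -f probe.cpp
```

Note that headers parsed together can be cheaper than the sum of their isolated times (shared guards, common sub-includes), so these numbers give a rough upper bound per header rather than an exact attribution.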
--
Yang Zhang
http://www.mit.edu/~y_z/
