On Sat, 9 Sep 2006, Linus Torvalds wrote:
>
> I _suspect_ that you profiled using "gprof" and the "-pg" flag to
> the compiler?

Btw, in the absence of oprofile (which is very useful, and has the
advantage that you can run many programs multiple times and get a
"combined data" output from it all), what you can do (and which is more
reliable) is:

 - the "minor pagefault" count that /usr/bin/time prints out is very
   useful. It gives a pretty much direct view into what the total
   memory use of the program was over its lifetime, in a way that very
   few other things do.

 - using "gprof" to find _potential_ hotspots, but then not trusting
   the profiling numbers at all for actual improvement: simply
   recompile (without profiling) and time it for real after any change.
   The real timings will almost certainly not match what you thought
   you'd get from profiling, but now they'll be real numbers.

 - using "-pg" to link in the profiling code, BUT ONLY AT THE FINAL
   LINK TIME! This will give you the "% time" and "cumulative seconds"
   part, but it will mean that you will _not_ get the call graph,
   because now the actual code generation is not affected, and the
   compiler won't be inserting all the call-graph-generation code.

Note that the last use makes gprof much more accurate, but it also
means that it won't _work_ very well for things that are fast. You
usually have a 1/100th-of-a-second profiling tick, so anything that
runs for less than a second won't have much of a profile. So this only
works for longer-running things.

		Linus
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html