Hi!

I'm trying to build Git with PGO (for a private distribution) and I
have two questions about the specifics of the profiling process.

1. The INSTALL doc says that the profiling pass has to run the test
suite using a single CPU, and the Makefile `profile` target also
encodes this rule:

> As a caveat: a profile-optimized build takes a *lot* longer since the
> git tree must be built twice, and in order for the profiling
> measurements to work properly, ccache must be disabled and the test
> suite has to be run using only a single CPU. <...>

( https://github.com/git/git/blob/master/INSTALL#L54-L59 )

> profile:: profile-clean
> 	$(MAKE) PROFILE=GEN all
> 	$(MAKE) PROFILE=GEN -j1 test
> 	@if test -n "$$GIT_PERF_REPO" || test -d .git; then \
> 		$(MAKE) PROFILE=GEN -j1 perf; \

( https://github.com/git/git/blob/master/Makefile#L2350-L2352 )

However, some cursory searching tells me that gcc is equipped to handle
concurrent runs of an instrumented program:

> > It is unclear to me if one can safely run multiple processes
> > concurrently.
> > is there any risk of corruption or overwriting of the various
> > "gcda" files if different processes attempt to write on them?
>
> The gcda files are accessed by proper locks, so you should be sa[f]e.

( https://gcc-help.gcc.gnu.narkive.com/0NItmccw/is-it-safe-to-generate-profiles-from-multiple-concurrent-processes#post1 )

As far as I understand, the profiling data collected does not include
timing information or any performance counters. What am I missing? Why
can't the test suite be run in parallel during the profiling pass?
(The parallel variant I have in mind is sketched at the end of this
message.)

2. The performance test suite (t/perf/) uses up to two git
repositories ("normal" and "large") as test data to run git commands
against. Does the internal organization of these repositories matter?
That is, does it matter whether they are repositories shaped by
real-world use, with overlapping packs, cruft, loose objects, many
refs and so on, or can I simply use fresh clones of git.git and
linux.git without loss of profile quality? (Again, see the sketch
below.)
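For concreteness, this is the parallel profiling pass I have in mind
for question 1. It is only a sketch: it mirrors the `profile` recipe
quoted above, except that the test suite runs with `-j"$(nproc)"`
instead of `-j1`, which is exactly the deviation I am asking about:

    # Build an instrumented git, then run the test suite in parallel
    # to collect profile data (INSTALL and the Makefile say -j1 here).
    make profile-clean
    make PROFILE=GEN all
    make PROFILE=GEN -j"$(nproc)" test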
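And for question 2, this is the setup I have in mind, assuming fresh
clones turn out to be acceptable (the paths are placeholders;
GIT_PERF_REPO appears in the recipe quoted above, and
GIT_PERF_LARGE_REPO is its counterpart for the "large" repository, if
I read t/perf/README correctly):

    # Fresh clones as perf test data.
    git clone https://github.com/git/git.git /tmp/perf-git
    git clone https://github.com/torvalds/linux.git /tmp/perf-linux

    # Point the perf suite at them for the profiling pass.
    GIT_PERF_REPO=/tmp/perf-git \
    GIT_PERF_LARGE_REPO=/tmp/perf-linux \
        make PROFILE=GEN -j1 perf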
Thanks,

-- 
Ivan Shapovalov / intelfx /