Parallelizing profile-driven optimization

I want to speed up the following process through parallelization. Is it safe to run different instances of the training set in parallel? Will the profiling information of one run overwrite, rather than accumulate with, that of a parallel run? Are there race conditions?

I am using profile-driven optimization with the following steps:

  1. Run g++ 4.4.3 with -fprofile-generate=$HOME/project
  2. Run the resulting executable with a training set
  3. Rerun g++ with exactly the same parameters, but pass
     -fprofile-use=$HOME/project instead of -fprofile-generate


I can parallelize step 1 with "make -j2", and similarly step 3. This does not help me much,
because step 2 takes most of the time.

My question: can I safely run several instances of step 2 in parallel and end up with exactly the same profiling information, without any fear of profile data being lost or corrupted?

Thanks
   Michael
