Re: How do parallel builds scale?

Hi Ralf,

Thanks for your feedback!

Ralf Wildenhues <Ralf.Wildenhues@xxxxxx> writes:

> * Ludovic Courtès wrote on Thu, Mar 03, 2011 at 04:42:52PM CET:
>> I ran a series of build time measurements on a 32-core machine, with
>> make -jX, with X in [1..32], and the results are available at:
>> 
>>   http://hubble.gforge.inria.fr/parallel-builds.html
>
> Thank you!  Would you be so kind and also describe what we see in the
> graphs?  I'm sorry but I fail to understand what they are showing, what
> the axes really mean, and how to interpret the results.

Y is the number of packages with a speedup <= X; for instance, a point
at (2, 10) means that 10 packages achieved a speedup of at most 2.
Does that help?

The first series of curves considers all the packages that were built;
the second series considers the 25% of packages with the longest
sequential build time, etc.

Within each series, there's one graph for the overall build time, one
for the 'build' phase ('make'), and one for the 'check' phase ('make
check').

I'm open to suggestions on how to improve the presentation since
apparently there's room for improvement.  ;-)

>> There are packages whose configuration phase is noticeably longer than
>> the build time.
>
> Yes, we knew that.  Can you please also mention whether you used a
> config.site file?

No.

> Since using a config.cache file for one-time builds is not relevant,
> I'm assuming that is not necessary to know.  But it would be fairly
> cool to know how development could be sped up.  E.g., one thing you
> could try is: after running 'configure -C' once, save the config.cache
> file somewhere, remove the build directory, and rerun configure with
> CONFIG_SITE pointing to that saved cache file.  That could give a
> more realistic impression of how expensive the configure overhead is
> while developing.  (I understand that that isn't so interesting for a
> distribution.)
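
If I understand correctly, the workflow you have in mind is roughly
this (untested; 'pkg' and the paths are just placeholders):

  $ mkdir build && cd build
  $ ../configure -C                  # first run; writes config.cache
  $ cp config.cache /tmp/pkg.cache   # keep the cache around
  $ cd .. && rm -rf build && mkdir build && cd build
  $ CONFIG_SITE=/tmp/pkg.cache ../configure   # checks hit the cache

Timing that last 'configure' run would then show the residual overhead.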

It's a complete distro build, starting from glibc/gcc/binutils.  So
it's different from what you would observe while developing.

Using a config.cache while building the distro would require some work
(in Nixpkgs at least).  More importantly it would be quite fragile IMO,
as we discussed at FOSDEM.

Regarding the 'configure' overhead,
<http://hubble.gforge.inria.fr/parallel-build-details.html> gives an
idea for each package.  Perhaps I could synthesize that somehow.

> I suppose several packages' check bits would benefit from Automake's
> parallel-tests feature.

Surely.

> A few of the packages (using an Autotest test suite: Autoconf, Bison)
> would benefit from you passing TESTSUITEFLAGS=-jN to make.

Oh, I didn't know that.  So 'make -jN' isn't enough for Autotest?
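
I suppose that means something like:

  $ make -j8 check TESTSUITEFLAGS=-j8

where -j8 parallelizes the make recipes, and TESTSUITEFLAGS=-j8
additionally parallelizes the Autotest test suite itself?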

> FWIW, parallelizability of Automake's own 'make check' has been improved
> in the git tree (or so at least I hope).

Yeah, and its 'make check' phase already scales relatively well.

> I am fairly surprised GCC build times scaled so little.  IIRC I've seen
> way higher numbers.  Is your I/O hardware adequate?

I think so.  :-)

> Did you use only -j or also -l for the per-package times?  (I would
> recommend not using -l.)

I actually used '-jX -lX'.  What makes you think -l shouldn't be used?

The main problem I'm interested in is continuous integration on a
cluster.  When building a complete distro on a cluster, there's
parallelism to be exploited at the level of package composition (e.g.,
build GCC and Glibc at the same time, each with N/2 cores), and
parallelism within a build ('make -jX').

Suppose you've scheduled GCC and Glibc on a 4-core machine and want
each of them to use 2 cores without stepping on each other's toes.
I think -l2 may help with this.
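
Concretely, the job scheduler would spawn something like this
(hypothetical build trees):

  $ make -C /build/gcc -j2 -l2 &
  $ make -C /build/glibc -j2 -l2 &

so that each 'make' refrains from starting new jobs while the load
average is above 2.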

WDYT?

Thanks,
Ludo'.
