Re: performance on repack

On Tue, 14 Aug 2007, Jon Smirl wrote:

> On 8/14/07, Nicolas Pitre <nico@xxxxxxx> wrote:
> > On Mon, 13 Aug 2007, Shawn O. Pearce wrote:
> >
> > > I'm not sure it's that complex to run all try_delta calls of the
> > > current window in parallel.  Might be a simple enough change that
> > > it's actually worth the extra complexity, especially with these
> > > multi-core systems being so readily available.  Repacking is the
> > > most CPU intensive operation Git performs, and the one that is also
> > > the easiest to make parallel.
> > >
> > > Maybe someone else will beat me to it, but if not I might give such
> > > a patch a shot in a few weeks.
> >
> > Well, here's my quick attempt at it.  Unfortunately, performance isn't
> > as good as I'd expected, especially with relatively small blobs like
> > those found in the linux kernel repo.  It looks like the overhead of
> > thread creation/joining might be significant compared to the actual
> > delta computation.  I have a P4 with HT, which might behave differently
> > from a real SMP machine, or whatever, but the CPU usage never exceeded
> > 110% according to top (sometimes it even dropped below 95%). Actually, a
> > git-repack gets much slower due to 2m27s of system time compared to
> > 0m03s without threads.  And this is with NPTL.
> 
> Thread creation/destruction overhead is way too high to be creating
> these threads for every delta.
> 
> Another strategy is to create four worker threads once when the
> process is loaded. Then use synchronization primitives to feed the
> threads lumps of work. The threads persist for the life of the
> process.

Still, those synchronization primitives would have to be activated for 
every delta, which might also add some overhead.
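
To be concrete, a persistent pool along those lines might look like 
this (a sketch only, assuming pthreads; work_item, worker() and 
process_delta() are made-up names for the example, not anything in the 
current code):

#include <pthread.h>
#include <stddef.h>

struct work_item {
	struct work_item *next;
	/* ... the objects for one delta attempt would go here ... */
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;
static struct work_item *queue_head;

static void process_delta(struct work_item *item)
{
	/* stand-in: one delta attempt (a try_delta call) goes here */
}

static void *worker(void *arg)
{
	for (;;) {
		struct work_item *item;

		/* every single delta pays for this round trip */
		pthread_mutex_lock(&queue_lock);
		while (!queue_head)
			pthread_cond_wait(&queue_cond, &queue_lock);
		item = queue_head;
		queue_head = item->next;
		pthread_mutex_unlock(&queue_lock);

		process_delta(item);
	}
	return NULL;
}

Even with the threads created only once, every queued delta still goes 
through the mutex and condition variable above.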

But there is another issue to consider: delta searching is limited by 
previous results for the same target.  If the first attempt at a delta 
produces a 10x reduction, then the next delta computation has to produce 
less than 1/10 of the original object size or it is aborted early, and 
so on for subsequent attempts.  When delta computations are performed in 
parallel for the same target, this early abort cannot occur, since no 
result is available yet to further limit delta processing.
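
To illustrate with a simplified sketch: create_delta() below is a 
stand-in modeled on the real routine, which gives up and returns NULL 
as soon as the result would exceed max_size, and find_best_delta() is 
made up for the example:

#include <stdlib.h>

/* Stand-in: the real routine returns NULL once the delta would
 * grow past max_size; that is the early abort. */
static void *create_delta(void *src, const void *trg,
			  unsigned long trg_size,
			  unsigned long *delta_size,
			  unsigned long max_size)
{
	/* ... actual delta computation elided ... */
	return NULL;
}

static void *find_best_delta(void **src, int window,
			     const void *trg, unsigned long trg_size)
{
	unsigned long max_size = trg_size / 2;	/* initial bound */
	void *best = NULL;
	int i;

	for (i = 0; i < window; i++) {
		unsigned long delta_size;
		void *delta = create_delta(src[i], trg, trg_size,
					   &delta_size, max_size);
		if (!delta)
			continue;	/* couldn't beat the current bound */
		free(best);
		best = delta;
		max_size = delta_size - 1;	/* next one must do better */
	}
	return best;
}

Run those window attempts in parallel instead and each of them can only 
abort against the loose initial bound, so most of the cheap early 
aborts are lost.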

Segmenting the list of objects to deltify into sub-lists for individual 
threads solves both issues.
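
Something along these lines, where find_deltas() stands for the 
existing sequential search and object_entry for whatever the list 
actually holds:

#include <pthread.h>

struct object_entry {
	/* stand-in for the real per-object data */
	unsigned long size;
};

/* Stand-in for the existing sequential windowed search. */
static void find_deltas(struct object_entry *list, unsigned int count)
{
	/* ... the usual window walk with early aborts ... */
}

#define NR_THREADS 4

struct chunk {
	struct object_entry *list;
	unsigned int count;
};

static void *deltify_chunk(void *arg)
{
	struct chunk *c = arg;
	find_deltas(c->list, c->count);	/* plain sequential search */
	return NULL;
}

static void deltify_all(struct object_entry *list, unsigned int nr)
{
	pthread_t thread[NR_THREADS];
	struct chunk chunk[NR_THREADS];
	unsigned int i, per = nr / NR_THREADS;

	for (i = 0; i < NR_THREADS; i++) {
		chunk[i].list = list + i * per;
		chunk[i].count = (i == NR_THREADS - 1) ? nr - i * per : per;
		pthread_create(&thread[i], NULL, deltify_chunk, &chunk[i]);
	}
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(thread[i], NULL);
}

Threads are created only a handful of times for the whole pack, there 
is no per-delta synchronization, and each thread keeps its own previous 
results so the early aborts still work.  The only price is that deltas 
are never found across chunk boundaries.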


Nicolas
