Re: Initial push of a fully packed repo - why repack?

On Tue, 17 Apr 2007, Martin Langhoff wrote:

> On 4/17/07, Nicolas Pitre <nico@xxxxxxx> wrote:
> > On Tue, 17 Apr 2007, Martin Langhoff wrote:
> > > Does it make sense to detect and optimise for this case?
> > Maybe...  Although the second repack during the push should be much much
> > faster than the first one.
> 
> It is - but it still burns through perhaps 1 minute of CPU and IO
> rewriting the exact same pack as you can see:

Sure.  The IO you can't save: you have to copy the pack anyway, and 
with all objects being "reused" the pack-objects code is basically not 
doing much more than a straight cp would do.

What is costly is figuring out whether the single pack you have actually 
contains all the objects you wish to push, and _only_ the objects you 
wish to push.  That is the real cost.  By the time all those objects are 
listed and accounted for, the repacking is basically copying the data 
over with almost no CPU usage.  In other words, the cost of determining 
whether it is OK to simply send the pack you already have and the cost 
of actually sending it would be essentially the same.
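
(As a rough sketch of that work in plumbing terms -- not the exact 
invocation the push machinery uses, and with purely illustrative ref 
names and output path:

    # The costly part is enumerating every object reachable from the
    # branch being pushed but not from what the remote already has;
    # writing the pack afterwards mostly reuses already-packed data.
    git rev-list --objects master --not origin/master |
        git pack-objects --stdout > objects-to-push.pack

Nearly all of the time goes into the rev-list side of that pipe.)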


Nicolas
