Re: git clone sending unneeded objects

On Sun, 27 Sep 2009, Jason Merrill wrote:

> On 09/26/2009 10:04 PM, Shawn O. Pearce wrote:
> > Actually, if those refs have not changed, quickfetch should kick in
> > and realize that all 410610 objects are reachable locally without
> > errors, permitting the client to avoid the object transfer.
> > 
> > However, if *ANY* of those refs were to change to something you
> > don't actually have, quickfetch would fail, and we would need to
> > fetch all 410610 objects.
> 
> Right.  That seems unfortunate to me; couldn't fetch do a bit more checking
> before it decides to download the whole world again?

The quickfetch test could be turned into a filter, so that refs whose 
objects are already available locally are simply skipped on a per-ref 
basis.  But that would be a rather expensive test which couldn't keep 
its "quick" qualifier anymore, and all for a case that wouldn't 
normally happen if git didn't have the clone bug I've already 
explained.
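For the curious, the core of the quickfetch test can be sketched as a 
single rev-list invocation: feed it the commit IDs the remote 
advertises, negate everything reachable from local refs, and see 
whether it exits cleanly.  This is only an illustrative sketch, not 
git's actual code path; the `have_everything` helper and the "origin" 
remote in the usage comment are hypothetical:

```shell
# Sketch: succeed (exit 0) iff every advertised object is already
# reachable from some local ref, so no transfer would be needed.
have_everything () {
    # $@: commit IDs the remote advertises
    printf '%s\n' "$@" |
    git rev-list --objects --stdin --not --all --quiet 2>/dev/null
}

# Hypothetical usage against a remote named "origin":
# if have_everything $(git ls-remote --heads origin | cut -f1); then
#     echo "quickfetch would kick in: object transfer skipped"
# fi
```

The per-ref filter discussed above would amount to running such a test 
once per advertised ref instead of once for the whole set, which is 
where the extra expense comes from.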


Nicolas
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
