Re: Efficiency of initial clone from server

On Mon, 12 Feb 2007, Jon Smirl wrote:

> I didn't want to cache the packfile, instead I wanted to repack the
> repository and then copy the resulting pack file down the wire. A
> clone would just be a trigger to make sure everything in the repo was
> packed (maybe into multiple packs) before starting to send anything.
> Doing it this way means that everyone benefits from the packing.

Repacking on clone is not the solution at all.

This problem is going to be largely resolved when GIT 1.5.0 gets 
installed on kernel.org.  With the latest GIT, pushes are kept as packs 
on the remote end (when they're big enough, which means more than 100 
objects by default).  Repacking multiple packs into one is then almost 
free, since most of the data is simply copied from one pack to another 
and can later be sent over the wire as a single pack.
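As a sketch of the server-side setup described above (the repository path is temporary and hypothetical; `receive.unpackLimit` is the knob behind the 100-object default, and the commands assume a reasonably current git):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)/project.git

# A bare repository standing in for the server side on kernel.org.
git init --bare -q "$repo"

# Pushes with more than this many objects are stored as a single pack
# instead of being exploded into loose objects.  100 is the default in
# git 1.5.0 and later; it is set explicitly here only to make the knob
# visible.
git -C "$repo" config receive.unpackLimit 100

# Consolidate all existing packs into one.  This is cheap because the
# already-compressed object data is mostly byte-copied between packs.
git -C "$repo" repack -a -d -q
```

A subsequent clone can then stream that one pack more or less verbatim, which is why repacking on every clone is unnecessary.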

As for the cache problem on kernel.org, that would be largely resolved 
if all kernel-related projects were repacked with reference to Linus' 
repository, to avoid copying the same set of data all over the place.
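One way to express "with reference to Linus' repository" is `git clone --reference`, which records the other repository in `objects/info/alternates` so shared objects are borrowed rather than duplicated. A minimal sketch, with hypothetical local paths standing in for the repositories on kernel.org:

```shell
#!/bin/sh
set -e
base=$(mktemp -d)

# Stand-in for Linus' repository, hosted on the same machine.
git init -q "$base/linus"
cd "$base/linus"
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "shared history"

# A kernel-related fork cloned with --reference borrows objects from
# the local copy of Linus' tree (via objects/info/alternates) instead
# of storing its own copy of the shared history.
git clone -q --reference "$base/linus" "$base/linus" "$base/fork"
```

For a fork that already exists, the equivalent would be to add Linus' objects directory to its `objects/info/alternates` and run `git repack -a -d -l`, where `-l` keeps borrowed objects out of the fork's own packs.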


Nicolas
