Re: New Feature wanted: Is it possible to let git clone continue last break point?


On Fri, Nov 04, 2011 at 07:22:20AM -0700, Shawn Pearce wrote:
> On Fri, Nov 4, 2011 at 02:35, Johannes Sixt <j.sixt@xxxxxxxxxxxxx> wrote:
> > On 11/4/2011 9:56, Clemens Buchacher wrote:
> >> Cache ... not the pack but the information
> >>    to re-create it...
> >
> > It has been discussed. It doesn't work. Because with threaded pack
> > generation, the resulting pack is not deterministic.

So let the client disable it if they'd rather have a resumable
fetch than a fast one.

Sorry if I'm being obstinate here, but I don't understand the
problem, and I can't find an explanation in the related discussions.

> The information to create a pack for a repository with 2M objects
> (e.g. the Linux kernel tree) is *at least* 152M of data. This is
> just a first-order approximation of what it takes to write out the
> 2M SHA-1s, along with, say, a 4-byte length so that, given an offset
> provided by the client, you can find roughly where to resume in the
> object stream. This is like 25% of the pack size itself. Ouch.

Sorry, I should not have said HAVEs. All we need are the common
commits and the SHA-1s of the WANTed branch heads at the time of
the initial fetch. That shouldn't be more than ten or so in typical
cases.
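For concreteness, here is a sketch of how small that client-side resume
state could be. All names here are hypothetical illustration, not an
actual git interface:

```python
import json

def resume_state(wants, common, bytes_received):
    # Hypothetical: bundle the state a client would need to resume a
    # broken fetch -- the WANTed branch heads and common commits from
    # the initial negotiation, plus how many bytes already arrived.
    return json.dumps({
        "wants": sorted(wants),          # SHA-1s of requested heads
        "common": sorted(common),        # SHA-1s both sides have
        "bytes_received": bytes_received # offset into the pack stream
    })

def parse_resume_state(blob):
    # Read the state back when retrying the fetch.
    return json.loads(blob)
```

With ten or so 40-character SHA-1s on each side, this is a few hundred
bytes, nowhere near the 152M estimate for a full per-object index.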

> This data is still insufficient to resume from. A correct solution
> would allow you to resume in the middle of an object, which means we
> also need to store some sort of indicator of which representation was
> chosen from an existing pack file for object reuse. Which adds more
> data to the stream. And then there is the not so simple problem of how
> to resume in the middle of an object that was being recompressed on
> the fly, such as a large loose object.

How often does the "representation chosen from an existing pack
file for object reuse" actually change? Long-term determinism is a
problem, yes, but I see no reason why it should not work in this
short-term case. As long as the pack is created by one particular
git and libz version, and within one particular consecutive run of
fetches, we do not need to store anything about the pack: the
client downloads n MB of data until the connection drops, and to
resume it simply tells the server that it already has n MB.

No?
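The argument above can be modelled in miniature with zlib. This is
only an illustration of the determinism assumption, not git's actual
pack code:

```python
import zlib

def make_stream(data, level=6):
    # Stand-in for pack generation: the output is reproducible for a
    # fixed zlib version and a fixed compression level.
    return zlib.compress(data, level)

def resume(data, offset, level=6):
    # Server-side resume: regenerate the same stream and skip the
    # first `offset` bytes the client says it already has.
    return make_stream(data, level)[offset:]

data = b"the quick brown fox jumps over the lazy dog" * 100
full = make_stream(data)
got = full[:50]                 # client received 50 bytes, then dropped
rest = resume(data, len(got))
assert got + rest == full       # offsets line up: the stream is reproducible

# The caveat: change any input to the generator (here, the compression
# level) and the bytes differ, so a saved offset becomes meaningless.
assert make_stream(data, 1) != make_stream(data, 9)
```

This is exactly the short-term/long-term split in the mail: within one
run against one server build the byte offset is enough, but across
different git or libz versions (or a multithreaded, nondeterministic
packer) the regenerated stream need not match.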

Clemens
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

