Re: Errors cloning large repo

Shawn O. Pearce wrote:
> Jakub Narebski <jnareb@xxxxxxxxx> wrote:
>> Shawn O. Pearce wrote:
>> 
>>> One thing that you could do is segment the repository into multiple
>>> packfiles yourself, and then clone using rsync or http, rather than
>>> using the native Git protocol.
>> 
>> By the way, it would be nice to have talked about fetch / clone
>> support for sending (and creating) _multiple_ pack files. Beside
>> the situation where we must use more than one packfile because
>> of size limits, it would also help clone as it could send existing
>> packs and pack only loose objects (trading perhaps some bandwidth
>> with CPU load on the server; think kernel.org).
> 
> I've thought about adding that type of protocol extension on
> more than one occasion, but have now convinced myself that it is
> completely unnecessary.  Well at least until a project has more
> than 2^32-1 objects anyway.
> 
> The reason is we can send any size packfile over the network; there
> is no index sent so there is no limit on how much data we transfer.
> We could easily just dump all existing packfiles as-is (just clip
> the header/footers and generate our own for the entire stream)
> and then send the loose objects on the end.

But what would happen if a server that supports concatenated packfiles
sent such a stream to an old client? I think some kind of protocol
extension, or at least a new request or capability, would be needed for that.

By the way, wouldn't it be better to pack the loose objects into a
separate pack (and perhaps save it, if some threshold is crossed and
we have write access to the repository)?

> The client could easily segment that into multiple packfiles
> locally using two rules:
> 
>   - if the last object was not an OBJ_COMMIT and this object is
>   an OBJ_COMMIT, start a new packfile with this object.
> 
>   - if adding this object to the current packfile exceeds my local
>   filesize threshold, start a new packfile.
> 
> The first rule works because we sort objects by type, and commits
> appear at the front of a packfile.  So if you see a non-commit
> followed by a commit, that's the packfile boundary that the
> server had.
> 
> The second rule is just common sense.  But I'm not sure the first
> rule is even worthwhile; the server's packfile boundaries are of no
> real interest to the client.

Without the first rule, wouldn't the client end up with a strange packfile?
Or would it have to rewrite the pack?
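
To make the two rules concrete, here is a rough sketch of the boundary
logic; the (type, raw entry bytes) iterator is hypothetical (a real
client would have to walk the incoming pack stream itself, working out
where each entry ends), and the sketch does not deal with deltas whose
base ends up in an earlier segment.

  OBJ_COMMIT = 1    # object type code used in the pack format

  def segment(objects, size_limit):
      # objects: iterator of (type_code, raw_entry_bytes) pairs (hypothetical).
      # Yields lists of entries; each list would become one local packfile,
      # with the caller adding a header and trailing SHA-1 to each.
      current, current_size, last_type = [], 0, None
      for obj_type, raw in objects:
          boundary = (
              # rule 1: a commit right after a non-commit is where the
              # server's own packfile boundary was
              (last_type is not None
               and last_type != OBJ_COMMIT
               and obj_type == OBJ_COMMIT)
              # rule 2: adding this entry would exceed the local size limit
              or (current and current_size + len(raw) > size_limit)
          )
          if boundary:
              yield current
              current, current_size = [], 0
          current.append((obj_type, raw))
          current_size += len(raw)
          last_type = obj_type
      if current:
          yield current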

-- 
Jakub Narebski
Poland
