On Tue, Feb 1, 2011 at 11:27 PM, Shawn Pearce <spearce@xxxxxxxxxxx> wrote:
> On Tue, Feb 1, 2011 at 05:51, Jakub Narebski <jnareb@xxxxxxxxx> wrote:
>>
>>> > resumable clone/fetch (and other remote operations)
>>>
>>> Jakub Narebski seems to be interested in this and Nicolas Pitre has
>>> given some good advice about it. You can get something usable today
>>> by putting up a git bundle for download over HTTP or rsync, so it is
>>> possible that this just involves some UI (porcelain) and documentation
>>> work to become standard practice.
>>
>> I wouldn't say that: it was Nicolas Pitre (IIRC) who was doing the work;
>> I was only an interested party posting comments, not code.
>>
>> Again, this feature is not very easy to implement, and would require
>> knowledge of git internals, including the "smart" git transport (the
>> "Pro Git" book can help there).
>
> I think Nico and I have mostly solved this with the pack caching idea.
> If we cache the pack file, we can resume anywhere in about 97% of the
> transfer. The first 3% cannot be resumed easily; it's back to the old
> "git cannot be resumed" issue. Fixing that last 3% is incredibly
> difficult... but resuming within the remaining 97% is a pretty simple
> extension of the protocol. The hard part is the client-side
> infrastructure to remember where we left off and restart.

I thought the cached pack contained everything, and that for an initial
clone we simply send that pack. What is this 3%? The commit list? The
initial commit?

Narrow/subtree clone is still just an idea, but could pack cache support
be extended to make an initial narrow clone resumable too?
-- 
Duy