On Mon, 12 March 2007, Shawn O. Pearce wrote:
> Jakub Narebski <jnareb@xxxxxxxxx> wrote:
>> But what would happen if a server supporting concatenated packfiles
>> sends such a stream to an old client? So I think some kind of protocol
>> extension, or at least a new request / new feature, is needed for that.
>
> No, a protocol extension is not required.  The packfile format
> is: 12 byte header, objects, 20 byte SHA-1 footer.  When sending
> concatenated packfiles to a client the server just needs to:
>
>  - figure out how many objects total will be sent;
>  - send its own (new) header with that count;
>  - initialize a SHA-1 context and update it with the header;
>  - for each packfile to be sent:
>    - strip the first 12 bytes of the packfile;
>    - send the remaining bytes, except the last 20;
>    - update the SHA-1 context with the packfile data;
>  - send its own footer with the SHA-1 context.
>
> Very simple.  Even the oldest Git clients (pre multi-ack extension)
> would understand that.  That's what's great about the way the
> packfile protocol and disk format are organized. ;-)

It would be a very nice thing to have, if it is backwards compatible.
It would ease the load on the server during clone, even when the packs
are divided into one large, tightly packed archive pack and perhaps a
few more recent packs, so that dumb transports do not need to download
everything on [incremental] fetch.

As for fetch... perhaps there should be some configuration variable
which would adjust the balance between server load and bandwidth used...

And automatically splitting a large pack on the client side would help
if, for example, we have a huge repository (non-compressible binaries)
and the client's filesystem has a smaller limit on maximum file size
than the server's.

-- 
Jakub Narebski
Poland
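For concreteness, here is a minimal sketch of the serving side Shawn
describes above; it is not git code, and `concat_packs` and its
arguments are hypothetical names. It totals the object counts, emits
one new header, streams each pack body minus its per-pack header and
SHA-1 trailer, and finishes with a SHA-1 over everything sent:

    import hashlib
    import struct

    def concat_packs(pack_paths, out):
        # Sketch only: stream several on-disk packfiles to `out` as one
        # valid pack. A pack header is 4-byte magic "PACK", 4-byte
        # version, 4-byte object count (all big-endian); the trailer is
        # a 20-byte SHA-1 of everything that precedes it.

        # Figure out how many objects total will be sent.
        total = 0
        for path in pack_paths:
            with open(path, 'rb') as f:
                magic, version, count = struct.unpack('>4sLL', f.read(12))
                assert magic == b'PACK' and version == 2
                total += count

        # Send our own (new) header with the combined count, and start
        # the SHA-1 context with it.
        sha = hashlib.sha1()
        header = struct.pack('>4sLL', b'PACK', 2, total)
        sha.update(header)
        out.write(header)

        # For each pack: strip the first 12 bytes, send the remaining
        # bytes except the last 20, and update the SHA-1 context.
        for path in pack_paths:
            with open(path, 'rb') as f:
                data = f.read()
            body = data[12:-20]
            sha.update(body)
            out.write(body)

        # Send our own footer from the SHA-1 context.
        out.write(sha.digest())

Note that the object data is copied verbatim, which should be safe for
delta objects as well: REF_DELTA bases are named by SHA-1, and OFS_DELTA
offsets are relative, so a delta and its base shift by the same amount
when the pack bodies are concatenated.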