Sam Vilain <sam@xxxxxxxxxx> writes:

> If the uncompressed objects are clustered in the pack, then they might
> stream compress a lot better, should they be transmitted over a http
> transport with gzip encoding.

That would only have been a sensible optimization in the older native
pack protocol, where we always exploded the transferred packfile.
These days, however, we tend to keep the packfile and re-index it at
the receiving end (the http transport never exploded the packfile, and
it still doesn't).

When used that way, choosing an object layout for the packfile that
ignores recency order and clusters objects by their delta chain, which
you are advocating to reduce the transfer overhead, is a bad tradeoff.
Your packs will be kept in the form you chose for transport, which is a
layout that hurts runtime performance.  And you keep using those
suboptimal packs any number of times, getting hurt every time.

> @@ -433,7 +434,7 @@ static unsigned long write_object(struct sha1file *f,
>  	}
>  	/* compress the data to store and put compressed length in datalen */
>  	memset(&stream, 0, sizeof(stream));
> -	deflateInit(&stream, pack_compression_level);
> +	deflateInit(&stream, size >= compression_min_size ? pack_compression_level : 0);
>  	maxsize = deflateBound(&stream, size);
>  	out = xmalloc(maxsize);
>  	/* Compress it */

I very much like the simplicity of the patch.  If such a simple
approach can give us a clear performance gain, I am all for it.
Benchmarks on different repositories need to back that up, though.

-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
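[Editor's aside: the effect of the quoted patch — passing zlib level 0 so
that small objects are stored rather than deflated — can be sketched with
Python's stdlib `zlib` binding, which wraps the same `deflate` machinery
the C code calls. The names `COMPRESSION_MIN_SIZE` and `deflate_object`,
and the threshold value, are illustrative stand-ins, not git's actual
configuration.]

```python
import zlib

# Hypothetical threshold, playing the role of compression_min_size
# in the patch; the real value would come from configuration.
COMPRESSION_MIN_SIZE = 64
PACK_COMPRESSION_LEVEL = zlib.Z_DEFAULT_COMPRESSION

def deflate_object(data: bytes) -> bytes:
    """Mirror the patched write_object() choice: objects below the
    threshold get level 0 (zlib 'stored' blocks, no compression work),
    larger objects get the configured pack compression level."""
    level = PACK_COMPRESSION_LEVEL if len(data) >= COMPRESSION_MIN_SIZE else 0
    return zlib.compress(data, level)

small = b"tiny blob"          # below the threshold: stored, not deflated
big = b"a" * 4096             # above the threshold: actually compressed

stored = deflate_object(small)
packed = deflate_object(big)

# Level 0 adds a small fixed framing overhead instead of shrinking the
# data, which is the cost the patch accepts to skip deflate entirely.
print(len(small), len(stored))   # stored output is slightly larger
print(len(big), len(packed))     # compressed output is much smaller
```

Note the tradeoff the benchmarks would need to settle: level 0 saves CPU
on every small object at the price of a few bytes of zlib framing per
object, and both outputs remain valid zlib streams that inflate back to
the original data.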