Re: Huge win, compressing a window of delta runs as a unit

On 8/17/06, Johannes Schindelin <Johannes.Schindelin@xxxxxx> wrote:
At least, the delta-chains should be limited by size (_not_ by number of
deltas: you can have huge deltas, and if you have to unpack 5 huge deltas
before getting to the huge delta you really need, it takes really long).

This is not an obvious conclusion. Reducing our pack from 845MB to
292MB eliminates a huge amount of IO. All of the work I am doing with
this pack has been totally IO bound. The CPU time to get from one
delta to another is minor compared to the extra IO you take on by
breaking the chains up. Each time you break a chain you have to store
a full copy of the file, and the IO increase from doing that outweighs
the CPU time reduction.
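
To put rough numbers on that, here is a back-of-envelope sketch in
Python; the revision count, file size, and delta size are made-up
assumptions for illustration, not measurements from this pack.

# Back-of-envelope sketch of the IO/CPU tradeoff. The numbers below are
# assumptions for illustration only, not measurements from the 292MB pack.

def pack_bytes(revisions, full_size, delta_size, max_chain):
    """Approximate on-disk size of one file's history when delta chains
    are cut after max_chain deltas (each cut stores a full copy)."""
    full_copies = -(-revisions // (max_chain + 1))  # ceiling division
    deltas = revisions - full_copies
    return full_copies * full_size + deltas * delta_size

# Assumed: 1000 revisions of a 1MB file, each delta around 10KB.
revs, full, delta = 1000, 1_000_000, 10_000

for chain in (10, 100, 1000):
    mb = pack_bytes(revs, full, delta, chain) / 1_000_000
    print(f"max chain {chain:4d}: roughly {mb:3.0f} MB to read")

# Shorter chains mean more full copies, so more bytes to pull off disk;
# the deltas you no longer have to apply are cheap by comparison.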

I would use a two-pack model: one pack holds the 292MB archive, and a
second pack is made from recent changes. They can both use the same
format, since the chains in the second pack won't be very long. We
might want to put a flag into an archive pack that tells git-repack
not to touch it by default.
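
As a minimal sketch of how a repack tool could honor that kind of
flag, assuming a per-pack marker file as the mechanism; the file names
and the marker convention here are illustrative assumptions, not a
description of an existing git interface.

import os

# Hypothetical two-pack layout (names are illustrative):
#   pack-archive.pack  - the 292MB archive pack, left alone by default
#   pack-recent.pack   - recent changes, repacked as usual
# The ".keep" marker below is an assumed convention for the proposed
# "don't touch this pack" flag, not a claim about git's actual behavior.

PACK_DIR = ".git/objects/pack"

def repack_candidates(pack_dir=PACK_DIR):
    """Return the packs a default repack run may rewrite, skipping any
    pack that has a marker file sitting next to it."""
    candidates = []
    for name in os.listdir(pack_dir):
        if not name.endswith(".pack"):
            continue
        marker = os.path.join(pack_dir, name[: -len(".pack")] + ".keep")
        if os.path.exists(marker):
            continue  # marked archive pack: leave it alone by default
        candidates.append(os.path.join(pack_dir, name))
    return candidates

Dropping a marker file next to the archive pack would then be all it
takes to exclude it from routine repacks.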

As for public servers, there is an immediate win in shifting to the
new pack format: three-hour downloads versus one hour, plus the
bandwidth bills. Are the tools smart enough to say "this is a newer
pack format, please upgrade"? It takes far less than two hours to
upgrade your git install.

--
Jon Smirl
jonsmirl@xxxxxxxxx
