How about making the length of delta chains an exponential function of the number of revs? In Mozilla, configure.in has 1,700 revs and is a 250K file. If you store a full copy every 10 revs, that is 1,700 / 10 = 170 full copies of a 250K file, about 43MB (pre-zip) of data that almost no one is ever going to look at. The chain lengths should reflect the relative probability that someone is going to ask to see a given rev, and that probability is not at all uniform. (A rough sketch of this spacing idea is below.)

Personally, I am still in favor of a two-pack system. One archival pack stores everything in a single chain; size, not speed, is its most important attribute. It is marked read-only and only functions as an archive; git-repack never touches it. It might even use a more compact compression algorithm. The second pack is for storing the more recent revisions. The archival pack would be constructed such that none of the files needed for the head revisions of any branch are in it; they would all be in the second pack. Over time the second pack may grow large, and another archival pack can be created. The first one would still be maintained in its read-only form. git could be optimized to always search for objects in non-archival packs before even opening the index of an archival one (see the lookup sketch below).

This may be a path to partial repositories. Instead of downloading the real archival pack, I could download just an index for it. The index entries would be marked to indicate that these objects are valid but not present (also sketched below).
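To make the exponential-spacing idea concrete, here is a minimal Python sketch. The starting gap and growth factor are made-up knobs, not anything git actually implements; the point is only how quickly the number of full copies falls off compared to uniform spacing:

# Place full copies so the gap between them grows exponentially with the
# age of the revision: recent (frequently requested) revs sit on short
# delta chains, ancient ones on very long chains.  Illustrative only.

def full_copy_positions(num_revs, base_gap=10, growth=2.0):
    """Return rev indices (0 = oldest) that get a full copy."""
    positions = []
    pos = num_revs - 1          # start at the newest rev
    gap = base_gap
    while pos >= 0:
        positions.append(pos)
        pos -= int(gap)         # step back over one delta chain
        gap *= growth           # chains get longer as revs get older
    return sorted(positions)

if __name__ == "__main__":
    revs = 1700                 # the configure.in example
    # Uniform spacing: 1700 / 10 = 170 full copies * 250K ~= 43MB pre-zip.
    print("uniform every 10 revs:", revs // 10, "full copies")
    # Exponential spacing needs only a handful of full copies.
    print("exponential spacing:  ", len(full_copy_positions(revs)), "full copies")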
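The search-order optimization for the two-pack scheme could look roughly like this. Pack and lookup() are invented stand-ins, not git's real pack data structures; the sketch only shows the ordering, where an archival pack is not even consulted unless every non-archival pack misses:

from dataclasses import dataclass, field

@dataclass
class Pack:
    name: str
    archival: bool                               # read-only archive pack?
    objects: dict = field(default_factory=dict)  # sha1 -> object data

def lookup(packs, sha1):
    # Pass 1: only the non-archival packs, which hold everything recent.
    for pack in packs:
        if not pack.archival and sha1 in pack.objects:
            return pack.objects[sha1]
    # Pass 2: only on a miss do we touch the archival packs at all.
    for pack in packs:
        if pack.archival and sha1 in pack.objects:
            return pack.objects[sha1]
    raise KeyError(sha1)

if __name__ == "__main__":
    recent = Pack("recent", archival=False, objects={"abc123": b"new"})
    archive = Pack("archive.0", archival=True, objects={"def456": b"old"})
    print(lookup([recent, archive], "abc123"))  # served without the archive
    print(lookup([recent, archive], "def456"))  # falls back to the archive

Since the archival pack holds nothing needed by any branch head, the common case never pays for its (potentially huge) index.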
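And the valid-but-not-present marking for a partial repository might amount to no more than this; IndexEntry and fetch_from_remote are hypothetical names used for illustration:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class IndexEntry:
    sha1: str
    data: Optional[bytes]   # None => object is valid but not present

def read_object(entry: IndexEntry,
                fetch_from_remote: Callable[[str], bytes]) -> bytes:
    if entry.data is not None:
        return entry.data
    # The sha1 is in the index, so we know the object exists and can be
    # verified; its data lives only in the remote archival pack.
    return fetch_from_remote(entry.sha1)

if __name__ == "__main__":
    entry = IndexEntry(sha1="deadbeef" * 5, data=None)
    print(read_object(entry, lambda sha1: b"fetched on demand"))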