On 4/5/07, Nicolas Pitre <nico@xxxxxxx> wrote:
> I still consider this feature to make no sense.
Well, suppose I'm packing my 55GB of data into 2GB packfiles. There seemed to be some agreement that limiting packfile size was useful; 700MB is another example. Now, suppose there is an object whose packing would result in a packfile larger than the limit. What should we do?

(1) Refuse to run. This means I can't pack my repository.

(2) Pack the object anyway and let the packfile size exceed my specification. Ignoring a clear preference from the user doesn't seem good.

(3) Pack the object by itself in its own pack. This is better than the previous option, since I haven't wrapped any small objects into a pack whose size I don't want to deal with. The resulting pack is too big, but the original object was also too big, so at least I haven't made the problem worse. But why bother wrapping the object at all? I've just made the list of packs to search longer for every access, instead of leaving the big object in the objects/xx directories, which are already used to handle exceptional (usually meaning more recent) objects. In my 55GB example, I have 9 jumbo objects, and this solution would more than double the number of packs to step through. Having them randomly placed in the 256 subdirectories seems better.

(4) Just leave the jumbo object by itself, unpacked.

What do you think?

Thanks,
--
Dana L. How  danahow@xxxxxxxxx  +1 650 804 5991 cell
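[For readers following this thread later: the pack-size cap under discussion is exposed in released git as the `--max-pack-size` option to `git pack-objects` / `git repack` and the `pack.packSizeLimit` config variable. A sketch of how the 2GB scenario above might be set up; the exact handling of objects larger than the limit is the open question of this thread, so don't take this as settled behavior:]

```shell
# Cap newly created packfiles at roughly 2 GiB via config...
git config pack.packSizeLimit 2g

# ...then repack everything; output is split across multiple packs
# as each one approaches the limit.
git repack -a -d

# Equivalently, as a one-shot override without touching config:
git repack -a -d --max-pack-size=2g
```

[With 55GB of data this yields on the order of 28 or more packs; whether a single object exceeding 2GB is refused, packed alone, or left loose is exactly what options (1)-(4) above debate.]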