Shawn Pearce <spearce@xxxxxxxxxxx> writes:

> On Sun, May 15, 2011 at 14:37, Johan Herland <johan@xxxxxxxxxxx> wrote:
>> The new --max-object-count option behaves similarly to --max-pack-size,
>> except that the decision to split packs is determined by the number of
>> objects in the pack, and not by the size of the pack.
>
> Like my note about pack size for this case... I think doing this
> during writing is too late. We should be aborting the counting phase
> if the output pack is to stdout and we are going to exceed this limit.

Well, even more important is whether this is useful at all. What is the
user trying to prevent from happening, and is it a useful thing to
prevent?

I am not interested in a literal answer "The user is trying to prevent
a push that pushes too many objects in a single push into a
repository". I am questioning why anybody even cares about the object
count per se.

I think "do not hog too much disk" (i.e. size) is an understandable
wish, and max-pack-size imposed on --stdout would be a good
approximation for that.

I would understand "this project has only these files, and pushing a
tree that has 100x as many leaves may be a mistake" (i.e. the recursive
sum of the number of entries of an individual tree). I would also
sort-of understand "do not push too deep a history at once" (i.e. we do
not welcome pushing a wildly diverged fork that has been allowed to
grow for too long). But I do not think max-object-count is a good
enough approximation for either to be useful.

Without a good answer to the above question, this looks like a
"because we could" feature, not a "because it is useful" one.
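
(For illustration only: if the real goal is to cap what a single push
can bring in, that can already be approximated on the receiving end
without teaching pack-objects anything new. A rough, untested sketch of
a pre-receive hook follows; the limit value is made up and this is not
a proposal, just an indication of where such a policy check could live.)

#!/bin/sh
# Illustrative pre-receive hook: reject pushes that introduce "too
# many" new objects. The limit below is arbitrary.
limit=100000

total=0
while read old new ref
do
	# A deleted ref brings in no new objects; skip it.
	test "$new" = "0000000000000000000000000000000000000000" && continue

	# Count objects reachable from the pushed tip that are not
	# already reachable from the repository's existing refs.
	n=$(git rev-list --objects "$new" --not --all | wc -l)
	total=$(( total + n ))
done

if test "$total" -gt "$limit"
then
	echo >&2 "push rejected: $total new objects exceeds limit of $limit"
	exit 1
fi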