On Fri, Oct 01, 2021 at 10:52:15AM +0800, Jiang Xin wrote:

> > Sure, it is called max_INPUT_object_size and we can say we are not
> > limiting the final disk size, and that might be a workable excuse
> > to check based on the obj->size here, but then its usefulness from
> > the point of view of end users, who decide to set the variable to
> > limit "some" usage, becomes dubious.
>
> Just like what I replied to Ævar, if the max_input_object_size is
> greater than core.bigFileThreshold, is it safe to say the size here
> is almost the actual "file size"?

If we are storing a pack with index-pack, the on-disk size will match
this input size exactly. If we unpack it to loose objects, then big
files don't tend to have deltas or to compress well with zlib, but that
is not always the case. I have definitely seen people try to store
gigantic text files.

If your goal is to introduce a user-facing object-size limit, then I
think the "logical" size of the uncompressed object is the only thing
that makes sense. Everything else is subject to change, and can be
gamed in weird ways.

If your goal is to avoid malicious pushers causing you to allocate too
much memory, then you might want to have some limits on the compressed
sizes you'll deal with, especially for deltas. But I don't think the
checks here do that, because I can send a small delta that reconstructs
a much larger object (which we'd eventually reconstruct in order to
compute its sha1).
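To make the first of those points concrete, a check against the logical
size might look roughly like the sketch below. The helper name and
plumbing are made up for illustration (this is not the actual
receive-pack or index-pack code); max_input_object_size stands in for
the variable discussed above.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* 0 means "no limit"; stands in for the config variable discussed above. */
static uint64_t max_input_object_size;

/*
 * Reject an object whose logical (uncompressed) size exceeds the
 * configured limit.  The function name is hypothetical; how the
 * logical size was obtained is beside the point here.
 */
static int check_object_size(const char *oid_hex, uint64_t logical_size)
{
        if (max_input_object_size &&
            logical_size > max_input_object_size) {
                fprintf(stderr,
                        "object %s is too big (%" PRIu64 " > %" PRIu64 ")\n",
                        oid_hex, logical_size, max_input_object_size);
                return -1;
        }
        return 0;
}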
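And to illustrate the delta point: a delta buffer itself declares the
size of the object it reconstructs, independent of how many bytes the
delta occupies on the wire. Roughly (simplified from the real
pack-delta header parsing, with no error handling):

#include <stddef.h>
#include <stdint.h>

/* Decode one little-endian base-128 varint, as used in pack deltas. */
static uint64_t decode_varint(const unsigned char **bufp,
                              const unsigned char *end)
{
        const unsigned char *buf = *bufp;
        uint64_t val = 0;
        int shift = 0;
        unsigned char c;

        do {
                c = *buf++;
                val |= (uint64_t)(c & 0x7f) << shift;
                shift += 7;
        } while ((c & 0x80) && buf < end);
        *bufp = buf;
        return val;
}

/*
 * A delta starts with two varints: the expected base size and the size
 * of the reconstructed result.  A delta of a few dozen bytes (e.g.
 * "copy the whole base, append a little") can therefore declare a
 * result of many gigabytes, so capping the size of the delta we
 * receive does not cap the memory needed to apply it.
 */
static uint64_t delta_result_size(const unsigned char *delta, size_t len)
{
        const unsigned char *p = delta;
        const unsigned char *end = delta + len;

        decode_varint(&p, end);        /* base object size (ignored) */
        return decode_varint(&p, end); /* reconstructed object size */
}

-Peff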