On Fri, Oct 1, 2021 at 12:50 AM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> Han Xin <chiyutianyi@xxxxxxxxx> writes:
>
> > @@ -519,6 +520,8 @@ static void *unpack_raw_entry(struct object_entry *obj,
> >                 shift += 7;
> >         }
> >         obj->size = size;
> > +       if (max_input_object_size && size > max_input_object_size)
> > +               die(_("object exceeds maximum allowed size "));
> >
> >         switch (obj->type) {
> >         case OBJ_REF_DELTA:
>
> Here obj->size is the inflated payload size of a single entry in the
> packfile. If it happens to be represented as a base object
> (i.e. without delta, just deflated), it would be close to the size
> of the blob in the working tree (but LF->CRLF conversion and the
> like may further inflate it), but if it is a delta object, this size
> is just the size of the delta data we feed patch_delta() with, and
> has no relevance to the actual "file size".
>
> Sure, it is called max_INPUT_object_size and we can say we are not
> limiting the final disk size, and that might be a workable excuse
> to check based on the obj->size here, but then its usefulness from
> the point of view of end users, who decide to set the variable to
> limit "some" usage, becomes dubious.

As I replied to Ævar, if max_input_object_size is greater than
core.bigFileThreshold, is it safe to say that the size here is almost
the actual "file size"?

BTW, Han Xin will continue to resolve the OOM issue found in
"unpack_non_delta_entry()" after our National Day holiday.

--
Jiang Xin
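
For concreteness, one way to address Junio's point would be to apply the
limit only to non-delta entries, where "size" really is the full inflated
payload of the object. The following is a minimal sketch, not the posted
patch: it assumes it sits inside unpack_raw_entry() with "obj", "size",
and "max_input_object_size" in scope as in the hunk quoted above, and it
reuses the function's existing switch on obj->type.

	switch (obj->type) {
	case OBJ_COMMIT:
	case OBJ_TREE:
	case OBJ_BLOB:
	case OBJ_TAG:
		/*
		 * Non-delta entry: "size" is the inflated object payload,
		 * so it approximates the final object size (modulo
		 * conversions such as LF->CRLF on checkout).
		 */
		if (max_input_object_size && size > max_input_object_size)
			die(_("object exceeds maximum allowed size"));
		break;
	case OBJ_REF_DELTA:
	case OBJ_OFS_DELTA:
		/*
		 * Delta entry: "size" is only the size of the delta data
		 * fed to patch_delta(), which says little about the size
		 * of the reconstructed object, so no check is done here.
		 */
		break;
	default:
		break;
	}

A limit that also covers deltas could in principle inspect the result
size encoded in the delta data's header instead of the raw delta size,
but that would have to happen after the delta payload is available, not
at this point in unpack_raw_entry().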