Re: [PATCHv4 09/10] pack-objects: Estimate pack size; abort early if pack size limit is exceeded

On Sun, May 22, 2011 at 17:52, Johan Herland <johan@xxxxxxxxxxx> wrote:
> Currently, when pushing a pack to the server that has specified a pack size
> limit, we don't detect that we exceed that limit until we have already
> generated (and started transmitting) that much pack data.
>
> Ideally, we should be able to predict the approximate pack size _before_ we
> start generating and transmitting the pack data, and abort early if the
> estimated pack size exceeds the pack size limit.
>
> This patch tries to provide such an estimate: It looks at the objects that
> are to be included in the pack, and for already-packed objects, it assumes
> that their compressed in-pack size is a good estimate of how much they will
> contribute to the pack currently being generated. This assumption should be
> valid as long as the objects are reused as-is.

This looks good to me.
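
For anyone following along, the early check described above boils down to
roughly the following. This is a standalone sketch; the struct and field
names are made up for illustration and are not the actual pack-objects code:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for pack-objects' per-object bookkeeping. */
struct obj_entry {
	int reused_from_pack;	/* object will be copied from an existing pack */
	uint64_t in_pack_size;	/* compressed size in its source pack */
};

/*
 * Estimate the outgoing pack size by summing the compressed in-pack
 * sizes of objects that will be reused as-is.  Loose objects are not
 * counted, matching the patch description above.
 */
static uint64_t estimate_pack_size(const struct obj_entry *objs, size_t nr)
{
	uint64_t est = 0;
	for (size_t i = 0; i < nr; i++)
		if (objs[i].reused_from_pack)
			est += objs[i].in_pack_size;
	return est;
}

int main(void)
{
	struct obj_entry objs[] = {
		{ 1, 4096 }, { 1, 123456 }, { 0, 0 /* loose, not counted */ },
	};
	uint64_t limit = 100 * 1024;	/* pack size limit announced by the server */
	uint64_t est = estimate_pack_size(objs, 3);

	if (limit && est > limit) {
		fprintf(stderr, "pack exceeds maximum allowed size "
			"(estimated %llu > %llu)\n",
			(unsigned long long)est, (unsigned long long)limit);
		return 1;	/* abort before generating/transmitting pack data */
	}
	printf("estimated pack size: %llu bytes\n", (unsigned long long)est);
	return 0;
}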

> I'm not really happy with excluding loose objects from the pack size
> estimate. However, the size contributed by loose objects varies wildly
> depending on whether a (good) delta is found. Therefore, any estimate
> done at an early stage is bound to be wildly inaccurate. We could maybe
> use some sort of absolute minimum size per object instead, but I
> thought I should publish this version before spending more time futzing
> with it...
>
> A drawback of not including loose objects in the pack size estimate
> is that pushing loose objects is a very common use case (most people
> push more often than they 'git gc'). However, for the pack sizes that
> servers are most likely to refuse (hundreds of megabytes), most of
> those objects will probably already be packed anyway (e.g. by
> 'git gc --auto'), so I still hope the pack size estimate will be useful
> when it really matters.

That is my impression too. Most servers using this feature will
probably put a limit of at least 10MB. Once you get into the 25-100MB
range, the client has probably already packed the bulk of that
content, especially if we also get Junio's new patch to stream large
blobs directly to packs during 'git add'. So, as you point out, in the
cases where this is most useful (a really huge push), the check is
still likely to trigger correctly.

We can still get a tighter estimate if we want to. I wouldn't mix it
into this patch, but make a new one on top of it. During delta
compression we hold onto the deltas, or at least compute and retain the
size of the chosen delta. We could re-check the pack size after the
Compressing phase by including the delta sizes in the estimate, and if
we are over, abort before writing.
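
As a rough sketch of that tighter estimate (again with made-up field
names, not the real object_entry layout):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-object state after the Compressing phase. */
struct obj_entry {
	int reused_from_pack;	/* copied verbatim from an existing pack */
	uint64_t in_pack_size;	/* compressed size in its source pack */
	uint64_t delta_size;	/* size of the chosen delta, 0 if none */
};

/*
 * Tighter estimate once delta search is done: reused objects count
 * their in-pack size, freshly deltified objects count the size of the
 * delta we just chose.  Non-delta, non-reused objects are still the
 * unknown part (see below).
 */
static uint64_t estimate_after_compressing(const struct obj_entry *objs,
					   size_t nr)
{
	uint64_t est = 0;
	for (size_t i = 0; i < nr; i++) {
		if (objs[i].reused_from_pack)
			est += objs[i].in_pack_size;
		else if (objs[i].delta_size)
			est += objs[i].delta_size;
	}
	return est;	/* caller aborts before the Writing phase if est > limit */
}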

For non-delta, non-reused objects we may be able to guess by just using
the loose object size. The loose object is most likely compressed at the
same compression ratio as the outgoing pack stream will use, so a
deflate(inflate(loose)) cycle is going to be very close in total bytes
used. If we overshoot the limit by more than some fudge factor (say
8K on a 1MB limit, roughly 0.8%), abort before writing.
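
In code, that final go/no-go test could be as small as this sketch. The
8K / ~0.8% numbers are just the example above; the way the 8K floor
interacts with the percentage is an assumption for illustration:

#include <stdint.h>

/*
 * Decide whether to abort before writing: the estimate may include
 * on-disk loose object sizes for non-delta, non-reused objects, so
 * allow a small fudge factor before declaring the limit exceeded.
 */
static int estimate_exceeds_limit(uint64_t estimated, uint64_t limit)
{
	uint64_t fudge;

	if (!limit)
		return 0;		/* no limit configured */
	fudge = limit / 128;		/* ~0.8% of the limit... */
	if (fudge < 8192)
		fudge = 8192;		/* ...but at least 8K */
	return estimated > limit + fudge;
}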

-- 
Shawn.