Re: [PATCH] fast-import: Stream very large blobs directly to pack

"Shawn O. Pearce" <spearce@xxxxxxxxxxx> writes:

> +static void stream_blob(
> +	uintmax_t len,
> +	unsigned char *sha1out,
> +	uintmax_t mark)

A funny way to indent and line wrap...

> +{
> + ...
> +	/* Determine if we should auto-checkpoint. */
> +	if ((pack_size + 60 + len) > max_packsize
> +		|| (pack_size + 60 + len) < pack_size)
> +		cycle_packfile();

What's "60" in this math?

If the data is not compressible, the deflated stream could even grow, and
the end result might be larger than (pack_size + len), busting
max_packsize.  As we are streaming out, we cannot say "oops, let me try
again after truncating and closing the current file and then opening a
new one"; instead we may have to copy the data from the current packfile
to a new one, and truncate the current one.  Is this something worth
worrying about?
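
(If it is, a rough sketch of that "copy then truncate" recovery, written
against plain stdio/POSIX rather than fast-import's own helpers; the
function and its object_start parameter are hypothetical:

    #include <stdio.h>
    #include <unistd.h>   /* ftruncate */

    static int move_overflow_object(FILE *old_pack, FILE *new_pack,
                                    long object_start)
    {
            char buf[8192];
            size_t n;

            /* Re-read the object we just streamed into the old pack... */
            if (fseek(old_pack, object_start, SEEK_SET))
                    return -1;
            while ((n = fread(buf, 1, sizeof(buf), old_pack)) > 0)
                    if (fwrite(buf, 1, n, new_pack) != n)
                            return -1;
            if (ferror(old_pack))
                    return -1;

            /* ...then drop the copied bytes from the old pack. */
            return ftruncate(fileno(old_pack), object_start);
    }

The copy costs an extra read of up to max_packsize bytes for a case that
should be rare, so it may indeed not be worth worrying about.)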
