Re: [PATCH] fast-import: Stream very large blobs directly to pack

"Shawn O. Pearce" <spearce@xxxxxxxxxxx> writes:

> In my v3 patch I thought I replaced this code with:
>
> +               else if (!prefixcmp(a, "--big-file-threshold=")) {
> +                       unsigned long v;
> +                       if (!git_parse_ulong(a + 21, &v))
> +                               usage(fast_import_usage);
> +                       big_file_threshold = v;
>
> So we relied on git_parse_ulong to handle unit suffixes as well.

Yeah, you did; but it didn't carry through the merge across the code
restructure that added the "option_blah" stuff.  Sorry about that.

Looking at the output from

    $ git grep -n -e ' \* 1024 \* 1024' -- '*.c'

I noticed another issue.  Don't we need the same thing for max_packsize?
Or is that _too much_ of a backwards-incompatible change, so that we
should wait until 1.7.1?
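
For illustration, changing max_packsize the same way might look roughly
like the following (an untested sketch in the style of the quoted hunk;
the exact type of max_packsize and the surrounding option-parsing code
in fast-import.c are assumptions here, and "a + 16" just skips the
"--max-pack-size=" prefix):

                else if (!prefixcmp(a, "--max-pack-size=")) {
                        unsigned long v;
                        if (!git_parse_ulong(a + 16, &v))
                                usage(fast_import_usage);
                        /* if the current code multiplies a bare number
                         * by 1024 * 1024, it would now mean bytes, not
                         * MiB, which is the compatibility question */
                        max_packsize = v;
                }

If the existing parsing indeed does the " * 1024 * 1024" the grep above
finds, a bare number would change meaning from MiB to bytes, which is
presumably the part that is backwards incompatible.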
