RE: [PATCH v3] http.postbuffer: allow full range of ssize_t values

> -----Original Message-----
> From: Jeff King [mailto:peff@xxxxxxxx]
> Sent: Saturday, April 1, 2017 2:01 AM
> To: David Turner <David.Turner@xxxxxxxxxxxx>
> Cc: git@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH v3] http.postbuffer: allow full range of ssize_t values
> 
> On Fri, Mar 31, 2017 at 01:26:31PM -0400, David Turner wrote:
> 
> > Unfortunately, in order to push some large repos, the http postbuffer
> > must sometimes exceed two gigabytes.  On a 64-bit system, this is OK:
> > we just malloc a larger buffer.
> 
> I'm still not sure why a 2GB post-buffer is necessary. It sounds like something
> is broken in your setup. Large pushes should be sent chunked.
> 
> I know broken setups are a fact of life, but this feels like a really hacky work-
> around.

I'm not sure what other workaround I should use.  I guess I could do multiple pushes, but only if individual objects are under the size limit, and I'm not sure all of mine are (maybe I'll get lucky, though).  I know that this is a configuration issue with GitLab: https://gitlab.com/gitlab-org/gitlab-ce/issues/30315, but I don't know when that will get fixed.  I could manually copy the repo to the server and do a local push, but I don't know that I have the necessary permissions to do that.  Or I could do this, which would hopefully actually solve the problem.
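For what it's worth, the "multiple pushes" workaround can be scripted by pushing an old commit first and walking forward, so each push transfers a smaller pack. A rough sketch (the function name, remote, and step size are made up for illustration; it still fails if any single object exceeds the server's limit):

```shell
# Hypothetical sketch: push a branch's history in chunks so each
# individual push stays under a server-side request-size limit.
# push_in_chunks <remote> <branch> <step>
push_in_chunks() {
    remote=$1; branch=$2; step=$3
    n=0
    # Walk commits oldest-first; every <step> commits, push up to
    # that commit (each push is a fast-forward of the remote branch).
    for commit in $(git rev-list --reverse "$branch"); do
        n=$((n + 1))
        if [ $((n % step)) -eq 0 ]; then
            git push "$remote" "$commit":refs/heads/"$branch"
        fi
    done
    # Final push for whatever commits remain past the last chunk.
    git push "$remote" "$branch"
}
```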

> > diff --git a/cache.h b/cache.h
> > index fbdf7a815a..5e6747dbb4 100644
> > --- a/cache.h
> > +++ b/cache.h
> > @@ -1900,6 +1900,7 @@ extern int git_parse_maybe_bool(const char *);
> >  extern int git_config_int(const char *, const char *);
> >  extern int64_t git_config_int64(const char *, const char *);
> >  extern unsigned long git_config_ulong(const char *, const char *);
> > +extern ssize_t git_config_ssize_t(const char *, const char *);
> 
> For most of our other "big" values we use git_config_ulong(). E.g.,
> core.bigfilethreshold. I suspect that would be fine for your purposes here,
> though using size_t is more correct (on Windows "unsigned long" is still only
> 32 bits, even on 64-bit systems).
> 
> The ultimate fate of this number, though, is to be handed to:
> 
>   curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, rpc->len);
> 
> where the final argument is interpreted as a long. So I suspect that on 64-bit
> Windows, setting http.postbuffer to "3G" would cause some kind of weird
> error (either a truncated post or some internal curl error due to the negative
> size, depending on how curl handles it).

Ah, so we would need to use CURLOPT_POSTFIELDSIZE_LARGE.  Will re-roll.

> 
> I think a "git_config_long()" would probably do everything correctly.
> 
> > +static int git_parse_ssize_t(const char *value, ssize_t *ret)
> > +{
> > +	ssize_t tmp;
> > +	if (!git_parse_signed(value, &tmp, maximum_signed_value_of_type(ssize_t)))
> > +		return 0;
> > +	*ret = tmp;
> > +	return 1;
> > +}
> 
> I saw the earlier iteration used a size_t, but you switched it after the compiler
> (rightfully) complained about the signedness. But I'm not sure why we would
> want ssize_t here instead of just using git_parse_unsigned().

It was originally signed.  I'm not sure why that was, but I figured it would be simpler to save the extra bit just in case.
