Re: [PATCH v2 3/3] http: automatically retry some requests

On 2020-10-13 at 21:14:53, Jeff King wrote:
> On Tue, Oct 13, 2020 at 01:17:29PM -0600, Sean McAllister wrote:
> >  static int http_request(const char *url,
> >  			void *result, int target,
> >  			const struct http_get_options *options)
> >  {
> 
> It looks like you trigger retries only from this function. But this
> doesn't cover all http requests that Git makes. That might be sufficient
> for your purposes (I think it would catch all of the initial contact),
> but it might not (it probably doesn't cover subsequent POSTs for fetch
> negotiation nor pack push; likewise I'm not sure if it covers much of
> anything after v2 stateless-connect is established).

Yeah, I was about to mention the same thing.  It looks like we cover
only a subset of requests.  Moreover, I think this feature is going to
fail in practice in some cases, and we need to either document that
clearly or abandon this effort.

In remote-curl.c, we have post_rpc, which does a POST request to upload
data for a push.  However, if the data is larger than the buffer, we
stream it using chunked transfer-encoding.  Because we're reading from a
pipe, that data cannot be retried: the pack-objects stream will have
ended.
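
Roughly, the shape of the problem is this (a minimal libcurl sketch,
not the actual post_rpc code; the helper names are made up and error
handling is omitted):

/*
 * Sketch only: a chunked POST whose body is read from a pipe.  Once
 * the read callback has drained the pipe, a retry has nothing left to
 * send.
 */
#include <curl/curl.h>
#include <unistd.h>

static size_t read_from_pipe(char *buf, size_t size, size_t nitems,
			     void *data)
{
	int fd = *(int *)data;
	ssize_t n = read(fd, buf, size * nitems);
	return n < 0 ? CURL_READFUNC_ABORT : (size_t)n;
}

/* Hypothetical helper; fd is the read end of the pack-objects pipe. */
static void stream_push_body(CURL *curl, struct curl_slist *headers,
			     int fd)
{
	headers = curl_slist_append(headers, "Transfer-Encoding: chunked");
	curl_easy_setopt(curl, CURLOPT_POST, 1L);
	curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_from_pipe);
	curl_easy_setopt(curl, CURLOPT_READDATA, &fd);
	curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
	curl_easy_perform(curl);
	/*
	 * A second curl_easy_perform() here would call read_from_pipe
	 * again, but the pipe is already at EOF, so the "retry" would
	 * send an empty body rather than the pack.
	 */
}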

That's why we have code to force Expect: 100-continue for Kerberos
(Negotiate): it can require a 401 response from the server, carrying
the challenge data, before it can send a valid Authorization header,
and without the 100 Continue response, we'd have uploaded all the data
just to get that 401, leading to a failed push.
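
Extending the sketch above, forcing the interim response is just a
matter of adding the header; with it set, curl waits (up to a timeout)
for the "100 Continue" before it ever asks the read callback for body
data, so the 401 challenge comes back before we've spent the pack
stream:

headers = curl_slist_append(headers, "Expect: 100-continue");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);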

The only possible alternative to this is to increase the buffer size
(http.postBuffer), and I definitely don't want to encourage people to
do that.  People already get the mistaken idea that that's a magic
salve for all push problems and end up needlessly allocating gigabytes
of memory every time they push.  Encouraging people to waste memory
because the server might experience a problem puts the costs of
unreliability on the users instead of on the server operators, where
they belong.

So the only sane thing to do here is to make this operation work only
for fetch requests, since they are the only thing that can be safely
retried in the general case without consuming excessive resources.  As
a result, we may want to add appropriate tests verifying that we don't
retry push requests.
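
Something like the following is the kind of guard I have in mind
(purely illustrative: none of these identifiers exist in http.c, and
429/503 are just placeholders for whatever status codes the series
settles on):

/* Illustrative only; not the actual http.c interface. */
static int retry_allowed(int is_fetch_get, long http_status)
{
	/*
	 * Never replay a push: post_rpc may have streamed the pack from
	 * a pipe, and that data is gone.  Fetch GETs are idempotent and
	 * can be reissued safely.
	 */
	if (!is_fetch_get)
		return 0;
	/* Only transient server-side conditions are worth retrying. */
	return http_status == 429 || http_status == 503;
}

On the test side, it should be enough to have the test HTTP server
return one of the retryable status codes for a push and assert that we
fail immediately instead of retrying.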
-- 
brian m. carlson (he/him or they/them)
Houston, Texas, US
