Re: [PATCH v2 3/3] http: automatically retry some requests

On Tue, Oct 13, 2020 at 01:17:29PM -0600, Sean McAllister wrote:

> Some HTTP response codes indicate a server state that can support
> retrying the request rather than immediately erroring out.  The server
> can also provide information about how long to wait before retrying
> via the Retry-After header.  So check the server response and retry
> some reasonable number of times before erroring out to better
> accommodate transient errors.
> 
> Exiting immediately becomes irksome when pulling large multi-repo code
> bases such as Android or Chromium, as often the entire fetch operation
> has to be restarted from the beginning due to an error in one repo. If
> we can reduce how often that occurs, then it's a big win.

I had hoped that libcurl might have some retry mechanisms, since the
curl command-line tool has several --retry-* options. But it looks like
that is all only at the tool level, and the library code doesn't know
anything about it. So we are stuck driving the process ourselves.

I do think you could be leveraging CURLINFO_RETRY_AFTER rather than
implementing your own header parsing, though.
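
Roughly, as a sketch (CURLINFO_RETRY_AFTER needs curl 7.66.0 or newer,
so it would presumably want the usual version #ifdef; I'm also assuming
the slot's curl handle is what you'd query):

  curl_off_t retry_after = 0;

  /*
   * curl parses the Retry-After header for us and reports the delay
   * in seconds; it hands back 0 if the header wasn't present.
   */
  if (curl_easy_getinfo(slot->curl, CURLINFO_RETRY_AFTER, &retry_after))
	retry_after = 0;
  if (retry_after > 0)
	sleep(retry_after);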

>  static int http_request(const char *url,
>  			void *result, int target,
>  			const struct http_get_options *options)
>  {

It looks like you trigger retries only from this function. But this
doesn't cover all http requests that Git makes. That might be sufficient
for your purposes (I think it would catch all of the initial contact),
but it might not (it probably doesn't cover the subsequent POSTs for
fetch negotiation or pack pushes; likewise, I'm not sure it covers much
of anything after a v2 stateless-connect session is established).

>  	struct active_request_slot *slot;
>  	struct slot_results results;
> -	struct curl_slist *headers = http_copy_default_headers();
> +	struct curl_slist *headers;

So here we stop copying the headers at the top of the function...

> [...]
> +retry:
> [...]
> +	headers = http_copy_default_headers();
>  	if (accept_language)
>  		headers = curl_slist_append(headers, accept_language);

And instead set them up totally here. Which makes some sense, because we
wouldn't want to append accept_language over and over. But who frees the
old ones? There is a call to curl_slist_free_all(headers) later in the
function, but it's after your "goto retry". So I think each retry would
leak another copy of the list.
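
(If you do keep the goto, I guess the minimal band-aid is just to drop
the list before jumping back; roughly:

	if (...response says retry...) {
		curl_slist_free_all(headers);
		headers = NULL;
		goto retry;
	}

but that still leaves the gotos to reason about.)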

The ideal thing would probably be to create the header list once, and
then use it for each retry. That would require reordering some of the
setup. If that's too much, then it would be OK to just create a new list
from scratch on each call. Though in the latter case I suspect it may be
simpler to wrap the whole function, like:

  static int http_request(...)
  {
	int try;
	int result;
	for (try = 0; try < max_retries; try++) {
		result = http_request_try(...);
		if (...result is not retryable...)
			break;
	}
	return result;
  }

and then we'd know that the single-try function just needs to be
self-contained, without worrying about gotos jumping around in it.
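
(And the "result is not retryable" check could be a small helper.
Something like this, say, assuming the try-function hands its
slot_results back to the caller, and with the exact set of status
codes up for debate:

  static int retryable(const struct slot_results *results)
  {
	/*
	 * Only retry server states that are plausibly transient;
	 * don't retry curl-level failures here.
	 */
	if (results->curl_result != CURLE_OK)
		return 0;
	return results->http_code == 429 ||
	       results->http_code == 503;
  }

and then the loop body becomes "if (!retryable(&results)) break;".)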

-Peff


