On Wed, Oct 14, 2020 at 1:55 PM Jeff King <peff@xxxxxxxx> wrote:
>
> On Tue, Oct 13, 2020 at 01:17:29PM -0600, Sean McAllister wrote:
>
> > +/*
> > + * check for a retry-after header in the given headers string, if found, then
> > + * honor it, otherwise do an exponential backoff up to the max on the current
> > + * delay
> > +*/
> > +static int http_retry_after(const struct strbuf headers, int cur_delay_sec)
> > +{
> > +	int delay_sec;
> > +	char *end;
> > +	char* value = http_header_value(headers, "retry-after");
> > +
> > +	if (value) {
> > +		delay_sec = strtol(value, &end, 0);
> > +		free(value);
> > +		if (*value && *end == '\0' && delay_sec >= 0) {
>
> This looks at the contents of the just-freed "value" memory block.
>
Sure does; fixed in v3. I'm disappointed that electric fence didn't
catch this for me...

> > +			if (delay_sec > http_max_delay_sec) {
> > +				die(Q_("server requested retry after %d second,"
> > +				       " which is longer than max allowed\n",
> > +				       "server requested retry after %d seconds,"
> > +				       " which is longer than max allowed\n", delay_sec),
> > +				    delay_sec);
> > +			}
> > +			return delay_sec;
>
> I guess there's no point in being gentle here. We could shrink the
> retry time to our maximum allowed, but the server just told us not to
> bother. But would this die() mask the actual http error we
> encountered, which is surely the more interesting thing for the user?
>
> I wonder if it needs to be returning a "do not bother retrying" value,
> which presumably would cause the caller to propagate the real failure
> in the usual way.
>
I've moved this check up a couple of levels in v3, so that if we get
too large a retry value, we print this message as a warning and quit
retrying, which unmasks the underlying HTTP error. (There's a rough
sketch of the shape I mean at the end of this mail.)

> >  static int http_request(const char *url,
> >  			void *result, int target,
> >  			const struct http_get_options *options)
> >  {
> >  	struct active_request_slot *slot;
> >  	struct slot_results results;
> > -	struct curl_slist *headers = http_copy_default_headers();
> > +	struct curl_slist *headers;
> >  	struct strbuf buf = STRBUF_INIT;
> > +	struct strbuf result_headers = STRBUF_INIT;
>
> This new result_headers strbuf is filled in for every request, but I
> don't see us ever releasing it (whether we retry or not). So I think
> it's leaking for each request.
>
> It sounds like you're going to rework this to put the retry loop
> outside of http_request(), so it may naturally get fixed there. But I
> thought it worth mentioning.
>
Yes, good catch; the new http_request_try version fixes it as a matter
of course.

> > +	curl_easy_setopt(slot->curl, CURLOPT_HEADERDATA, &result_headers);
> > +	curl_easy_setopt(slot->curl, CURLOPT_HEADERFUNCTION, fwrite_buffer);
>
> After looking at your parsing code, I wondered if there was a way to
> just get a single header out of curl. But according to the
> documentation for CURLOPT_HEADERFUNCTION, it will pass back individual
> lines anyway. Perhaps it would be simpler to have the callback
> function understand that we only care about getting "Retry-After".
>
> The documentation says it doesn't support header folding, but that's
> probably OK for our purposes. It's deprecated, and your custom parsing
> doesn't handle it either. :) And most importantly, we won't misbehave
> terribly if we see it in the wild (we'll just ignore that header).
>
I'll put this in my todo pile to think on a little; it'd be nice not
to have to expand the strbuf with every request, but it's also not a
huge overhead. (A rough sketch of the callback idea is at the end of
this mail.)

> -Peff
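
P.S. To make the "do not bother retrying" idea concrete, here is
roughly the shape I have in mind (a sketch only, not the literal v3
hunk; parse_retry_after is a placeholder name). The value is validated
before anything is freed, and an unusable value is reported with a
negative return so the caller can keep the real HTTP error visible:

#include <limits.h>	/* INT_MAX; in-tree these come via git-compat-util.h */
#include <stdlib.h>	/* strtol */

/*
 * Sketch only: return the requested delay in seconds, or -1 if the
 * header value is not a plain non-negative integer.  No die() here;
 * the caller decides whether to retry or give up.
 */
static int parse_retry_after(const char *value)
{
	char *end;
	long delay_sec;

	if (!value || !*value)
		return -1;
	delay_sec = strtol(value, &end, 10);
	if (*end != '\0' || delay_sec < 0 || delay_sec > INT_MAX)
		return -1;
	return (int)delay_sec;
}

The caller would free(value) after parsing, and if the result exceeds
http_max_delay_sec it would warn() and stop retrying so the original
failure propagates as usual.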
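
And for the Retry-After-only callback idea, an untested sketch in plain
libcurl terms; outside of http.c it uses a fixed buffer and its own
case-insensitive prefix helper rather than strbuf/skip_iprefix, and the
struct and function names are just placeholders:

#include <ctype.h>
#include <string.h>
#include <curl/curl.h>

struct retry_after_state {
	char value[64];		/* empty string == no Retry-After seen */
};

/* case-insensitive "does line start with prefix"; returns the rest or NULL */
static const char *skip_prefix_icase(const char *line, const char *prefix)
{
	for (; *prefix; line++, prefix++)
		if (tolower((unsigned char)*line) != tolower((unsigned char)*prefix))
			return NULL;
	return line;
}

static size_t retry_after_header_cb(char *buf, size_t size, size_t nitems,
				    void *userdata)
{
	struct retry_after_state *state = userdata;
	size_t len = size * nitems;	/* curl passes one header line per call */
	char line[256];
	const char *rest;
	size_t n;

	/* the line is not NUL-terminated; copy it somewhere bounded first */
	if (len >= sizeof(line))
		return len;	/* too long to be a sane Retry-After */
	memcpy(line, buf, len);
	line[len] = '\0';

	rest = skip_prefix_icase(line, "retry-after:");
	if (rest) {
		while (*rest == ' ' || *rest == '\t')
			rest++;
		n = strcspn(rest, "\r\n");
		if (n >= sizeof(state->value))
			n = sizeof(state->value) - 1;
		memcpy(state->value, rest, n);	/* keep the raw value for later parsing */
		state->value[n] = '\0';
	}
	return len;	/* tell curl we consumed the whole line */
}

/* hooking it up would then look something like: */
static void set_retry_after_callback(CURL *curl, struct retry_after_state *state)
{
	state->value[0] = '\0';
	curl_easy_setopt(curl, CURLOPT_HEADERDATA, state);
	curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, retry_after_header_cb);
}

After the request completes, the caller would parse state->value as
above and fall back to the exponential backoff when it's empty.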