On Wed, Oct 14, 2020 at 05:04:10PM -0700, Jonathan Nieder wrote:

> > Some large projects (Android, Chrome) use git with a distributed
> > backend to feed changes to automated builders and such. We can
> > actually get into a case where DDOS mitigation kicks in and 429s start
> > going out. In that case I think it's pretty important that we honor
> > the Retry-After field so we're good citizens and whoever's running the
> > backend service has some options for traffic shaping to manage load.
> > In general you're right it doesn't matter _that_ much but in at least
> > the specific case I have in my head, it does.
>
> I see. With Peff's proposal, the backend service could still set
> Retry-After, and *modern* machines with new enough libcurl would still
> respect it, but older machines[*] would have to use some fallback
> behavior.
>
> I suppose that fallback behavior could be to not retry at all. That
> sounds appealing to me since it would be more maintainable (no custom
> header parsing) and the fallback would be decreasingly relevant over
> time as users upgrade to modern versions of libcurl and git. What do
> you think?

Yeah, the good-citizen behavior would be to err on the side of not
retrying. And actually, I kind of like that better anyway. The
retryable_code() list has things like 502 in it, which aren't
necessarily temporary errors. If the server didn't give us a hint of
when to retry, perhaps it's not a good idea to do so.

That's slightly orthogonal to the CURLINFO_RETRY_AFTER question. It
would mean an older Git would not retry in this case. But if you're
primarily interested in fixing automated builders overloading your
system, it's probably not that big a deal to make sure they're up to
date (after all, they need the new Git, too ;) ).

If you're hoping to help random developers on various platforms, then
making the feature work with older curl is more compelling. Many
people might upgrade their Git version there, but be stuck with an
older system libcurl.

-Peff