Re: Issues with git clone over HTTP/2 and closed connections

On Sat, Sep 23, 2023 at 12:58:09PM +0000, David Härdeman wrote:

> By running "GIT_CURL_VERBOSE=1 git clone https://example.com/myrepo.git", I noticed that:
> 
>   a) HTTP/2 was being used; and
>   b) just before the error the server returned a GOAWAY [1]:
>      "== Info: received GOAWAY, error=0, last_stream=1999"
> 
> On the client side I'm using Debian Unstable (libcurl 8.3.0, git
> 2.40.1), and the server is running Debian Stable (nginx 1.22.1-9).
> 
> nginx will, by default, close HTTP/2 connections after
> "http2_max_requests" requests (default: 1000; client-initiated
> streams use odd IDs, so the 1000th request is stream 1999, matching
> the last_stream=1999 above), and it does so by sending GOAWAY, which
> seems to confuse git/libcurl.
> 
> And sure enough, after running "git config --global http.version
> HTTP/1.1" on the client and trying again, the "git clone" was
> successful (I'm guessing I could/should also bump http2_max_requests
> on the server).

Thanks for the detailed report. Your analysis all makes sense to me.
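
As an aside, if you do keep the HTTP/1.1 workaround, you can scope it
to just that host rather than flipping it globally; roughly (with
example.com standing in for your server):

  # limit the HTTP/1.1 fallback to this one remote
  git config --global http.https://example.com/.version HTTP/1.1

Or bump the limit on the server instead; I believe older nginx uses
http2_max_requests for this, and 1.19.7 and later fold it into
keepalive_requests (which then covers HTTP/2 connections, too).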

> From what I understand, git should close the connection, open a new
> one, and resume the clone operation instead of erroring out (because
> the GOAWAY message by itself could mean anything).
> 
> Is this a known bug and is it something that would need to be fixed in
> libcurl or in git?

I don't think we've heard of such a problem before with Git. I don't
know enough about GOAWAY to comment on the correct behavior, but this is
almost certainly a curl issue, not a Git one. All of the connection
handling, reuse, etc., is happening invisibly at the curl layer.

It's probably worth poking around libcurl's issue tracker. This seems
like it might be related:

  https://github.com/curl/curl/issues/11859
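
If you do end up filing something there, a full trace of what curl is
doing is usually more useful than the verbose output; something like
(GIT_TRACE_CURL_NO_DATA just keeps the pack contents out of the log):

  GIT_TRACE_CURL=/tmp/curl.trace GIT_TRACE_CURL_NO_DATA=1 \
    git clone https://example.com/myrepo.git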

And one final comment: a thousand requests is a lot for one clone. That
plus the error you are seeing from Git makes me think you're using the
"dumb" http protocol (i.e., your webserver is not set up to run the
server side of Git's smart protocol, so it is just serving files
blindly).
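
One quick way to check which you're getting (again with example.com as
a placeholder): the smart protocol answers the ref advertisement with a
special content type, while dumb http just serves the static info/refs
file with whatever type your server picks for it.

  curl -s -D - -o /dev/null \
    'https://example.com/myrepo.git/info/refs?service=git-upload-pack'
  # smart: Content-Type: application/x-git-upload-pack-advertisement
  # dumb:  text/plain or application/octet-stream (the raw file)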

I don't know whether using it is intentional. But the smart protocol
is much more efficient, and in general I would expect it to have fewer
corner cases (none of the major forges allow dumb http at all).

You can find more details on setting it up in "git help http-backend".

If you do want to keep using the dumb protocol, consider running "git
gc" on the server-side repository. A thousand requests implies you have
many loose objects, which could be served much more efficiently as a
single pack.
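
Something like this in the server-side repository would show how bad it
is and clean it up:

  git count-objects -v   # "count" is the number of loose objects
  git gc                 # packs them into a single packfile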

-Peff


