On Wed, Aug 28, 2013 at 11:08:02PM +0000, Pyeron, Jason J CTR (US) wrote:

> We have systems hosting git which are behind proxies, and unless the
> client sets the http.postBuffer to a large size, the connections
> fail.
>
> Is there a way to set this on the server side? If not, would a patch
> be possible to fix this?

What would it mean to set it on the server? It is the size at which the
client decides to use a "chunked" transfer-encoding rather than
buffering the whole output to send at once. So you'd want to figure out
why the server is upset about the chunked encoding.

> jason.pyeron@hostname /home/jason.pyeron/desktop/projectname
> $ git push remote --all
> Username for 'https://server.fqdn':
> Password for 'https://jpyeron@xxxxxxxxxxx':
> Counting objects: 1820, done.
> Delta compression using up to 4 threads.
> Compressing objects: 100% (1276/1276), done.
> error: RPC failed; result=22, HTTP code = 411
> fatal: The remote end hung up unexpectedly
> Writing objects: 100% (1820/1820), 17.72 MiB | 5.50 MiB/s, done.
> Total 1820 (delta 527), reused 26 (delta 6)
> fatal: The remote end hung up unexpectedly

The server (or the proxy) returns 411, complaining that it didn't get a
Content-Length header. That's because the git http client doesn't know
how big the content is ahead of time (and that's kind of the point of
chunked encoding; the content is streamed).

> jason.pyeron@hostname /home/jason.pyeron/desktop/projectname
> $ git config http.postBuffer 524288000
>
> jason.pyeron@hostname /home/jason.pyeron/desktop/projectname
> $ git push remote --all
> Username for 'https://server.fqdn':
> Password for 'https://jpyeron@xxxxxxxxxxx':
> Counting objects: 1820, done.
> Delta compression using up to 4 threads.
> Compressing objects: 100% (1276/1276), done.
> Writing objects: 100% (1820/1820), 17.72 MiB | 11.31 MiB/s, done.
> Total 1820 (delta 519), reused 26 (delta 6)
> To https://server.fqdn/git/netasset-portal/
>  * [new branch]      master -> master

And here you've bumped the buffer to 500MB, so git will potentially
buffer that much in memory before sending anything. That works for your
17MB packfile: we buffer the whole thing and then send the exact size
ahead of time, appeasing the proxy.

But there are two problems I see with just bumping the postBuffer
value:

  1. You've just postponed the problem. The first 501MB push will fail
     again. You can bump it higher, but you may eventually hit a point
     where your buffer is too big to fit in RAM.

  2. You've lost the pipelining. With a small postBuffer, we stream
     content up to the server as pack-objects generates it. But with a
     large buffer, we generate all of the content before sending the
     first byte (notice how the progress meter, which is generated by
     pack-objects, shows a rate twice as fast in the second case; it is
     not measuring the network at all, just the speed of streaming into
     git-remote-https's buffer).

If the server really insists on a Content-Length header, then we can't
ever fix (2). But we could fix (1) by spooling the packfile to disk and
then sending from there (under the assumption that you have far more
temporary disk space than RAM).

However, if you have control of the proxies, the best thing would be to
tweak their config to stop complaining about the lack of a
Content-Length header (at least in cases where the request uses a
"chunked" Transfer-Encoding). That would solve both issues (without
clients having to change anything).
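For reference, here is roughly what the proxy sees on the wire in each
case (headers trimmed; the exact chunk sizes and byte counts below are
just illustrative). The failing push uses chunked encoding:

    POST /git/netasset-portal/git-receive-pack HTTP/1.1
    Content-Type: application/x-git-receive-pack-request
    Transfer-Encoding: chunked

    4000
    <0x4000 bytes of pack data>
    4000
    <0x4000 more bytes>
    ...
    0

while the fully-buffered push can declare the size up front:

    POST /git/netasset-portal/git-receive-pack HTTP/1.1
    Content-Type: application/x-git-receive-pack-request
    Content-Length: 18581094

    <the entire 18581094-byte pack>

The proxy rejects the first form with 411 because nothing tells it up
front how many bytes are coming.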
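As a sketch of that last option: if the proxy happens to be Apache
httpd's mod_proxy (an assumption on my part; translate to whatever you
actually run), mod_proxy_http can be told to absorb a chunked request
body itself and forward it upstream with a Content-Length, something
like:

    # Assumes Apache httpd acting as the proxy, with mod_proxy_http
    # loaded. proxy-sendcl asks httpd to buffer/spool the chunked
    # request body and send it on with a Content-Length header.
    <Location /git>
        SetEnv proxy-sendcl 1
    </Location>

That puts the buffering burden on the one proxy instead of on every
client, and (modulo the proxy's own limits) it works no matter how big
the push is.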
-Peff