"H. Peter Anvin" <hpa@xxxxxxxxx> wrote:
> Shawn O. Pearce wrote:
>> Chunked Transfer Encoding
>> -------------------------
>>
>> For performance reasons the HTTP/1.1 chunked transfer encoding is
>> used frequently to transfer variable length objects.  This avoids
>> needing to produce large results in memory to compute the proper
>> content-length.
>
> Note: you cannot rely on HTTP/1.1 being supported by an intermediate
> proxy; you might have to handle HTTP/1.0, where the data is terminated
> by connection close.

Well, that proxy is going to be crying when we upload a 120M pack
to it during a push and it buffers the damn thing to figure out the
proper Content-Length, so it can convert the HTTP/1.1 client request
into an HTTP/1.0 request to forward to the server.  That's just
_stupid_.

But from the client side's perspective the chunked transfer encoding
is used only to avoid having to generate the content in advance and
produce the Content-Length header.  I fully expect the encoding to
disappear (e.g. in a proxy, or in the HTTP client library) before
any sort of Git code gets its fingers on the data.

Hence, to your other remark, I _do not_ rely upon the encoding
boundaries remaining intact.  That is why there are Git pkt-line
encodings inside of the HTTP data stream.  We can rely on the
pkt-line encoding being present even if the HTTP chunks were moved
around (or removed entirely) by a proxy.

-- 
Shawn.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html