Re: [PATCH v3 0/4] Additional FAQ entries

On Thu, Jul 04, 2024 at 09:23:28PM +0000, brian m. carlson wrote:

> > Buffering the entire thing will break because ...?  Deadlock?  Or is
> > there anything more subtle going on?
> 
> When we use the smart HTTP protocol, the server sends keep-alive and
> status messages as one of the data streams, which is important because
> (a) the user is usually impatient and wants to know what's going on and
> (b) it may take a long time to pack the data, especially for large
> repositories, and sending no data may result in the connection being
> dropped or the client being served a 500 by an intermediate layer.  We
> know this does happen and I've seen reports of it.

Additionally, I think for non-HTTP transports (think proxying ssh
through socat or similar), buffering the v0 protocol is likely a total
disaster. The fetch protocol assumes both sides are spewing at each other
in real time.
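
To make the socat case concrete, here is a rough sketch (hostnames and
ports are made up) of the kind of pass-through proxy people set up. socat
relays bytes in both directions as they arrive, which is what the v0
dialogue needs; a proxy that held a whole direction in a buffer before
forwarding it could stall the want/have negotiation entirely:

    # Hypothetical setup: forward local port 2222 to an internal host's sshd.
    socat TCP-LISTEN:2222,fork,reuseaddr TCP:git.internal.example:22

    # The client then fetches through the proxy:
    git fetch ssh://user@proxy.example:2222/path/to/repo.git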

HTTP, even v0, follows a request/response model, so we're safer there. I
do think some amount of buffering is often going to be OK in practice.
You'd get delayed keep-alives and progress reports, which may range from
"annoying" to "something in the middle decided to time out". So I'm OK
with just telling people "make sure your proxies aren't buffering" as a
general rule, rather than trying to get into the nitty gritty of what is
going to break and how.
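
For the HTTP case, "make sure your proxies aren't buffering" usually
comes down to something like the following nginx sketch (names and ports
are made up, and the exact directives depend on which proxy you run):

    # Hypothetical nginx front-end for a smart HTTP git server.
    # Disabling buffering lets keep-alives, progress messages, and the
    # pack stream flow through as they are produced.
    location /git/ {
        proxy_pass               http://127.0.0.1:8080;
        proxy_buffering          off;   # don't hold back the response body
        proxy_request_buffering  off;   # pass the client's upload through as it arrives
        proxy_read_timeout       300s;  # still allow time for slow pack generation
    }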

-Peff



