Jeff King <peff@xxxxxxxx> writes:

> On Thu, Jul 04, 2024 at 09:23:28PM +0000, brian m. carlson wrote:
>
>> > Buffering the entire thing will break because ...? Deadlock? Or is
>> > there anything more subtle going on?
>>
>> When we use the smart HTTP protocol, the server sends keep-alive and
>> status messages as one of the data streams, which is important because
>> (a) the user is usually impatient and wants to know what's going on and
>> (b) it may take a long time to pack the data, especially for large
>> repositories, and sending no data may result in the connection being
>> dropped or the client being served a 500 by an intermediate layer. We
>> know this does happen and I've seen reports of it.
>
> Additionally, I think for non-HTTP transports (think proxying ssh
> through socat or similar), buffering the v0 protocol is likely a total
> disaster. The fetch protocol assumes both sides spewing at each other in
> real time.

Yeah, beyond one "window" that a series of "have"s are allowed to be in
flight, no further "have"s are sent before seeing an "ack/nack" response,
so if you buffer too much, they can deadlock fairly easily.

> ... So I'm OK
> with just telling people "make sure your proxies aren't buffering" as a
> general rule, rather than trying to get into the nitty gritty of what is
> going to break and how.

Sounds fair. Thanks.
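To illustrate the deadlock mechanism: here is a toy model (not Git's actual code; the function name, window size, and 8-byte ack size are made up for illustration) of the windowed have/ack exchange. The client sends one window of "have"s and then blocks until it sees an ack/nak; if a proxy in the server-to-client direction holds the small ack in its buffer waiting for more bytes that will never come, neither side can make progress.

```python
# Hypothetical sketch: why a buffering proxy deadlocks the v0 fetch
# protocol's windowed negotiation. Not real Git code.

def negotiate(window_size, proxy_flush_threshold, total_haves):
    """Return True if negotiation completes, False if it deadlocks."""
    sent = 0
    proxy_buffer = 0  # bytes of server->client acks held by the proxy
    while sent < total_haves:
        # Client sends one window of "have"s, then waits for an ack/nak.
        sent += min(window_size, total_haves - sent)
        # Server replies with a small ack (say, 8 bytes) per window.
        proxy_buffer += 8
        if proxy_buffer >= proxy_flush_threshold:
            proxy_buffer = 0  # proxy flushes; the client sees the ack
        else:
            # The ack is stuck in the proxy. The client never sends more
            # "have"s, so the server never produces more output: deadlock.
            return False
    return True

# A pass-through proxy (flushes every byte) completes; a proxy that
# waits to fill a 4 KiB buffer stalls on the first tiny ack.
print(negotiate(32, 1, 100))     # completes
print(negotiate(32, 4096, 100))  # deadlocks
```

The same shape explains why the advice "make sure your proxies aren't buffering" is the practical rule: the acks are far smaller than any typical aggregation buffer.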