On Fri, Oct 11, 2013 at 3:31 PM, brian m. carlson
<sandals@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, Oct 10, 2013 at 01:14:28AM -0700, Shawn Pearce wrote:
>> Even if you want to live in the fairy land where all servers support
>> 100-continue, I'm not sure clients should pay that 100-160ms latency
>> penalty during ancestor negotiation. Do 5 rounds of negotiation and
>> it's suddenly an extra half second for `git fetch`, and that is a
>> fairly well connected client. Let me know how it works from India to
>> a server on the west coast of the US, where latency might be more
>> like 200ms and 5 rounds is now 1 full second of additional lag.
>
> There shouldn't be that many rounds of negotiation. HTTP retrieves the
> list of refs over one connection, and then performs the POST over
> another two.

Why two connections? This should be a single HTTP connection, with HTTP
Keep-Alive semantics allowing the same TCP stream and the same SSL
stream to be used for all requests. That is nearly equivalent to SSH.
Where SSH wins is the multi_ack protocol, which lets the server talk
while the client is still talking.

> Regardless, you should be using SSL over that connection,
> and the number of round trips required for SSL negotiation in that case
> completely dwarfs the overhead for the 100 continue, especially since
> you'll do it thrice (even though the session is usually reused). The
> efficient way to do push is SSH, where you can avoid making multiple
> connections and reuse the same encrypted connection at every stage.

SSH setup is also not free. Like SSL, it's going to require a round
trip or two on top of what Git needs.
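
Roughly what I have in mind, sketched with libcurl (this is not git's
actual http code; the URL is a placeholder and the pkt-line POST body
is elided). Reusing one easy handle keeps the connection alive, so the
second request pays no extra TCP or TLS setup:

/* Sketch: both smart-HTTP requests over one kept-alive TCP/TLS
 * connection.  URL is a placeholder; a real client sends a pkt-line
 * formatted negotiation body in the POST. */
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();

    /* 1. Ref advertisement (GET) -- opens the TCP + TLS connection. */
    curl_easy_setopt(h, CURLOPT_URL,
        "https://example.com/project.git/info/refs?service=git-upload-pack");
    curl_easy_perform(h);

    /* 2. Negotiation (POST) -- same handle, so libcurl reuses the
     *    still-open connection instead of opening another one. */
    struct curl_slist *hdrs = curl_slist_append(NULL,
        "Content-Type: application/x-git-upload-pack-request");
    curl_easy_setopt(h, CURLOPT_URL,
        "https://example.com/project.git/git-upload-pack");
    curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(h, CURLOPT_POSTFIELDS, "");  /* pkt-line body elided */
    curl_easy_perform(h);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(h);
    curl_global_cleanup();
    return 0;
}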