Re: [PATCH 1/2] http: add option to enable 100 Continue responses

On Thu, Oct 10, 2013 at 01:14:28AM -0700, Shawn Pearce wrote:
> If a large enough percentage of users are stuck behind a proxy that
> doesn't support 100-continue, it is hard to rely on that part of HTTP
> 1.1. You need to build the work-around for them anyway, so you might
> as well just make everyone use the work-around and assume 100-continue
> does not exist.

Well, the issue is that 100-continue is needed for functionality in some
cases, unless we want to restart the git-upload-pack command or force
people to use outrageous sizes for http.postBuffer.  My preference
is generally to optimize for sane, standards-compliant behavior first,
and let the people with broken infrastructure turn on options to work
around that breakage.  I realize that git as a project is a little more
tolerant of people's myriad forms of breakage than I am personally.
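
(For anyone following along, the workaround in question amounts to
something like

  git config http.postBuffer 157286400

which, as far as I can tell, makes the client buffer the whole request
in memory and send it with a Content-Length instead of using chunked
transfer-encoding plus Expect: 100-continue; the value here (150 MiB) is
just an example, and it has to exceed the size of your largest push.)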

Regardless, I have a reroll that leaves it disabled by default that I'll
send in a few minutes.

> 100-continue is frequently used when there is a large POST body, but
> those suck for users on slow or unstable connections. Typically the
> POST cannot be resumed where the connection was broken. To be friendly
> to users on less reliable connections than your gigabit office
> ethernet, you need to design the client side with some sort of
> chunking and gracefully retrying. So Git is really doing it all wrong.
> :-)

Yeah, there have been requests for resumable pull/push before.  The
proper way to do it would probably be to send lots of little mini-packs
that each depend on the previous pack sent; if the connection gets
reset, then at least some of the data has been transferred, and
negotiation would restart the next time.  The number of SHA-1s sent
during negotiation would have to increase, though, because you couldn't
guarantee that an entire ref could be transferred each time.  Large
blobs would still be a problem, and efficiency would plummet.
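
To make the shape of that concrete, the client side would be something
like the toy sketch below (nothing here exists in git; transfer_batch
and the object sets are just stand-ins for real pack transfer and
SHA-1s):

    # Each round moves a bounded batch of objects.  A dropped connection
    # only loses the batch in flight; the next round renegotiates from
    # whatever has already landed on the client side.
    def resumable_fetch(server_objects, client_objects, batch_size=100):
        while True:
            missing = server_objects - client_objects
            if not missing:
                return client_objects                  # fully caught up
            batch = set(sorted(missing)[:batch_size])  # one "mini-pack"
            try:
                client_objects |= transfer_batch(batch)
            except ConnectionError:
                continue                               # keep what arrived, retry

    def transfer_batch(batch):
        return batch  # placeholder for sending/indexing an actual mini-pack

The point is that the window of lost work is one mini-pack rather than
the whole pack, at the cost of more negotiation traffic.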

> Even if you want to live in the fairy land where all servers support
> 100-continue, I'm not sure clients should pay that 100-160ms latency
> penalty during ancestor negotiation. Do 5 rounds of negotiation and
> it's suddenly an extra half second for `git fetch`, and that is a
> fairly well connected client. Let me know how it works from India to a
> server on the west coast of the US, latency might be more like 200ms,
> and 5 rounds is now 1 full second of additional lag.

There shouldn't be that many rounds of negotiation.  HTTP retrieves the
list of refs over one connection, and then performs the POST over
another two.  Regardless, you should be using SSL over that connection,
and the number of round trips required for SSL negotiation in that case
completely dwarfs the overhead of a 100-continue exchange, especially since
you'll do it thrice (even though the session is usually reused).  The
efficient way to do push is SSH, where you can avoid making multiple
connections and reuse the same encrypted connection at every stage.
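
For reference, the smart HTTP push exchange looks roughly like this
(example.com/repo.git is a placeholder; fetch has the same shape with
git-upload-pack, and the POST can repeat for each negotiation round):

  GET  https://example.com/repo.git/info/refs?service=git-receive-pack
  POST https://example.com/repo.git/git-receive-pack

Each of those can land on a separate connection, which is where the
extra TLS handshakes come from unless the session gets reused.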

-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187
