Re: [PATCH 1/2] http: add option to enable 100 Continue responses

On Wed, Oct 9, 2013 at 6:35 PM, brian m. carlson
<sandals@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, Oct 09, 2013 at 05:37:42PM -0400, Jeff King wrote:
>> On Wed, Oct 09, 2013 at 02:19:36PM -0700, Shawn O. Pearce wrote:
>> > 206b099 was written because the Google web servers for
>> > android.googlesource.com and code.google.com do not support
>> > 100-continue semantics. This caused the client to stall a full 1
>> > second before each POST exchange. If ancestor negotiation required
>> > O(128) have lines to be advertised I think this was 2 or 4 POSTs,
>> > resulting in 2-4 second stalls above the other latency of the network
>> > and the server.
>>
>> Yuck.
>
> Shame on Google.  Of all people, they should be able to implement HTTP
> 1.1 properly.

Heh. =)

If a large enough percentage of users are stuck behind a proxy that
doesn't support 100-continue, it is hard to rely on that part of HTTP
1.1. You need to build the work-around for them anyway, so you might
as well just make everyone use the work-around and assume 100-continue
does not exist.
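The work-around amounts to never sending the Expect header and just streaming the body immediately. A minimal sketch of that client-side decision (the header names are real HTTP/1.1; `build_post_headers` and the git content type shown are just illustrative, not an actual Git or libcurl API):

```python
# Sketch of the 100-continue work-around: since some servers and
# proxies never answer "100 Continue", the client can simply omit
# the Expect header and send the POST body immediately for everyone.

def build_post_headers(body_len, use_expect=False):
    """Return headers for a POST; use_expect is the optimistic HTTP/1.1 path."""
    headers = {
        "Content-Type": "application/x-git-upload-pack-request",
        "Content-Length": str(body_len),
    }
    if use_expect:
        # Optimistic path: ask permission first, then wait (with a
        # fallback timeout, e.g. 1 second) for "HTTP/1.1 100 Continue"
        # before transmitting the body.
        headers["Expect"] = "100-continue"
    return headers
```

With `use_expect=False` for every server, the 1-second stall against servers that ignore the header disappears, at the cost of sometimes sending a body the server would have rejected.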

100-continue is frequently used when there is a large POST body, but
large POSTs suck for users on slow or unstable connections. Typically
the POST cannot be resumed from where the connection broke. To be
friendly to users on less reliable connections than your gigabit
office ethernet, you need to design the client side with some sort of
chunking and graceful retrying. So Git is really doing it all wrong.
:-)
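A chunking-and-retry design might look something like this sketch. `send_chunk` here is an assumed transport callback (not a real Git or libcurl API); the point is that a dropped connection retries one chunk from its last acknowledged offset instead of restarting the whole POST:

```python
# Sketch of a resumable upload: split the body into chunks and retry
# each chunk at its offset, so a broken connection does not force the
# client to re-send everything. send_chunk(offset, data) is an assumed
# transport function that raises ConnectionError on failure.

CHUNK_SIZE = 4  # tiny for illustration; a real client would use far more

def resumable_upload(body, send_chunk, max_retries=3):
    """Upload body in CHUNK_SIZE pieces; resume at the failed offset."""
    offset = 0
    while offset < len(body):
        chunk = body[offset:offset + CHUNK_SIZE]
        for _attempt in range(max_retries):
            try:
                send_chunk(offset, chunk)
                break            # chunk acknowledged, advance
            except ConnectionError:
                continue         # retry the same offset
        else:
            raise RuntimeError("chunk at offset %d failed" % offset)
        offset += len(chunk)
    return offset
```

The server side needs to accept ranged/offset writes for this to work, which is exactly the protocol machinery Git's smart HTTP transport does not have today.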

Properly using 100-continue adds a full RTT to any request that uses
it. If the RTT from an end user to the server is already 100-160ms on
the public Internet, using 100-continue just added an extra 160ms of
latency to whatever the operation was. That is hardly useful to
anyone. During that RTT the server has resources tied up associated
with that client connection. For your 10-person workgroup server this
is probably no big deal; at scale it can be a daunting additional
resource load.

Etc.


Even if you want to live in the fairy land where all servers support
100-continue, I'm not sure clients should pay that 100-160ms latency
penalty during ancestor negotiation. Do 5 rounds of negotiation and
it's suddenly an extra half second for `git fetch`, and that is a
fairly well connected client. Let me know how it works from India to a
server on the west coast of the US, latency might be more like 200ms,
and 5 rounds is now 1 full second of additional lag.
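The arithmetic above is just rounds times round-trip time, since each negotiation round pays one extra round trip waiting for the interim response before the body can go out:

```python
# Back-of-the-envelope cost of 100-continue during ancestor
# negotiation: each round waits one extra RTT for "100 Continue".

def extra_latency_ms(rounds, rtt_ms):
    return rounds * rtt_ms

# 5 rounds on a well-connected ~100ms link vs. India to US west coast ~200ms
print(extra_latency_ms(5, 100))  # 500ms, the "extra half second"
print(extra_latency_ms(5, 200))  # 1000ms, a full second of added lag
```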