Re: Last Call: <draft-snell-http-prefer-14.txt> (Prefer Header for HTTP) to Proposed Standard

How about the following change:

<snip>

  The "wait" preference can be used to establish an upper bound on the 
  length of time, in seconds, the client expects it will take the server 
  to process the request once it has been received. In the case that 
  generating a response will take longer than the time specified, 
  the server, or proxy, can choose to utilize an asynchronous processing 
  model by returning -- for example -- a "202 Accepted" response.

  ABNF:
    wait = "wait" BWS "=" BWS delta-seconds

  It is important to consider that there are many -- largely 
  unpredictable -- factors that can influence the amount of time it 
  takes a server to process a request. The period of time specified 
  is not intended to be treated as a strictly defined "hard limit" 
  but rather as a hint about the client's expectation.
            
  For example, a server receiving the following request might choose 
  to respond asynchronously if processing the request will take longer 
  than 10 seconds:

    POST /collection HTTP/1.1
    Host: example.org
    Content-Type: text/plain
    Prefer: return-asynch, wait=10
  
    {Data}

</snip>
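
As a rough illustration of the server-side decision the proposed text
describes (not part of the draft wording), a handler might parse the
"wait" preference and fall back to asynchronous processing when it
cannot meet the bound. The parse_prefer() and
estimate_processing_seconds() helpers and the /queue/12345 status URL
below are assumptions of this sketch only:

  def parse_prefer(header_value):
      """Parse a Prefer header value into a dict of preference -> value.

      Handles only simple "name" and "name=value" tokens, not the full ABNF.
      """
      prefs = {}
      for token in header_value.split(","):
          name, _, value = token.strip().partition("=")
          if name:
              prefs[name.strip().lower()] = value.strip() or None
      return prefs

  def handle_request(headers, estimate_processing_seconds):
      """Return (status, headers, body); use 202 if the wait bound cannot be met."""
      prefs = parse_prefer(headers.get("Prefer", ""))
      wait = prefs.get("wait")
      if wait is not None and wait.isdigit():
          if estimate_processing_seconds() > int(wait):
              # The client indicated it expects processing within "wait"
              # seconds; switch to asynchronous processing and point at a
              # status resource (the /queue/... URL is purely illustrative).
              return 202, {"Location": "/queue/12345"}, b""
      # Otherwise process synchronously as usual.
      return 200, {}, b"done"

For the example request above, parse_prefer("return-asynch, wait=10")
yields {"return-asynch": None, "wait": "10"}, and the server answers
202 only when its own estimate exceeds ten seconds.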

On Fri, Oct 5, 2012 at 11:12 AM, Martin Thomson <martin.thomson@xxxxxxxxx> wrote:
On 5 October 2012 10:42, James M Snell <jasnell@xxxxxxxxx> wrote:
> I could drop the Date header recommendation altogether and stress in the
> text that good clock synchronization and predictable latency are required
> for the wait preference to be used effectively.

The feature is useful, I agree.  The problem is that - as defined -
the server needs to guess something about the times on the client in
order to implement this reliably.

Relying on clock synchronization is not realistic.  Even in controlled
environments, errors are commonplace.

Even the simple case shows a problem:
  a: client sends request
  b: server receives request
  c: time passes
  d: server responds to request
  e: client receives response

You require that the time be a measure of a->e.  The server has no way
to determine what that time is.

An alternative would be to make the requirement apply to b->d.  That
is something that the server has direct control over.  The client then
gains a little extra work, but at least they are in a position to
measure a->b + d->e.  In any case, with low or predictable latency, I
doubt that the addition of a->b + d->e will have any significant
impact on whether the information is useful to the client.  Especially
given that times are expressed in seconds, not microseconds.
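
As a rough sketch of that client-side accounting (mine, not from the
thread): if "wait" is redefined to bound b->d, the client can estimate
the network legs a->b + d->e from a prior lightweight exchange, assuming
that exchange involves negligible server processing, and add them to the
server budget when choosing its own timeout. The helper names are
illustrative only:

  import math
  import time
  import urllib.request

  def measured_round_trip(url):
      """Rough a->b + d->e estimate: time a lightweight request whose
      server-side processing is assumed to be negligible."""
      start = time.monotonic()
      urllib.request.urlopen(url, timeout=5).read()
      return time.monotonic() - start

  def client_timeout(url, wait_seconds):
      """Total time the client should allow: the server budget (b->d)
      plus the observed network legs, rounded up to whole seconds."""
      return wait_seconds + math.ceil(measured_round_trip(url))

Since the granularity is whole seconds, the rounding error in this kind
of estimate should not matter in practice.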

