That works for me.
On Oct 5, 2012 2:18 PM, "Martin Thomson" <martin.thomson@xxxxxxxxx> wrote:
An improvement. Though the following doesn't read quite right to me:
It is important to consider that there are many -- largely
unpredictable -- factors that can influence the amount of time it
takes a server to process a request. The period of time specified
is not intended to be treated as a strictly defined "hard limit"
but rather as a hint about the client's expectation.
I'm not sure that it's "mysterious forces" at the server that are the
issue in this design. Nor is it the case that this would ever be a
hard limit. As a preference, it's always optional. How about:
Messages spend some time traversing the network and being
processed by intermediaries. This adds to the time a client
waits for a response, over and above any time spent at the
server. A client that has strict timing requirements can estimate
these factors and adjust the wait value accordingly.
Note that "intermediaries" could include the HTTP stack in both client
and server.
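
For illustration only, a rough client-side sketch of that last sentence
(Python; the helper names and the HEAD-based estimate are just assumptions,
not part of the proposal): measure one cheap round trip, treat that as the
network/intermediary overhead, and spend whatever is left of the overall
deadline on the wait value.

  # Rough sketch only: derive a wait value from an overall client deadline
  # by subtracting an estimate of network/intermediary overhead.
  import time
  import urllib.request

  def estimate_overhead(url):
      # One cheap round trip (HEAD) as a stand-in for transit time.
      req = urllib.request.Request(url, method="HEAD")
      start = time.monotonic()
      urllib.request.urlopen(req, timeout=5).close()
      return time.monotonic() - start

  def wait_value(overall_deadline, url):
      # Seconds the client is prepared to let the server itself spend.
      return max(0, int(overall_deadline - estimate_overhead(url)))

  # e.g. send "Prefer: wait=%d" % wait_value(30, "http://example.org/collection")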
And maybe:
As with other preferences, the wait preference could be ignored.
Clients can abandon requests that take longer than they are
prepared to wait.
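
And on the abandonment side, a minimal sketch (Python again, purely
illustrative): the client sends the wait preference but still enforces its
own limit locally, since the server is free to ignore the preference.

  # Minimal sketch: the wait preference is advisory, so the client also sets
  # a local timeout and abandons the request if that is exceeded.
  import socket
  import urllib.error
  import urllib.request

  def post_with_wait(url, body, wait):
      # body must be bytes, e.g. b"{Data}"
      req = urllib.request.Request(
          url, data=body,
          headers={"Content-Type": "text/plain",
                   "Prefer": "return-asynch, wait=%d" % wait})
      try:
          # A little slack on top of wait to cover transit time.
          return urllib.request.urlopen(req, timeout=wait + 5)
      except (socket.timeout, urllib.error.URLError):
          return None  # abandoned; the server may still be processing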
--Martin
On 5 October 2012 11:28, James M Snell <jasnell@xxxxxxxxx> wrote:
> How about the following change:
>
> <snip>
>
> The "wait" preference can be used to establish an upper bound on the
> length of time, in seconds, the client expects it will take the server
> to process the request once it has been received. In the case that
> generating a response will take longer than the time specified,
> the server, or proxy, can choose to utilize an asynchronous processing
> model by returning -- for example -- a "202 Accepted" response.
>
> ABNF:
> wait = "wait" BWS "=" BWS delta-seconds
>
> It is important to consider that there are many -- largely
> unpredictable -- factors that can influence the amount of time it
> takes a server to process a request. The period of time specified
> is not intended to be treated as a strictly defined "hard limit"
> but rather as a hint about the client's expectation.
>
> For example, a server receiving the following request might choose
> to respond asynchronously if processing the request will take longer
> than 10 seconds:
>
> POST /collection HTTP/1.1
> Host: example.org
> Content-Type: text/plain
> Prefer: return-asynch, wait=10
>
> {Data}
>
> </snip>
>
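
A rough server-side sketch of the behaviour described in the quoted text
(Python/WSGI; the processing-time estimate and the background hand-off are
placeholders, not anything the draft specifies):

  # Rough sketch: honour "wait" when the work fits, otherwise fall back to
  # asynchronous processing and answer 202 Accepted immediately.
  import re
  import threading

  def parse_wait(prefer_header):
      # Pull delta-seconds out of e.g. "return-asynch, wait=10".
      m = re.search(r"\bwait\s*=\s*(\d+)", prefer_header or "")
      return int(m.group(1)) if m else None

  def process(body):
      # Placeholder for the real (possibly slow) work.
      return body

  def estimate_processing_time(body):
      # Placeholder heuristic: a second per kilobyte of input.
      return len(body) / 1024.0

  def start_background_job(body):
      # Placeholder: hand the work off so the 202 can go out right away.
      threading.Thread(target=process, args=(body,), daemon=True).start()

  def application(environ, start_response):
      wait = parse_wait(environ.get("HTTP_PREFER"))
      length = int(environ.get("CONTENT_LENGTH") or 0)
      body = environ["wsgi.input"].read(length)

      if wait is not None and estimate_processing_time(body) > wait:
          start_background_job(body)
          start_response("202 Accepted", [("Content-Length", "0")])
          return [b""]

      result = process(body)
      start_response("200 OK", [("Content-Type", "text/plain")])
      return [result]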
> On Fri, Oct 5, 2012 at 11:12 AM, Martin Thomson <martin.thomson@xxxxxxxxx>
> wrote:
>>
>> On 5 October 2012 10:42, James M Snell <jasnell@xxxxxxxxx> wrote:
>> > I could drop the Date header recommendation altogether and stress in the
>> > text that good clock synchronization and predictable latency are required
>> > for
>> > the wait preference to be used effectively.
>>
>> The feature is useful, I agree. The problem is that - as defined -
>> the server needs to guess something about the times on the client in
>> order to implement this reliably.
>>
>> Relying on clock synchronization is not realistic. Even in controlled
>> environments, errors are commonplace.
>>
>> Even the simple case shows a problem:
>> a: client sends request
>> b: server receives request
>> c: time passes
>> d: server responds to request
>> e: client receives response
>>
>> You require that the time be a measure of a->e. The server has no way
>> to determine what that time is.
>>
>> An alternative would be to make the requirement apply to b->d. That
>> is something that the server has direct control over. The client then
>> gains a little extra work, but at least it is in a position to
>> measure a->b + d->e. In any case, with low or predictable latency, I
>> doubt that the addition of a->b + d->e will have any significant
>> impact on whether the information is useful to the client, especially
>> given that times are expressed in seconds, not microseconds.
>
>
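
To make the a..e bookkeeping above concrete, a toy simulation (Python, with
made-up latencies) of which intervals each side can actually measure:

  # Toy simulation: the client observes only a and e, the server only b and d,
  # so each side can directly measure exactly one of the candidate intervals.
  import time

  def simulate(request_transit, processing, response_transit):
      a = time.monotonic()            # a: client sends request
      time.sleep(request_transit)
      b = time.monotonic()            # b: server receives request
      time.sleep(processing)          # c: time passes
      d = time.monotonic()            # d: server responds
      time.sleep(response_transit)
      e = time.monotonic()            # e: client receives response

      print("client measures a->e: %.2fs" % (e - a))
      print("server measures b->d: %.2fs" % (d - b))
      print("transit a->b + d->e:  %.2fs" % ((b - a) + (e - d)))

  simulate(request_transit=0.2, processing=1.0, response_transit=0.2)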