Hi Bob,
Thanks (again) for the review.
We've been working through the editorial issues and have cut release -11 to address the ones we agreed with. Responses to the technical issues you presented are in-line below:
On Tue, Nov 30, 2021 at 12:22 AM Bob Briscoe via Datatracker <noreply@xxxxxxxx> wrote:
Reviewer: Bob Briscoe
Review result: Almost Ready
This document has been reviewed as part of the transport area review team's
ongoing effort to review key IETF documents. These comments were written
primarily for the transport area directors, but are copied to the document's
authors and WG to allow them to address any issues raised and also to the IETF
discussion list for information.
When done at the time of IETF Last Call, the authors should consider this
review as part of the last-call comments they receive. Please always CC
tsv-art@xxxxxxxx if you reply to or forward this review.
Version reviewed:
Originally draft-ietf-httpbis-priority-09, but also checked -10 diff.
==Summary==
The move to an e2e request and hop-by-hop response is a good one.
I'm not sure how ready this is, until I see how the authors respond to my
questions about the interaction model and whether the client can say "I dunno,
you tell me" to the server (T#4c, T#5a and T#9a).
I think all of my other points are 'just' holes in the coverage of each aspect
of the protocol, but some will be quite involved to fill. There's a lot of
vagueness still to be tied down, I'm afraid.
Three sets of comments below:
* Gaps (G#): 2
* Technical points or suggested modifications to normative text (T#) 13
* Editorial points (E#) 29
______________________________________________________________
==Gaps==
G#1 Implementation status section?
This review would have really benefited from an implementation status section.
You will see I got suspicious that some of the sections had been written
without the benefit of any implementation or operational experience. While
others seemed stronger. If the implementation status had been written up, I
wouldn't have had to guess.
Here's my guesses at what has been implemented, given the waffle factor of the
relevant sections ;)
* e2e priority protocol handlers and schedulers: most mature
* intermediary priority protocol handlers and schedulers: not so mature
* automated priority header generation, APIs: not so mature
* priority of server push, retransmissions, probes: just ideas in theory?
* investigation of deadlocks, loops, etc: early days.
The short answer is that HTTP prioritization is largely optional. Signals are hints to the server in charge of the multiplexed connection. Servers have a self-interest in serving requests in a timely manner, balanced against other needs such as resource usage and DoS avoidance. The sections that describe scheduling give basic recommendations that let clients form some expectation of how responses would get prioritized should the stars align; they are not a precise algorithm that all implementations will follow exactly, because the stars rarely align. Those sections attempt to spell out the considerations arising from the protocols this draft relates to. We believe the editorial changes made in -11 make it clearer where the text is offering considerations rather than authoritative direction on what to do.
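To make "basic recommendations" a bit more concrete, here is a rough sketch of the kind of scheduling Section 10 describes: lower urgency values are served first, non-incremental responses of the same urgency are served one at a time in request order, and incremental responses of the same urgency are interleaved. Everything here (the Response class, schedule(), the chunk size) is illustrative rather than normative or taken from a real server:

    from collections import deque

    class Response:
        def __init__(self, stream_id, urgency=3, incremental=False, data=b""):
            self.stream_id = stream_id
            self.urgency = urgency          # 0 (highest precedence) .. 7 (lowest)
            self.incremental = incremental  # can the client use partial data?
            self.data = data

    def schedule(responses, chunk=1024):
        # Yield (stream_id, bytes) chunks in a plausible serving order.
        for urgency in range(8):            # lower urgency value = served earlier
            bucket = [r for r in responses if r.urgency == urgency]
            # Non-incremental responses: serve each to completion, in request order.
            for r in (r for r in bucket if not r.incremental):
                for off in range(0, len(r.data), chunk):
                    yield r.stream_id, r.data[off:off + chunk]
            # Incremental responses: round-robin so each makes steady progress.
            queue = deque((r, 0) for r in bucket if r.incremental)
            while queue:
                r, off = queue.popleft()
                yield r.stream_id, r.data[off:off + chunk]
                if off + chunk < len(r.data):
                    queue.append((r, off + chunk))

A real server would also weigh all the other pressures mentioned above, so this is only the skeleton.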
G#2 Performance evaluation?
Priorities are about improving performance. This is a stds track draft about a
core IETF protocol. But there is nothing in this draft pointing to any studies
that quantify how much performance is improved (or not) by the different
aspects of the protocol. Ideally there would be a study comparing the HTTP/2
priority approach with this one. Is that because the studies don't exist, or
just an omission?
We simply overlooked citing them, which was noted in other reviews. We added links to work from Robin Marx and Pat Meenan that motivated this document and its design features.
______________________________________________________________
==Technical Points and Modifications to Normative Statements==
§2. Motivation for Replacing RFC 7540 Priorities
T#2a) Grass is always greener on the other side of the fence?
RFC 7540 priority is expressed relative to other requests on the same
connection. Many requests are generated without knowledge of how
other requests might share a connection, which makes this difficult
to use reliably
This says why relative values were difficult, but it doesn't say why or whether
absolute values will be better. Is there lots of experience of absolute values
being sufficient and easier to use? or any cases where absolute values might be
insufficient? It seems obvious that at run-time you might hit the end of the
number space, i.e. have to pile up objects on the same urgency value at the
edge of the space when you really wanted some objects to have higher (or lower)
urgency. There is a mention of the possibility of creating an extension with
intermediate urgency values, but what does an implementation do when it hits
this problem in the middle of a session? It can't quickly jump out of run-time,
convene a design team to get a new scheme extension agreed then go back to
run-time and complete the session.
Dependencies between the requests in a connection are a property of the connection. This poses challenges to acting on priority signals when passing them to components/nodes that are not part of the connection, and that's friction against some of the ways HTTP commonly operates in practice, for example intermediaries or servers that split or coalesce requests from different connections. The urgency and incremental parameters have been deemed sufficient for a web browsing use case, which was our focus, without being too fine-grained and hard for developers to reason about. If other use cases encounter limitations or problems with this scheme, I do encourage them to bring that back to the HTTP WG so we can consider work on extensions that address them.
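To give a feel for how a browser might project its own knowledge onto just these two parameters, here is a purely hypothetical mapping; none of these specific values come from the draft, and real browsers make their own choices:

    # Hypothetical mapping from resource type to a Priority field value;
    # illustrative only, not mandated by the draft.
    BROWSER_HINTS = {
        "document":   "u=0",      # main HTML: highest urgency
        "stylesheet": "u=1",      # render-blocking
        "script":     "u=1",
        "font":       "u=2",
        "image":      "u=3, i",   # default urgency, consumed progressively
        "prefetch":   "u=6",      # speculative, near the bottom of the range
    }

    def priority_for(resource_type):
        # Anything the browser has no opinion about gets no header at all,
        # which is equivalent to the defaults (u=3, not incremental).
        return BROWSER_HINTS.get(resource_type)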
T#2b) Motivation for replacing 7540 included removal of dependencies?
The Security Considerations says that one of the motivations for replacing 7540
was that "Extensible priorities does not use dependencies, which avoids these
[resource loop DoS attack] issues." Draft-09 listed this as one of the
motivations in §2, but in draft-10 it has been removed from §2. If it is still
a motivation, it ought to be listed in §2, not just in Security Considerations.
Security Considerations seems to be in the commonly used style of just a list
of pointers to other parts of the draft. So it would be consistent to say this
in §2 not just in Security Considerations, which even says "Considerations are
presented to implementations, describing how..." as though the details are
elsewhere in the document.
Whatever, given this seems to have been an important motivation, please try to
describe this issue in a self-contained way, rather than talking obliquely in a
way that requires the reader to refer to the CERT advisory (e.g. "...some types
of problems...").
The text was written before the 7540bis activity started. We've been shifting bits and pieces of the RFC 7540 problems to that venue. This one has been removed because it seemed like a distraction from the topic of this document.
§2.1. Disabling RFC 7540 Priorities
T#2c) Incremental deployment
Two perhaps obvious but unstated things ought to be stated:
i) An HTTP session will always _function_ even if all priority information is
ignored; it just might perform badly.
ii) The semantics of the SETTINGS_NO_RFC7540_PRIORITIES setting is intended to
apply to both directions (if it is?). When it says
"A server that receives SETTINGS_NO_RFC7540_PRIORITIES with a value of 1 MUST
ignore HTTP/2 priority signals."
I assume after "MUST ignore" it intends to add "...and MUST NOT send...".
I assume this is stated as "server MUST ignore" rather than a protocol error,
because the HTTP/2 priority signals might have come from an intermediary that
doesn't understand the SETTINGS_NO_RFC7540_PRIORITIES setting.
Also, it is surely a protocol error if one endpoint sets
SETTINGS_NO_RFC7540_PRIORITIES to the opposite of the other. Or if a node sends
a header after it has said it won't.
If a client sets SETTINGS_NO_RFC7540_PRIORITIES to 1, but the server doesn't
understand this setting, and later sends HTTP/2 priority signals (perhaps in
response to an intermediary), what happens? [As I pointed out in my review of
RFC7540 Priorities (when it was a draft but after IESG approval), it wasn't
clear whether priority messages were only sent in the C-S direction, or also
the reverse. I didn't receive a reply on that point and the RFC is still not
clear. https://lists.w3.org/Archives/Public/ietf-http-wg/2015JanMar/0529.html ]
Francesca also mentioned this in the AD review. RFC 7540 seems to have allowed server-to-client signals on the wire but didn't specify what, if anything, anyone should do with them. I'm not aware of any cases of this signal being used in the wild. I created an issue at https://github.com/httpwg/http2-spec/issues/1000 and there's further discussion there. 7540bis deprecates stream prioritization; all that is left are the bits on the wire, which remain for wire compatibility.
SETTINGS_NO_RFC7540_PRIORITIES is an optimization related to the carriage, processing and application of signals. Because these signals are only hints, there is no need for protocol errors as long as they have a valid wire format. Since we are focused on the C->S direction of the signal, and the world never defined what RFC 7540 S->C signals really meant, our document doesn't benefit from trying to speak about them. So we define our terms of use in the Notational Conventions and use them consistently throughout the document.
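To illustrate the "ignore, don't error" behaviour, here's a minimal sketch of a server handling the setting. The ConnectionState class and the on_settings/on_priority_frame hooks are hypothetical, not the API of any real library, and the setting code point shown is only a placeholder (see the IANA section for the registered value):

    SETTINGS_NO_RFC7540_PRIORITIES = 0x9  # placeholder; use the registered value

    class ConnectionState:
        def __init__(self):
            self.rfc7540_priorities_disabled = False

        def on_settings(self, identifier, value):
            if identifier == SETTINGS_NO_RFC7540_PRIORITIES and value == 1:
                self.rfc7540_priorities_disabled = True

        def on_priority_frame(self, frame):
            if self.rfc7540_priorities_disabled:
                return  # well-formed hint, silently ignored rather than erroring
            self.apply_rfc7540_priority(frame)

        def apply_rfc7540_priority(self, frame):
            pass  # legacy RFC 7540 dependency-tree handling, out of scope here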
§4. Priority Parameters
T#4a) Vagueness permeates what intermediaries do in this draft
Intermediaries can consume and produce priority signals in a
...PRIORITY_UPDATE frame or Priority header field.
...Replacing or adding a Priority header field overrides
any signal from a client and can affect prioritization for all
subsequent recipients.
* Do intermediaries really both consume and produce priority signals. Always?
In both directions? What does 'consume' mean (absorb and not forward, or read
and forward)?
They can according to HTTP Semantics.
* Can they really use either type of frame? Always?
They can; it depends on the versions of HTTP being used upstream and downstream.
* How does adding a priority header override any signal from a client? Or is it
only replacing that overrides?
My later comment asking for a more precise statement of the protocol's
interaction model ought to resolve these issues as well.
It all depends on how HTTP intermediaries convert between versions. Frames are connection-level, while headers might be e2e or hop-by-hop. This document shouldn't litigate the matter any more than it already does.
T#4b) Really only C-S direction?
PRIORITY_UPDATE frame preserves the signal from the client, but...
...overrides any signal from a client...
Also used for S-C direction?
Given this part of the draft seems to have been written solely about the C-S
direction, perhaps it would be better to admit that is a good way to structure
the draft with C-S first. Then add another section about S-C, and perhaps
another about S-Int. The alternative of adding to all the definitions to cover
all directions and interactions, might become incomprehensible.
This is different from HTTP/2 PRIORITY frames (as mentioned above). By definition PRIORITY_UPDATE is only allowed in the C->S direction, which eliminates the need to document the reverse direction.
§4.2. Incremental
T#4c) Client doesn't always have prerequisite info to set incremental parameter
There will surely be cases where the MIME type of the response (and therefore
whether the client can render it incrementally) is not known, or cannot be
guessed by the client when it requests a resource, or when it starts content
negotiation? For instance, the client might have listed MIME types in its
Accept list, some of which are incremental, and some not.
The server can't override a client 'not incremental' message by stating that
the MIME type it has served is incremental. Because, when the client says 'not
incremental', that is intended to state the capability of the client, not the
format of the resource.
Perhaps the HTML that gave the client the hyperlink that was selected to get
the resource could also include a tag giving the MIME type of the hyperlinked
resource? Or perhaps the idea is that the client has to send a PRIORITY_UPDATE
once it knows the MIME type (by which time it might be too late)?
That's a fair point. Unfortunately, there will be cases where parties lack all of the information that could lead to perfect prioritization. Client priority signals are only a hint. Servers can and will do whatever they like, including serving the response in a way that does not follow the recommendations we provide for handling the incremental parameter in Section 10. There are plenty of additional means, outside this specification, that clients and servers can use to augment their understanding of priority; there is no need to enumerate them in this document.
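On the reprioritization idea the review raises: one hypothetical shape of it is sketched below, assuming a client-side hook that fires when response headers arrive and an assumed send_priority_update() helper that maps to a PRIORITY_UPDATE frame. Nothing here is prescribed by the draft; it only shows that the signal can be revised once better information exists.

    # Hypothetical client logic: once the Content-Type is known, revise the
    # earlier hint. The server may or may not act on the update.
    PROGRESSIVE_TYPES = {"text/html", "image/jpeg", "image/png"}  # illustrative

    def on_response_headers(stream_id, headers, send_priority_update):
        content_type = headers.get("content-type", "").split(";")[0].strip()
        if content_type in PROGRESSIVE_TYPES:
            # The client now knows it can consume partial data, so it asks for
            # incremental delivery while keeping the default urgency.
            send_priority_update(stream_id, "u=3, i")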
§5. The Priority HTTP Header Field
T#5a) Interaction model: an example or mandatory?
It would help to start by explaining (perhaps in the Intro, rather than §5)
whether a priority message about a response can be initiated by a server or
intermediary if there was not a priority field attached to the request from the
client. I believe the draft intends this not to be possible, although this is
not stated normatively anywhere, and I don't know why such a restriction would
be imposed.
Actually, I believe it is essential that the protocol allows the server to
initiate priority messages, as absence of a message is currently the only way
for the client to say "I have no idea, you decide". Otherwise, if the server is
only allowed to follow the client, when the server knows the best order to
serve the objects (which I believe is often the case), the client still has to
request non-incremental objects in some order or other, and give them some
priority or other. So the server doesn't know whether the client actually knows
what it is doing, or whether it is just making up an ordering because it has
to, even tho' it has no clue.
Alternatively, could the client send a Priority header with no parameters? This
would indicate that the client wants the server to prioritize, and to allow the
server to tell intermediaries what to prioritize. (For more about clueless
clients, see T#9a) "Client scheduling".)
The abstract gives the only outline of the interaction model, but it's not
clear whether this is just an example of common usage, or the only possible
model.
§5 just says the priority field can be used "when a request or response is
issued". It goes on to state that the priority field is an e2e signal, but then
in the next sentence talks about how intermediaries can combine priority info
from client requests and server responses (which reflects what §8 says as
well). So "e2e" is clearly an over-simplification. I think it's e2e in one
direction but hop-by-hop in the other (supported by the description in the
abstract), ie. client -> server -> intermediary/ies -> client. It's also
possible that intermediaries are intended to (or at least allowed to) read but
do not alter the messages in the C-S direction, otherwise, what would they
'combine' with the priority field coming from the other direction?
Whatever, the interaction model(s) is never stated precisely. I've classified
this as a technical point, not just editorial, because I couldn't really assess
the completeness of many other technical details of the draft without knowing
the answer to this fundamental question.
If this scheme is implemented at the server, then all requests are treated as if they have an associated priority signal. This can be a Priority header field (note the serialization rules for a Structured Fields Dictionary in Section 3.2 of RFC 8941: sending an empty header is not permitted) or a PRIORITY_UPDATE frame. Omission of a signal, or omission of parameters within a signal, invokes the default priority parameters: urgency=3, incremental=false. A server is therefore able to determine the client's view of the priority.
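As a sketch of what that means for a server deriving the client's view, where parse_sf_dictionary() stands in for any RFC 8941 Structured Fields parser and is not a real API:

    def effective_priority(priority_field_value, parse_sf_dictionary):
        # Defaults apply when the signal, or a parameter within it, is omitted.
        urgency, incremental = 3, False
        if not priority_field_value:
            return urgency, incremental           # no signal at all
        params = parse_sf_dictionary(priority_field_value)  # e.g. {"u": 1, "i": True}
        u = params.get("u", urgency)
        if isinstance(u, int) and 0 <= u <= 7:
            urgency = u                           # out-of-range values fall back to the default
        if isinstance(params.get("i"), bool):
            incremental = params["i"]
        return urgency, incremental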
Editorial changes now in Section 10 should make it clearer that an HTTP server's job is to respond in a timely manner, and that it always has to decide how to use finite resources to do so. Clients can hint at a preference, but if they don't know or don't care, they are basically delegating the responsibility to the server.
The purpose of the Priority header in responses is to give origin servers (detached from the intermediary's connection to the client) the ability to also provide hints about prioritization.
The interaction model is described throughout the document, with a gist in the intro. Duplicating details into the intro does not seem beneficial.
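For the "combine" question specifically, the kind of merge Section 8 has in mind can be sketched as follows: parameters the origin puts on the response override the client's, and anything neither side stated falls back to the defaults. The dictionaries and function are illustrative only:

    def merge_priority(client_params, server_params):
        # Start from the defaults, apply the client's request parameters,
        # then let any parameters the origin put on the response override.
        merged = {"u": 3, "i": False}
        merged.update(client_params)   # e.g. {"u": 5} from the request
        merged.update(server_params)   # e.g. {"i": True} from the response header
        return merged

    # merge_priority({"u": 5}, {"i": True}) -> {"u": 5, "i": True}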
T#5b) Normative 'cannot'?
Clients cannot interpret the
appearance or omission of a Priority response header as
acknowledgement that any prioritization has occurred.
Was this intended to say 'Clients MUST NOT interpret...'?
Signals by design are just a hint. They can never be trusted and this sentence highlights that fact.
T#5c) Nothing said about caching and priority
The paragraph about caching and priority just ends having talked a bit about
caching but not about priority. It left me none the wiser about what a cache
ought to store about priority with the response. §13.8 talks about fairness
between multiple live connections in the presence of coalescing. But doesn't
the discussion of caching and priority here need to talk about what
must/should/may be stored about priority in a cache for later connections. Even
if it's implementation dependent, wouldn't it be worth a brief discussion (as
in the 2 paras below).
The priority of a response is the outcome of an interaction between the
client's original (e2e) priority combined with the server's logic about the
resource. If only the priority outcome is stored, then when another request
arrives at the cache from a different client, there will be no record of the
original client's priority. So the cache will not know what client priority
led to the priority stored with the response. And it will not know whether the
current client priority is the same or different.
On the other hand, if the cache stores the original client priority with the
response priority, then should it refer a request with a different (e2e) client
priority to the server, then store the new pair of priorities with the original
cached response? And I guess it could serve the request in parallel, rather
than waiting for the server to tell it whether to serve the request urgently
(!). This would probably scale reasonably well, given the likely small number
of different client priorities. But who knows how it would scale if the
parameter space is extended in future.
Answer supplied by Kazuho - As discussed in the last paragraph of Section 5, CACHING defines if and how requests with different header field values can be mapped to one response. If the capabilities provided by CACHING (i.e., Vary) are too limited, then we should fix that as an extension to CACHING (as has previously been proposed in draft-ietf-httpbis-key).
In practice, re Extensible Priorities, IMO, there aren't many sensible combinations of urgency and incremental. Therefore, backend servers that want to tune priority based on the value that the client sends can simply send Vary: priority and call it a day.
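For illustration, the response from such a backend might carry headers along these lines (the values are made up):

    # Illustrative backend response headers when the chosen treatment depends
    # on the client's Priority request header field:
    response_headers = [
        ("cache-control", "max-age=3600"),
        ("vary", "priority"),   # cache the response per request Priority value
        ("priority", "u=1"),    # the origin's own hint for downstream handling
    ]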
§9. Client Scheduling
T#9a) Client doesn't have prerequisite info about content sizes and dependencies
Consider a web page example with a load of non-incremental objects for the
client to request. It doesn't know their sizes, and it doesn't know which ones
might contain references to further objects to request. So it requests A,B,C,D.
In retrospect, it turns out that C was huge, and D referred to further objects
to download. How was the client to know it should have downloaded D before C?
To be effective, a scheduler needs to know object sizes and which objects will
identify further objects to be requested (dependencies). * Size is known by the
server but not by the client, at least not until the headers at the start of
the object arrive. * Dependencies are known by the server, but not by the
client until an object starts to unfold.
Why is the client made to choose the priorities of the responses? It doesn't
know any of this vital metadata about these objects. It can guess from file
types that JS and HTML probably ought to come first. But it knows little else.
So, as I already said under my question T#5a) about the interaction model, the
most important capability the client must have is the ability to say "I dunno,
you decide". But that's the one thing this draft doesn't allow the client to do
(at least I think it doesn't? see T#5a). For a list of non-incremental objects,
even if the client gives all their requests the same urgency, it can't send all
the requests at the same time - it has to choose which order to send them in,
even if it has no clue. This tells the server to respond in that order OR to
choose a different order. But the server doesn't know whether the client chose
this order deliberately or just because it didn't know any better.
Alternatively, there will need to be some way for the server to tell the client
what to prioritize _before_ it sends its requests (e.g. using extensions to the
HTML in a base HTML document).
As noted in the response to T#4c, we are constrained by the capabilities of information exchange that HTTP and its uses (such as the web) allow us. This is no different a problem than the one that existed for RFC 7540. Only a client knows how it wants to use resources about which it has limited knowledge. If we use an HTML document as an example, the subresources have a dependency chain that may or may not change while they get loaded. It's more likely that a client will request something with a particular priority because of its type and usage in a given HTML document, rather than its size (even if it knew that size). It's going to be rare that a client doesn't have an opinion; if the client doesn't know, the defaults are sufficient, and it can reprioritize the request higher or lower if it discovers the need once the response starts to arrive.
But really this all comes down to making all the actors aware of the challenges and stating that priority signals are just hints in the decision making. If a client finds that the scheduling choices a server makes under the defaults don't suit it, then it is empowered to provide better signals.
The general problem is not solvable, so I do not believe there is anything more we can add to the document.
§10. Server Scheduling
T#10a) Server push priority sounds like waffle
The discussion of priority for server push seems to say "This might not work".
If this conclusion is based on operational experience it ought to say so. And
if it's not, it ought to say that it's just conjecture.
The topic of server push is not helped by the fact that its deployment story, outside this I-D, is one of disappointment. But we're stuck with that feature in HTTP/2 and HTTP/3, and a server that chooses to use it while implementing this scheme has to make some choices. There's no case where push will fail, but there are cases that could cause it to perform badly. The text in paragraphs 10 and 11 provides considerations that a server that does implement server push will have to weigh (because things are _always_ contending for resources). That said, I don't think we need operational experience to conclude that if you push things incorrectly, you could really hurt what the client is trying to achieve.
§12. Retransmission Scheduling
T#12a) Sounds like more waffle
Similarly, if retransmission scheduling and/or probe scheduling has limited
operational experience or limited usefulness, it would be better to say so,
rather than trying to sound authoritative without really saying anything.
Again, this is something a server has to do anyway and we want to present the considerations at play. Our editorial change makes it sound less authoritative by nixing the sentence containing “...its effectiveness can be further enhanced…”.
§13. Fairness
T#13a) Please define fair. Seriously.
A prerequisite question is, "What parameters does a server scheduler
manipulate?" The text implies the server can only control the order in which it
starts a response to each request, and whether responses start while other
responses are in progress or wait for their completion. If so, I'm not sure
what fairness could mean.
Presumably a server can also determine the relative rate at which it sends
different streams. And it could stall a stream to allow another absolute
priority. In this context, fairness might mean instantaneously equal rates. But
that's not fair if the objects are of different sizes.
So we genuinely do need to see a definition of what fairness means here.
Good point; we addressed this in issue 1819: https://github.com/httpwg/http-extensions/issues/1819
T#13b) Why not make scheduling decisions across different clients?
As a general guideline, a server SHOULD NOT use priority information
for making scheduling decisions across multiple connections, unless
it knows that those connections originate from the same client.
Why does the IETF have anything to say about this? It's surely an operator
policy decision.
We disagree a bit with this, but we discussed it some more in issue 1820: https://github.com/httpwg/http-extensions/issues/1820
______________________________________________________________
==Editorial Comments==
General (all sections).
E#0a) Are precedence and priority interchangeable with urgency?
It's called a priority field, and the parameter is called urgency, but
sometimes the term priority or precedence is used to describe urgency. Please
go through the draft using priority or urgency consistently, and remove
precedence unless it's there for good reason.
§1. Introduction
E#1a) The "For example" in para 2 is actually the general point not the
example, whereas the example is in the previous para.
E#1b) The last para " The prioritization scheme and priority signals defined
herein can act as a substitute for RFC 7540 stream priority." would fit better
3 paras earlier, just after "HTTP/2 [HTTP2] has consequently deprecated the use
of these stream priority signals."
E#1c) It would help to state what the interaction model is for the priority
field. I believe it's not as simple as just "e2e" (see technical comment
earlier about §5). Although I've suggested that normative text about this ought
to be in the §5, it needs to be stated early on (probably in the intro, not
just the abstract), because the reader needs it to understand §§2 & 4.
§1.1. Notational Conventions
E#1d) Was HTTP/2 priority only from C-S?
The term HTTP/2 priority signal is used to describe the priority
information sent from clients to servers in HTTP/2 frames;
Neither RFC7540 nor the http2bis draft says that an HTTP/2 priority signal
cannot be sent by a server. It may be that this was the unstated intention but,
if it wasn't, the above sentence is incorrect.
§2. Motivation for Replacing RFC 7540 Priorities
E#2a) Not just absence
CURRENT:
compatibility (see Section 5.3.2 of [HTTP2]), which means that they
might still be used in the absence of alternative signaling, such as
the scheme this document describes.
PROPOSED:
compatibility (see Section 5.3.2 of [HTTP2]), because they
might still be used by other nodes.
REASONING:
7540 priority fields are not only used in the absence of alternative
signalling. They are used by the client in parallel to extensible priorities
before it receives the server's SETTINGS frame.
2.1.1. Advice when Using Extensible Priorities as the Alternative
E#2b)
CURRENT:
might be useful to nodes behind the server that the client is
directly connected to.
PROPOSED:
might be useful to a server behind the directly connected node.
REASONING:
Removes ambiguity - I initially read this as "nodes (behind the server) that"
rather than "...the server that".
§3. Applicability of the Extensible Priority Scheme
E#3a)
The priority scheme defined by this document considers only the
prioritization of HTTP messages and tunnels, see Section 9,
Section 10, and Section 11.
Is this sentence meant to be as mysterious as it sounds? If this document only
considers messages and tunnels, what else doesn't it consider? Is 'HTTP
messages' deliberately used instead of 'HTTP responses'? If it is wider than
just responses, then all the places in the draft where it says it is about http
responses need to be generalized (e.g. §4.1, §4.2, §8, §10 §14). But will this
make the draft so abstract that it becomes incomprehensible? Perhaps better to
explain here that it is applicable to a few other odd messages and tunnels, but
it is primarily about responses, so where the draft talks about responses, it
is not intended to preclude the other less-common cases.
It's not clear what the references to §§9,10,11 are there for. Are they further
info about tunnels (only §10 has one sentence on tunnels)? Or are they meant to
list all the sections about scheduling things? if so, why not also §12 on
scheduling retransmissions?
E#3b)
CURRENT:
they can also define how this priority scheme
can be applied.
PROPOSED:
they can also define how the present priority scheme
can be extended to support the new extension.
RATIONALE:
I think this is what was intended?
§4. Priority Parameters
E#4a) First use of reprioritize
Ought to explain long-hand why reprioritization might be needed here, given it
hasn't been mentioned yet.
E#4b)
Note that handling of omitted parameters is different
when processing an HTTP response
When what processes an HTTP response? The client? An intermediary? Both?
§4.1. Urgency
E#4c)
integer between 0 and 7, in
descending order of priority
The smaller
the value, the higher the precedence.
Suggest the latter is moved up 2 paras.
(BTW, what possessed anyone to define this in the most illogical way possible?
So that the highest numerical urgency means the lowest urgency?)
E#4d) Priority = Precedence = Urgency?
This section interchanges the words 'priority', 'precedence' and 'urgency'. Why
not just use urgency throughout? Otherwise you have to define that priority and
precedence mean exactly the same as urgency.
§4.3. Defining New Parameters
E#4e) Not new parameters for everything
Suggested replacement section heading: "Defining New Priority Parameters"
s/ When attempting to define new parameters,/
/ When attempting to define new priority parameters,/
§4.3.1. Registration
E#4f)
s/in Structured Fields Dictionary/
/in the Structured Fields Dictionary
§5. The Priority HTTP Header Field
E#5a)
s/carries priority parameters Section 4./
/carries priority parameters (Section 4)./
E#5b)
s/As is the ordinary case for HTTP caching [CACHING], a response with a
Priority header field might be cached /
/A response with a Priority header field might be cached [CACHING]/
§7. The PRIORITY_UPDATE Frame
E#7a)
s/which can can be bound by/
/which can can be bounded by/
(Oxford dictionary example '‘the ground was bounded by a main road on one side
and a meadow on the other’')
§8. Merging Client- and Server-Driven Parameters
E#8a)
First para: The 2nd example isn't a particularly good example of 'server knows
best'; it even admits that it's the visual-ness of the client that determines
the priority.
#8b) What is 'the logic being defined' meant to mean?
s/This is different from the logic being defined for the request header field,/
/This is different from the approach for the request header field,/
Perhaps?
§10. Server Scheduling
E#10a) No guidance is provided,... except for a page and a half
No guidance is provided about how this can or
should be done. ...
For these reasons, ... this document only provides some basic
recommendations for implementations.
(Contradictory.)
It seems like the rather over-negative caveats in the first 3 paras need to be
revisited now that the page and a half of recommendations (some normative) has
been added after them.
§10.1. Intermediaries with Multiple Backend Connections
E#10b)
s/inflight/in flight/
§11. Scheduling and the CONNECT Method
#E11a)
A
client that issues multiple CONNECT requests can set the incremental
parameter to true, servers that implement the recommendation in
Section 10 will schedule these fairly.
s/...true, servers.../
/...true. Servers.../
There are 3 recommendations in §10. Which one?
Given §10 starts with a load of caveats about how hard this stuff is to get
right, is it appropriate to assert with such certainty that scheduling will be
fair?
§12. Retransmission Scheduling
E#12a)
s/Section 6.2.4 of [QUIC-RECOVERY], also highlights/
/Section 6.2.4 of [QUIC-RECOVERY] also highlights/
§13.1. Coalescing Intermediaries
E#13a)
It is sometimes beneficial for the server running behind an
intermediary to obey to the value of the Priority header field.
This seems an odd sentence to have in this draft. I think it is meant to be in
the context of the previous sentence about coalesced requests with priorities
set by different clients. Needs rewriting, I think.
s/obey to/
/obey/
s/the Priority header field/
/each Priority header field/
s/as another signal in its prioritization decisions./
/as another input in its prioritization decisions./
§14. Why use an End-to-End Header Field?
E#14a)
s|Contrary to the prioritization scheme of HTTP/2|
|In contrast to the prioritization scheme of HTTP/2|
Contrary in this form has an implication that HTTP/2 was wrong. That might be
intended. But 'in contrast to' has less of a 'know-it-all' feel.
E#14b) Answering a different question
rather
than how relatively urgent each response is to others.
This seems to be a (weak) rationale for relative rather than absolute
priorities, whereas the section heading promises the rationale for an e2e
header. I suggest the whole first sentence of this 2nd para is deleted, because
the next sentence gives a sufficient rationale.
E#14c) Answering another different question
It should also be noted that the use of a header field carrying a
textual value makes the prioritization scheme extensible; see the
discussion below.
This is also not a rationale for an e2e header. Perhaps it belongs in the Intro?
Alternatively, the section title could be changed to "Rationale for Priority
Protocol Design" or something. Then it could give rationale for absolute
priority values and textual values as well as e2e header fields.
E14d) see what "discussion below"?
Perhaps this refers to the sentence in Security Considerations that refers to
[STRUCTURED FIELDS]? If so, it's hardly a "discussion".
§16. IANA Considerations
E#16a)
populate it with the
types defined in Section 4; see Section 4.3.1 for its associated
procedures.
I suspect IANA will prefer the exact text they should use to be written here.
Thanks for these; they were by and large great suggestions. See issue 1802 for how we tracked our responses: https://github.com/httpwg/http-extensions/issues/1802
Cheers,
Lucas
--
last-call mailing list
last-call@xxxxxxxx
https://www.ietf.org/mailman/listinfo/last-call