Hi.
Thanks for the responses.
Some comments below. I have removed the stuff that is agreed.
One very high level point occurred to me after I had done the review and
relates to the flow control mechanism. HTTP/2 is adding additional flow
control algorithms to a channel that already contains the TCP flow
control algorithms, and, if the aqm wg has its way, there will be a fair
number of routers and hosts with yet more flow control algorithms (not
necessarily all the same) in them. My dim remembrances of control
engineering tell me that linking up all these nested and/or serialized
control mechanisms with feedback processes results in a situation that needs
to be checked for stability. Since the document doesn't specify the
actual algorithms used, this is something that might have to be checked
out for each implementation. I have to say that we may be giving
ourselves a combinatorial nightmare or it may not be a problem at all -
and it isn't just HTTP/2's problem although your extra loop adds
complexity. It's something we will need to test.
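To make the nesting concrete, here is a rough sketch (my own illustration, not
text from the draft; the names are invented) of the two window levels HTTP/2
adds above TCP's own flow control - it is this layering that the stability
question applies to:

    # Sketch only: connection-level plus per-stream windows layered over TCP.
    class Http2FlowControl:
        def __init__(self, initial_window=65535):
            self.connection_window = initial_window
            self.stream_windows = {}                # stream id -> remaining credit

        def can_send(self, stream_id, length):
            # A DATA frame must fit within *both* the stream and connection windows.
            return (length <= self.connection_window and
                    length <= self.stream_windows.get(stream_id, 0))

        def on_data_sent(self, stream_id, length):
            self.connection_window -= length
            self.stream_windows[stream_id] -= length

        def on_window_update(self, stream_id, increment):
            # A WINDOW_UPDATE on stream 0 replenishes the connection window.
            if stream_id == 0:
                self.connection_window += increment
            else:
                self.stream_windows[stream_id] += increment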
Regards,
Elwyn
On 20/01/2015 07:17, Martin Thomson wrote:
Thanks for the thorough review Elwyn.
I've made changes in the form of a proposed change set against the
editor's copy.
https://github.com/http2/http2-spec/pull/692
You can review the changes at that link. Note that some of the typos
you caught were already fixed, or will be fixed in other proposed
change sets.
As is customary, I will await a sanity check and AD/shepherd approval
to merge the changes into the master copy. I expect that to occur
after the IESG discuss approval.
Right - as expected.
Responses inline.
On 19 January 2015 at 02:37, Elwyn Davies <elwynd@xxxxxxxxxxxxxx> wrote:
Summary:
Almost ready. A well written document with just a few nits really. I am
slightly surprised (having not been following httpbis in detail) that HTTP/2
is so tightly wedded to TCP - this is doubtless pragmatic but it adds to the
ossification of the Internet and makes me slightly suspicious that it is an
attempt to really confirm HTTP/2 as the waist of the neo-Internet. Can't we
ever use any other transport?
I think that - overall - the desire for the timely replacement of
HTTP/1.1 was stronger than the desire to attain perfection.
And rather than ossifying, the general view is that ALPN clears a path
to more changes in the future.
If you want to talk about the waist of the neo-Internet, I think
identifying HTTP more generally is appropriate; we're only really
upgrading a small part of it. And there are ongoing plans to continue
to upgrade the entire protocol. But our relevance is only defined by
what we ship, and this is a significant improvement that isn't worth
holding back for years while we ponder more fundamental changes.
A couple of minor issues: (1) I am not sure
that I see the (security) value of the padding mechanisms specified, but I
am not an expert so I will defer to the security experts, and (2) the
extension method for SETTINGS seems to be flawed. Apologies for the
slightly delayed review.
I'll address those below. And don't concern yourself with the delay,
it's not a small document.
Major Issues:
s3, para 1: Just checking: HTTP/2 is defined to run over TCP connections
only. I take it that this is something that the WG has formally decided
upon. Is there something that stops HTTP/2 running over any other reliable
byte stream protocol with in-order delivery? Could there be a more general
statement?
Such a statement was debated at some length. The current view (unless
I'm mistaken) was to recognize that HTTP/2 uses TCP. More definitive
statements were avoided to ensure that if a future document chose to
define how HTTP/2 used some other transport that wouldn't be too
disruptive.
And who said it needed to have in-order delivery?
Referring to the cited section, the accepted view on ALPN tokens is
that they identify a particular application protocol profile; a new
protocol that layered something that was partially or substantially
like HTTP/2 over another transport would have a new identifier.
I am not worried about changes to HTTP/2 - that is reasonably well
catered for by ALPN and the cleartext h2xxx version.
It's the WG's choice to limit transports to TCP: so be it. I'd have
liked to allow alternatives such as SCTP but that is a personal view.
Minor Issues:
s4.3, next to last para; s5, bullet #1:
(Just a bit more than a nit)
s4.3 says:
Header blocks MUST be
transmitted as a contiguous sequence of frames, with no interleaved
frames of any other type or from any other stream.
s5 says:
o A single HTTP/2 connection can contain multiple concurrently open
streams, with either endpoint interleaving frames from multiple
streams.
If s4.3 is correct (which multiple repetitions elsewhere indicate that it
is) then the bullet in s5 needs to reflect the constraint in s4.3 as they
are currently inconsistent.
Section 5 is summary information. I don't think that further
qualification is needed at this point; as you note, the constraint is
repeated frequently.
How about adding "subject to limited constraints" to the Section 5 bullet?
I am not quite sure whether the last para of s4.3 implies that the client
must wait until it has the whole header block before trying to decompress or
whether decompression might happen as fragments are received - please clarify
this in the text.
That's a clarification that belongs in the header compression
document. It appears in Section 3.1 of that document.
It would be worth an additional reference to [COMPRESS] where the
de-/en-coding context is introduced to emphasize that the details of the
compression process are included there.
s6.1 and s6.2: I am dubious about the value of the padding story in DATA and
HEADERS frames, even given the caveats in s10.7. An attacker can use the
header to deduce the padding section. It seems a bit odd to say that you
can add arbitrary padding and then insist it is all zeroes. However, I am
not an expert in this area and will defer to security experts.
Hard as it is to get right, padding here is important in some use
cases. We've had a lot of separate requests for the feature.
The all zeros thing is safe if your cipher provides IND-CCA, and that
is table-stakes.
You are right though about the arbitrary thing, I've removed the word
"arbitrary".
I have done a bit of homework on this including having a look at
draft-pironti-tls-length-hiding-01,
so I hope I understand this a little better now. Practically, I think
it would be helpful to move the last paragraph of s6.1 and combine it
with the second para that introduces the idea of padding, making the
security implications more upfront. I think I am correct in saying that
padding would be pointless if not using TLS: I think it would be worth
saying this in s6.1.
I think an (informative) reference to draft-pironti-tls-length-hiding-01
in s10.7 would be useful - I think it covers the 'redundant padding
could be counterproductive' story in the 2nd sentence and also the
statements in the last para of s10.7.
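For what it is worth, my reading of the padding mechanism in s6.1 is sketched
below (my own illustration, not normative text; the type and flag values are
taken from my reading of the draft and should be checked against it): the
PADDED flag is set, a one-octet Pad Length field precedes the data, and the
padding octets themselves are all zero.

    import struct

    DATA_FRAME_TYPE = 0x0    # frame type code, as I read the draft
    FLAG_PADDED = 0x8

    def padded_data_frame(stream_id, data, pad_length):
        # Payload: Pad Length octet, then data, then pad_length zero octets.
        payload = bytes([pad_length]) + data + b"\x00" * pad_length
        length = len(payload)
        header = struct.pack("!BHBBL",
                             (length >> 16) & 0xFF, length & 0xFFFF,   # 24-bit length
                             DATA_FRAME_TYPE, FLAG_PADDED,
                             stream_id & 0x7FFFFFFF)                   # R bit left clear
        return header + payload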
s6.5.3: If an endpoint sends a SETTINGS value that the receiving endpoint
doesn't understand (for an extension, say), the receiver will ignore it but
still send an ACK. This leaves the sending endpoint unaware that the other
endpoint didn't understand it. I suspect that this makes the extension
mechanism for settings effectively useless. I note that s6.5 says that
implementations MUST support the values defined in the draft so that the ACK
mechanism works fine for the base system, but any extension of SETTINGS
cannot be used to limit the behaviour of the receiving endpoint because the
sender can't rely on the receiver actioning the limitation; the only useful
thing might be to expand a limit that the sender would otherwise have to
conform to. It would be possible to alter the spec so that the receiver can
send back any values it didn't understand with a frame with ACK set - not
exactly negotiation but just rejection.
I don't think this is a problem. Most proposals that have been
floated use a reciprocal SETTINGS frame to signal that the value was
understood. The only failure mode there occurs when there is a
collision between two extensions that results in two conflicting views
of the setting semantics. Of course, that is equally possible with a
rejection-based scheme.
Sorry, but I don't understand your logic here. Let me try again:
If (say) the client asks for a SETTINGS value from some future extension
that the server doesn't understand, my interpretation is that the server
will ignore the value it doesn't understand while doing the ones that it
does understand and say 'ACK' to the client. I assume that the client is
then entitled to believe that the server has interpreted its requested
SETTINGS as expected and the client can do whatever the extension was
intended to allow. Depending on what the ignored SETTINGS are, this may
have bad consequences during later operations.
Accordingly, I think that the SETTINGS extension needs some way to tell
the requesting endpoint that the responding endpoint couldn't action
(part of) its requests. The requester can then act accordingly,
terminating the session or avoiding actions that exploit whatever
setting couldn't be actioned.
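To illustrate the failure mode I am describing (sketch only; the identifiers
are those I believe the draft defines, and the ack callback is just a
placeholder): the receiver applies the settings it knows, silently drops the
one it doesn't, and ACKs regardless, so the sender never learns that its
extension setting was ignored.

    KNOWN_SETTINGS = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6}   # identifiers defined in the draft

    def handle_settings_frame(pairs, apply_setting, send_settings_ack):
        for identifier, value in pairs:
            if identifier in KNOWN_SETTINGS:
                apply_setting(identifier, value)
            # else: unknown (extension) setting is silently ignored
        send_settings_ack()   # ACK goes out either way; the sender can't tell the difference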
Editorial/Nits:
Abstract:
allowing multiple
concurrent messages on the same connection.
Technically the *messages* are not on the connection concurrently. How
about:
allowing a server to simultaneously process multiple requests submitted
via a single
connection with arbitrary ordering of responses.
Technically, it is possible to have multiple messages concurrently in
progress; it is only frames that don't interleave.
Would s/messages/exchanges/ work for you? I value brevity over
absolute correctness for this sort of language.
Yes. The new version looked good.
s2, last para:
allowing many requests to be compressed into one TCP packet.
At this point in the document, nothing has been said about TCP.
Better:
allowing many requests to be compressed into one transport connection
packet.
Or, I could simply s/TCP//
Yup.
s2.2: I think definitions or references or deprecations for message/message
body/message payload/message headers/payload data and entity/entity
headers/entity body are needed. With HTTP/2 the distinction between message
body and entity body is no longer needed I think. It appears that message
body is not currently used but message payload, payload data and entity body
are. I think that it might be useful to stick to message payload in the
body of the draft and add a note in terminology to indicate that message
payload covers both message body and entity body in HTTP/1.x and stress that
there is only one encoding.
I've referenced the RFC7230 definitions of message body and payload
body, which have very specific meaning there. There were a couple of
places that might have been ambiguous that I've corrected.
I checked the changes and I think this is now all cleared up. Thanks.
s3.2.1, para 4:
OLD:
production for "token68"
NEW:
production for "token68", allowing all the characters in a base64url string,
Hmm.
<<snip>>
s3.5, paras 2-5: It would be worth explaining that what is being done here
is to send what would be a method request for the PRI method which will not
be recognized by well-implemented HTTP/1.x servers. The PRI method is then
reserved for HTTP/1.1 so that it can't be accidentally implemented later
(see Section 11.6).
We've discussed providing explanatory text of this nature in the
working group on a few occasions, but avoided it. In this case, the
main beneficiaries of that information are people who aren't reading
and implementing this specification, so we decided against more
verbiage to that effect.
Well.. with my gen-art hat firmly in place, part of the object of RFCs
is that they shouldn't be totally opaque to those who are reading them,
even those who aren't actually experts implementing the protocol. My
suggestion for an explanation would help those.. I didn't immediately
twig why the string was structured as it is.
s4.1, Figure 1 (and other figures): The frame header layout doesn't match
the usual conventions for protocol component specifications where each 32
bit 'word' is filled out. I'm not sure how strictly we would want to adhere
to this convention... consult an AD.
<<snip>>
s4.1, Flags: For the avoidance of doubt it would be good to label the flag
bits with the numbers that are used later in the document.
Not sure what you mean here. Adding "|7 Flags(7) 0|" to the diagram
seems more likely to be confusing to me.
Adding a separate box with the flag identifiers would be one way. It
occurs to me that you have actually got a fixed allocation of flags to
bits. You could list the bit names in s4.1 in practice, but it's
probably too late for this.
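For reference, the fixed flag-to-bit allocation I am thinking of looks roughly
like this (values as I read them from the frame definitions in Section 6;
illustrative only, please check against the text):

    FLAG_END_STREAM  = 0x1    # DATA, HEADERS
    FLAG_ACK         = 0x1    # SETTINGS, PING
    FLAG_END_HEADERS = 0x4    # HEADERS, PUSH_PROMISE, CONTINUATION
    FLAG_PADDED      = 0x8    # DATA, HEADERS, PUSH_PROMISE
    FLAG_PRIORITY    = 0x20   # HEADERS

    def has_flag(frame_flags, flag):
        return bool(frame_flags & flag)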
s4.1, Stream Identifier: s5.1 and s5.1.1 say the identifiers are integers -
this should be reflected here. Also consistency between s4.1 and s5.1.1
should have the reserved id as either 0 or 0x0 (OK, this is really picky!)
Done.
s5, bullets #2 and #5:
o Streams can be established and used unilaterally or shared by
either the client or server.
o Streams are identified by an integer. Stream identifiers are
assigned to streams by the endpoint initiating the stream.
[Is there a risk of a race condition in which one end allocates a stream id
and sends using it and the other end allocates the same id for a different
purpose while the first frame is in transit? Maybe clients should always
use odd numbered streams and servers even numbered .. or is this a use for
the reserved bit to be used by servers?]
OK.. I correctly identified the possibility of race conditions.. Please add
a pointer to s5.1.1 which tells you to do what I just proposed. Another
pointer in s4.1 definition of stream id would also help.
5.1.1 is as early as I can put that definition. The introductory text
is just an overview; I don't think that it needs to be complete,
meticulously correct, or fully referenced. It merely needs to
establish the purpose of Section 5: which is to more precisely
describe how the enumerated properties are realized.
Section 4.1 already references 5.1.1. I consider that sufficient.
Right.. my mistake.
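For the record, the s5.1.1 rule that avoids the race can be expressed very
simply (sketch only, my own wording):

    def next_stream_id(last_id, is_client):
        # Client-initiated streams use odd identifiers, server-initiated even,
        # and identifiers must be numerically greater than any previously used.
        if last_id == 0:
            return 1 if is_client else 2
        return last_id + 2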
s5.1:
OLD:
half closed (local):
A stream that is in the "half closed (local)" state cannot be used
for sending frames. Only WINDOW_UPDATE, PRIORITY and RST_STREAM
frames can be sent in this state.
NEW:
half closed (local):
A stream that is in the "half closed (local)" state cannot be used
for sending frames with the exceptions of WINDOW_UPDATE,
PRIORITY and RST_STREAM frames.
Thanks.
Also: Would it be worth saying that any type of frame can be received in
this state?
Good catch. Added:
"An endpoint can receive any type of frame in this state. Providing
flow control credit using WINDOW_UPDATE frames is necessary to
continue receiving flow controlled frames."
s5.1:
half closed (remote):
A stream that is "half closed (remote)" is no longer being used by
the peer to send frames. In this state, an endpoint is no longer
obligated to maintain a receiver flow control window if it
performs flow control.
Needs a pointer to the section where the flow control window is discussed.
I disagree. Though your quoting made me notice a mistake here. The
"if" clause needs to go (it's a vestige of a previous iteration of the
protocol that didn't make flow control mandatory).
Fine.
<<snip>>
s5.1, next to last para; s5.5, para 3, s6.2, END_HEADERS, para 2:
s5.1 says:
OLD:
Frames of unknown types are ignored.
s5.5 and s6.2 qualify this by saying that frames of unknown types inside
HEADER frame sequences have to be treated as connection errors. The two
sections need to be synced up. Maybe:
NEW:
Frames of unknown types are ignored except when they occur in HEADER frame
sequences when they have to be treated as a connection error of type
PROTOCOL_ERROR (see Sections 5.5 and 6.2).
But then you missed PUSH_PROMISE. The CONTINUATION business is
important, but not important enough to turn a clear and concise
statement into a paragraph. I have little concern that the feature
will be accidentally missed.
Perhaps not, but I spent some time worrying about the inconsistency so
others may..
can we compromise on:
NEW:
Frames of unknown types are ignored except when they occur in CONTINUATION
frame sequences (see Sections 5.5, 6.2 and 6.6).
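Expressed as behaviour (sketch only, my own illustration; the CONTINUATION
type code is as I read it from the draft), the compromise wording amounts to:

    class ProtocolError(Exception):
        """Stands in for 'connection error of type PROTOCOL_ERROR'."""

    CONTINUATION = 0x9    # frame type code, per my reading of the draft

    def dispatch_frame(frame, expecting_continuation, handlers):
        if expecting_continuation and frame.type != CONTINUATION:
            # A header block is in progress: anything else is a connection error.
            raise ProtocolError("header block in progress (s5.5, s6.2)")
        handler = handlers.get(frame.type)
        if handler is None:
            return            # frames of unknown types are ignored
        handler(frame)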
s5.1.1, last para/s3.4, last para, s9.1, para 3: Having to open a new
connection when stream ids run out interacts with the point made in s3.4
that there is no guarantee that the new connection will support HTTP/2. It
might be worth pointing this out in s5.1.1 and s9.1 - it means the client
has to be able to switch back to HTTP/1.1 in the very rare case that this
happens. Thought: Would it be useful for a server to have a SETTINGS flag
that guarantees that it would do HTTP/2 on all subsequent connections?
Yes, and the fallback isn't necessarily pretty. At least not for
performance. I think that the view is that this is a low risk
scenario and not worthy of special attention.
Hmm.. On the other hand, in the rare case that it does happen it might
bite the client nastily if not catered for. I think it is worth a short
sentence in s5.1.1 at least.
As for the setting, a promise like that would be hard to keep, since
to some extent this is out of the control of the server. There is
always a risk that a different intermediary could be present next
time.
True. Not such a bright idea!
s5.3.1, para 1: To be absolutely explicit: s/on another stream/on exactly
one other stream/
Being too precise here risks misinterpretation in the opposite
direction: since there is a dependency graph, a stream actually
transitively depends on 0..n other streams. I don't think that it's
worth risking the interpretation that - because there is a dependency
on exactly one stream - the tree is at most one layer deep.
Fair point. Forget this then.
s6.2, END_STREAM flag, 2nd para: s/A HEADERS frame carries/A HEADERS frame
can carry/
It always carries the flag; it is whether it is set that matters.
In fact, isn't the first sentence of s6.2, para 2 redundant? The
previous para just said it signals stream end. I think you can delete
it completely.
<<snip>>
s8.1.3: For clarity, the convention "+ END_HEADERS"/"- END_HEADERS" should
be explained.
The text includes an explanation of the presence of flags. Since a
great deal of inference is required to make sense of other aspects of
the examples, I think that it's OK as is, and avoiding more paragraphs
is nice too.
OK
s8.1.3, last example: It might be worth noting that there could be
significant time between the two responses being sent.
I don't think that there is much value to that here.
.. and I do, but ultimately it's not up to me!
<<snip>>