Last comment first ...

> We are getting there, but I would ask that you take the transport
> hat off and look again from an infrastructure and packet transport
> perspective.

I don't view this as looking at it from a transport vs. infrastructure
perspective. And, I am not disagreeing with your perspective. My take is
that the nub of what you're saying is that there are cases where we know
something about the network, and that something lets us design a more
savvy loss detection and response scheme. E.g., because the link / path
is known to be short and so using an initial RTO of 1 sec is too long.
E.g., because the cause of loss is known, or can be safely assumed, to
not be congestion.

And, I think that view is both correct and reasonable. However, ...

(0) I do not view that view as inconsistent with this document at all.

(1) The fact that there are cases where we know more doesn't make a set
    of default requirements for the general case, where we don't
    understand the path, any less valid.

(2) The document explicitly says alternates are fine modulo the usual
    consensus. I.e., in cases where we have more information we can do
    things differently, and the cost of that is no different than the
    cost today (i.e., specifying it and gaining consensus).

So, my view is that this all boils down to making it clear that this is
not somehow THE (best) way to do time-based loss detection for all
cases. Rather, following the guidelines will result in a
safe-for-general-use loss detector.
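For concreteness, the classic TCP retransmission timer (RFC 6298) is one
well-known instance of the kind of RTT-driven, conservative-by-default
loss detector being discussed here. Below is a minimal, purely
illustrative Python sketch of that style of computation; the constants
are RFC 6298's, none of this is text from the document, and the
1-second initial value / floor is exactly the sort of default a
known-short path might argue down via the alternate-with-consensus
route:

    # Sketch of an RFC 6298-style retransmission timer (illustrative only).
    class RtoEstimator:
        ALPHA, BETA, K = 1/8, 1/4, 4        # RFC 6298 smoothing gains
        G = 0.001                           # assumed clock granularity: 1 ms

        def __init__(self):
            self.srtt = None                # smoothed RTT (seconds)
            self.rttvar = None              # RTT variation (seconds)
            self.rto = 1.0                  # initial RTO: 1 second, no sample yet

        def on_rtt_sample(self, r):
            """Update the timer from a fresh RTT measurement r (seconds)."""
            if self.srtt is None:           # first measurement
                self.srtt, self.rttvar = r, r / 2
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
            rto = self.srtt + max(self.G, self.K * self.rttvar)
            self.rto = min(max(rto, 1.0), 60.0)   # 1 s floor; optional cap (>= 60 s)

        def on_timeout(self):
            """Back off exponentially when the timer fires without an ACK."""
            self.rto = min(self.rto * 2, 60.0)

The point is not this particular arithmetic; it is that the defaults
(the 1-second initial RTO, the floor, the exponential backoff) are
conservative precisely because the general case assumes no knowledge of
the path.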
> In the general case, delay across a network path depends not only on
> distance, but also a number of variable components such as the route
> and the level of buffering in intermediate devices.
>
> It is more the contending/conflicting traffic rather than the
> buffering, or perhaps the time spent in queues, but “buffering” is a
> transport colloquial term.

Per this and Gorry's note, I will tweak this to use queuing, as that is
what I meant.

> Perhaps we could include a clearer disclaimer regarding the
> non-best-effort-internet-end-to-end traffic? You have some text on
> this down in section 2 but it is a bit buried.

OK, let me see if I can foreshadow this a bit more and/or pull some from
section 2 to earlier.

> An exception to this rule is if an IETF standardized mechanism
> determines that a particular loss is due to a non-congestion
> event (e.g., packet corruption).
>
> That is a bit heavy. It should be “a protocol” there rather than an
> IETF standardized mechanism. The IETF does not have a monopoly on
> pre-blessing protocols before they are deployed.
> [...]
>
> In some cases you cannot tell the cause, but it is more important to
> ignore the loss. OAM being a particularly good example.

First, I don't think I can readily change this without going against the
consensus the document has already gathered. (I.e., this is not about
the intro, framing, context, etc. bits, but the actual meat of the
technical stuff.)

Second, you are right that the IETF does not have a monopoly, but that
doesn't make the statement in the document the wrong thing to say.

Third, I doubt I should change it. The problem here, from the standpoint
of a set of default guidelines, is that we'd like a mechanism that
determines the cause of loss to **actually work** before it's OK to
avoid a congestion control response. If a standardized mechanism is used
then we have some confidence that the mechanism has been vetted as
reasonable. If the mechanism is not standardized then we have no idea
whether it actually works or whether it is some ill-conceived scheme
that is badly broken. In that latter case, I don't think we want to
bless the approach as OK within the default. It may be OK, but it should
get some consensus that it is OK.

allman