> From: "Eggert, Lars" <lars@xxxxxxxxxx>

> I would like the document to specify at the very least a circuit
> breaker mechanism, that stops the tunneled traffic if severe packet
> loss is detected along the path.

I think people are looking at this from the wrong perspective, focusing on UDP and what its specs say, rather than on the larger engineering picture.

Envision the following 4 (or more) scenarios for one Border Tunneling Router (BTR), BTR A, to send packets to another BTR, BTR B, on the path from ultimate source S (somewhere before BTR A) to destination D (somewhere after BTR B):

- Plain IP
- Some existing encapsulation, such as GRE
- A new, custom encapsulation
- Encapsulation using UDP

What you seem to be claiming is that in case 4 we need congestion detection and response at the intermediate forwarding node BTR A - but that it would not be required in cases 1-3? This makes no sense.

Even better, suppose that BTR A implements _both_ one of the first three _and_ UDP encapsulation. If its response to UDP congestion on the path to BTR B is to... switch to a _different_ encapsulation for traffic to that intermediate forwarding node, one for which it is not required to detect and respond to congestion, did that really help?

Similarly, suppose people doing tunnels ditched UDP in favor of some other encapsulation (assuming they could find something that would get through as many filters as UDP does, would have the same load-spreading properties that UDP does, etc., etc. - or maybe not, if that's the price they have to pay to be free of the grief they are getting for using UDP). Would that do anything at all about any potential congestion from their traffic? No, it would obviously still be there.

Look, the Internet's current architectural model for dealing with congestion is that the _application endpoints_ have to notice it, and slow down.
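(For concreteness, here is roughly what the proposed circuit breaker amounts to - a minimal sketch only, with all class names, thresholds, and the receiver-report feedback mechanism being illustrative assumptions, not anything from the document under discussion: the ingress BTR counts packets it sends into the tunnel, the egress periodically reports how many arrived, and the ingress stops forwarding once measured loss stays severe for several intervals.)

```python
class TunnelCircuitBreaker:
    """Illustrative tunnel circuit breaker: trip (stop sending) when the
    measured loss ratio stays above a threshold for several consecutive
    measurement intervals. Names and defaults are made up for this sketch."""

    def __init__(self, loss_threshold=0.1, trip_after=3):
        self.loss_threshold = loss_threshold  # loss ratio considered "severe"
        self.trip_after = trip_after          # consecutive bad intervals before tripping
        self.sent_in_interval = 0
        self.bad_intervals = 0
        self.tripped = False

    def on_packet_sent(self):
        # Called by the ingress BTR for each packet put into the tunnel.
        self.sent_in_interval += 1

    def on_receiver_report(self, received):
        # Called once per measurement interval with the egress BTR's
        # count of packets that actually arrived.
        sent, self.sent_in_interval = self.sent_in_interval, 0
        if sent == 0:
            return
        loss = 1.0 - (received / sent)
        if loss > self.loss_threshold:
            self.bad_intervals += 1
        else:
            self.bad_intervals = 0  # one good interval resets the count
        if self.bad_intervals >= self.trip_after:
            self.tripped = True  # stop the tunneled traffic

    def may_send(self):
        return not self.tripped
```

Note that even this sketch only _stops_ the tunnel; it does not make BTR A into a congestion-controlled endpoint, which is exactly the distinction at issue below.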
Intermediate forwarding nodes don't have any particular responsibility other than to drop packets if they have too many.

You are quite right that an application that doesn't detect and respond to congestion (perhaps because it was written for a local environment, and some bright spark is tunnelling that L2 protocol over the Internet) can cause problems - but that's because we are violating the Internet's architectural assumption about how/who/where congestion control is done.

I don't have any particularly brilliant suggestions for responding to situations in which applications don't detect and respond to congestion. Architecturally, if we are to keep to the existing congestion control scheme (endpoints are responsible), the responsibility has to go back to the ultimate source of the traffic somehow...

But saying that _intermediate forwarding nodes_ have to detect downstream congestion, and respond, represents a fundamental change to the Internet's architecture for congestion control.

Noel