> It may well be that having applications be more brittle would be an
> acceptable cost for getting a viable multihoming approach
> that addresses the route scalability problem. (All depends on
> what "more brittle" really means.) But the only way to answer
> such questions in a productive manner is to look pretty
> closely at a complete architecture/solution together with
> experience from real implementation/usage.

I agree. For instance, the cited DNS problems often disrupt communication even when there is a problem-free IP path between points A and B, because DNS relies on third parties outside the packet forwarding path.

But 3rd parties can also be used to make things less brittle. For instance, an application whose packet stream is being disrupted could call on 3rd parties to check whether there are alternative trouble-free paths, and then reroute the stream through a 3rd-party proxy. If a strategy like this were built into the lower-level network API, then an application session could even survive massive network disruption, as long as the disruption was cyclic.

I have in mind the way that Telebit modems used the PEP protocol to test and use the communication capability of each one of several channels. As long as there was at least one channel available, and the periods of no-channel-availability were short enough, you could get end-to-end data transfer. On a phone line which was unusable for fax, and on which the human voice was completely drowned out by static, you could still get end-to-end UUCP email transfer.

A lot of work related to this is being done by P2P folks these days, and I think there is value in defining a better network API that incorporates some of this work.

--Michael Dillon

_______________________________________________
Ietf@xxxxxxxx
https://www.ietf.org/mailman/listinfo/ietf
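
[Editor's note: the failover strategy described in the post (probe alternative paths, reroute through a 3rd-party proxy, retry while the outage is intermittent) can be sketched roughly as follows. This is a minimal illustration, not any real API; the names `Path` and `send_resilient` and the probe callbacks are all hypothetical.]

```python
import time

class Path:
    """One candidate route to the peer: the direct path, or via a
    3rd-party proxy. `is_up` is a callable standing in for a real
    connectivity probe (e.g. asking a third party to test the path)."""
    def __init__(self, name, is_up):
        self.name = name
        self._is_up = is_up

    def probe(self):
        return self._is_up()

def send_resilient(data, paths, retries=3, backoff=0.0):
    """Try each candidate path in order; if none is currently usable,
    wait and retry, on the theory that the disruption is cyclic
    (PEP-style channel testing). Returns the name of the path that
    accepted the data, or raises ConnectionError."""
    for attempt in range(retries):
        for path in paths:
            if path.probe():
                # A real implementation would forward `data` over
                # this path (directly, or via the proxy) here.
                return path.name
        time.sleep(backoff)  # wait out a hopefully short outage
    raise ConnectionError("no trouble-free path found")
```

For example, if the direct path is down but a proxy path is up, `send_resilient` falls through to the proxy instead of failing the session, which is the "less brittle" behavior the post argues a lower-level network API could provide.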