Re: e2e

On Aug 20, 2007, at 3:16 PM, Hallam-Baker, Phillip wrote:
> Which is what prompted the original point I made in the plenary: when someone is using the end to end principle to slap down some engineering proposal they don't like I would at least like them to appear to have read the paper they are quoting as holy scripture.

hear, hear...

> The Internet has changed and we need to recognize that there may be different circumstances that lead to the desirability of a different approach. My belief is that the most appropriate architecture for today's needs is a synthesis which recognizes both the need for the network core to be application neutral and the need for individual member networks of the Internet to exercise control over their own networks, both to protect the assets they place on their network and to protect the Internet from abuse originating from or relayed through their network.

The funny thing is that I'm not convinced that this is a change.

As I have said in this and other fora, there is an argument for functional complexity in the network, and routing is its poster child. Routing could be done entirely from the edge, with the network itself unaware of how to route across it; the obvious algorithms one might use are well known, but it would be very difficult for a service provider to make any guarantees to its customers in such a design. We don't do that - we do indeed treat routing as an acceptable form of complexity that leaves significant state in the network - precisely because we find commensurate value in the functionality provided.
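
To make that concrete, here is a toy distance-vector computation in Python. The topology, node names, and link costs are invented for illustration, and no particular protocol is implied; the point is only that hop-by-hop forwarding leaves a (next-hop, cost) entry for every destination at every node - exactly the state a purely edge-driven, source-routed design would keep out of the network.

    # Toy sketch (no particular protocol): hop-by-hop routing keeps
    # per-destination state at every node, where source routing would
    # instead have the edge host carry the full path in each packet.
    # The topology and costs below are invented for illustration.

    # Undirected links with costs: (node_a, node_b, cost)
    LINKS = [
        ("A", "B", 1), ("B", "C", 2), ("A", "D", 5),
        ("C", "D", 1), ("B", "D", 4),
    ]

    NODES = sorted({n for a, b, _ in LINKS for n in (a, b)})
    NEIGHBORS = {n: {} for n in NODES}
    for a, b, cost in LINKS:
        NEIGHBORS[a][b] = cost
        NEIGHBORS[b][a] = cost

    def distance_vector_tables():
        """Bellman-Ford-style relaxation to a fixed point; each node
        ends up holding a (next hop, cost) entry for every destination."""
        INF = float("inf")
        dist = {n: {d: (0 if n == d else INF) for d in NODES} for n in NODES}
        nexthop = {n: {} for n in NODES}
        changed = True
        while changed:
            changed = False
            for n in NODES:
                for m, link_cost in NEIGHBORS[n].items():
                    for d in NODES:
                        via = link_cost + dist[m][d]
                        if via < dist[n][d]:
                            dist[n][d] = via
                            nexthop[n][d] = m
                            changed = True
        return dist, nexthop

    dist, nexthop = distance_vector_tables()
    for n in NODES:                      # one line of state per router
        print(n, {d: (nexthop[n][d], dist[n][d]) for d in NODES if d != n})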

RFC 61 (August 1970), quoting a paper by L. Roberts and B. Wessler in the same year, observes that

   "A resource sharing computer network is defined to be a set of
   autonomous, independent computer systems, interconnected so as to
   permit each computer system to utilize all of the resources of each
   other computer system.  That is, a program running in one computer
   system should be able to call on the resources of the other computer
   systems much as it would normally call a subroutine."

Here, the author is clearly thinking of processes on computers. But just the following year, RFC 164 reported that the Air Force wanted to build an autonomous network; in 1979 the SRI NIC commented (RFC 756) on the need for autonomous name servers in various networks, a line of reasoning that led to the present DNS a few years later; and in 1983 RFC 820 reported the assignment of the first Autonomous System Numbers, used by RFC 827's EGP to separate autonomous routing systems at the edge from the BBN-operated ARPANET.

From my perspective, it has been at least 25 years, if not 37, since we recognized "both the need for the network core to be application neutral and the need for individual member networks of the Internet to exercise control over their own networks". What we have struggled with since is the recognition that while the core needs to be application-neutral, it isn't necessarily service-neutral. The development of two QoS architectures (Integrated Services and Differentiated Services), the traffic engineering in MPLS, and the amount of bickering that has gone into declaring such things inappropriate ("just get enough bandwidth and all that goes away") have all been about the assertion that the core should be application-neutral vs the assertion that it needs to be able to offer specific services.
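
As a minimal sketch of that distinction, consider a DiffServ-flavored node in Python. The codepoints, class names, and weights below are illustrative assumptions rather than any deployed configuration; what matters is that the node classifies on the DSCP field alone - it never looks at ports or payload - so it remains application-neutral while still offering distinct services.

    # Minimal sketch of "application-neutral but not service-neutral",
    # loosely modeled on a DiffServ-style node. The class names, DSCP
    # values, and weights are illustrative assumptions only.

    from collections import deque

    # Per-hop behaviors keyed by DSCP codepoint; the node never inspects
    # ports or payload, so it stays neutral about applications while
    # still offering distinct services.
    PHB_BY_DSCP = {
        46: "EF",       # e.g. voice (expedited forwarding)
        34: "AF41",     # e.g. video
        0:  "BE",       # best effort
    }
    QUEUES = {"EF": deque(), "AF41": deque(), "BE": deque()}
    WEIGHTS = {"EF": 4, "AF41": 2, "BE": 1}   # crude weighted service

    def enqueue(packet):
        """Classify on the DSCP field alone; unknown codepoints fall to BE."""
        phb = PHB_BY_DSCP.get(packet["dscp"], "BE")
        QUEUES[phb].append(packet)

    def drain():
        """One scheduling round: serve each queue up to its weight."""
        served = []
        for phb, weight in WEIGHTS.items():
            for _ in range(weight):
                if QUEUES[phb]:
                    served.append(QUEUES[phb].popleft())
        return served

    # The node offers different services without knowing (or caring)
    # what application produced each packet.
    for dscp in (0, 46, 0, 34, 46, 0):
        enqueue({"dscp": dscp, "payload": b"opaque"})
    print([p["dscp"] for p in drain()])   # high-priority classes first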

I read in the paper this morning that the "Web" is running out of bandwidth. The article quoted Metcalfe and others on their predictions of the death of the Internet, and noted that Bob finally blenderized his column in front of an audience. My company figured in the article. So did the advent of video content, which requires significantly more bandwidth and significantly lower loss rates than more traditional voice services or TCP-based applications. Had the author had a clue, s/he would have said something about the Internet once again changing its fundamental service: from enabling terminal service, to moving files using FTP and Network News, to moving smaller files using SMTP and HTTP, to basic audio services, to peer-to-peer file sharing, and now to video. Not that the old goes away right away; we add on, and the old becomes less important.

What we need to do is figure out how to let the intelligent network core work cooperatively with the intelligent edge, so that each can do intelligent things. Right now, the core and the edge are ships in the night, passing and occasionally bumping into each other. No, we don't want unnecessary intelligence in the core. But, as with routing, I will argue that some of it is constructive.
