The Internet architecture

Title: RE: [BEHAVE] Lack of need for 66nat : Long term impact to application developers
[Behave dropped, this is an architectural level discussion. If we don't want this on ietf we should start a different forum]
 
I don't think that this claim is right: "Promoters of NAT, particularly vendors, seem to have a two-valued
view of the network in which inside is good and outside is evil."
 
Let's go back to the original concept of an inter-network as something that is different from a network. Here I think you will find that the writings of Clark, Postel, Crocker and co. make a clear distinction. A network is under single administrative control while an internetwork is not: there is no one party that you can rely on to synchronize major changes. That is in fact the reason that the IPv4-to-IPv6 transition is a really hard problem.
 
So the distinction here is not between 'good and evil'; it is between that which I am responsible for and that which someone else is responsible for. And that is why I expect that there will always have to be some form of perimeter control device in future network architectures.
 
Now let's turn to the issue that I raised in the plenary: encapsulation.
 
Encapsulation is an important principle in object oriented programming precisely because having rules about what is allowed to depend on what makes it much easier to innovate in useful ways in the future. Discipline is not incompatible with creativity.
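To make the analogy concrete, here is a minimal Go sketch of encapsulation in the object-oriented sense (the type and names are invented purely for illustration): callers can depend only on the exported methods, never on the internal representation, so the representation is free to change later without breaking them.

package main

import "fmt"

// Connection hides its internal representation; callers can depend only on
// the exported methods, so the representation is free to change later.
type Connection struct {
	endpoint string // unexported: callers cannot reach in and rely on it
}

// Open is the one sanctioned way to obtain a Connection.
func Open(endpoint string) *Connection {
	return &Connection{endpoint: endpoint}
}

// Describe exposes behaviour, not representation.
func (c *Connection) Describe() string {
	return fmt.Sprintf("connection to %s", c.endpoint)
}

func main() {
	c := Open("example.com:443")
	fmt.Println(c.Describe()) // the caller never sees how the endpoint is stored
}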
 
Today the Internet architecture looks a bit like a program written in COBOL. The program works and performs an important function, but it is not written the way we would write it today, knowing what we know now. And the reason is much the same: COBOL was research, the Internet was research. The objective of research is to know more at the end than you did at the beginning.
 
In particular, the biggest problem we face in the Internet is precisely the fact that it is very hard to change and to innovate. The reason people turned to object-oriented programming was that encapsulation led to programs that were easier for others to maintain. This is also the reason that we use the single inheritance model of Java and C# rather than the multiple inheritance model of C++ or LISP.
 
What I am looking for here is an encapsulation that allows the application programmer to rely on a small set of simple assumptions and be maximally confident that the result will be compatible with the network of the future.
 
That is not stifling creativity or the 'generative' power of the Internet. On the contrary, every standards process involves removing choices from the programmer. We decide on particular RFC 822 headers to use, we decide on particular semantics; above all, we decide on one way to do something, not five.
 
The question here, then, is not whether an assumption is likely to be true; it is whether it is necessary.
 
Over the next few years application programs are going to have to cope with a minimum of four separate transport protocols even if there is no NAT whatsoever: IPv4, IPv6, IPv46, IPv64. Add NAT in and we have four different variations of IPv4 alone: IPv4, IPv44-4, IPv44-4-44, IPv4-44. And that is before we start thinking about multiple layers of NAT.
 
To me this argues that the only credible position is that the application protocol makes absolutely no assumptions whatsoever on the basis of the IP-layer address. In fact it would be best for the O/S to provide an API which hides the details of the IP address from the application entirely, in the same way that the Ethernet MAC address is hidden.
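As a rough sketch of the kind of API discipline I mean, consider a Go client that names its peer by host and port and lets the stack choose IPv4 or IPv6 (or whatever translation sits on the path) underneath. This uses Go's standard net.Dial purely as an illustration, and the host name is a placeholder; the point is that the application never sees, stores or compares an IP address.

package main

import (
	"fmt"
	"net"
)

func main() {
	// The application names the peer by host and port; the stack picks the
	// address family and deals with whatever translation sits on the path.
	conn, err := net.Dial("tcp", "www.example.com:443")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	// No IP address is ever inspected or stored, so this code is indifferent
	// to which IP version, or which NAT, carried the packets.
	fmt.Println("connected")
}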
 
What we absolutely cannot afford is allowing people to insert 'assumptions' into the application-layer design that simply are not true, for the purpose of mandating support for some particular network architecture. This is Tinkerbell logic: wishing hard enough does not make true something that is not.
 
In the original inter-network design, the Internet Protocol was mandated only as the protocol for use between networks. The idea that IP has to be end-to-end is a subsequent assumption that has no grounding in the actual history.
 
 
What I want to see in an Internet Architecture is a set of assumptions that can be reified as an API that can then remain stable for a century or so. For example:
 
A well-formed Internet application MUST:

* Identify a service endpoint by means of a URI
* Resolve a service endpoint to a server and port address through the Domain Name System [1]

A well-formed Internet application MUST NOT:

* Employ IP addresses and/or port numbers as inbound signalling
 
The contract here is that we write down the shortest possible list and then we try to keep those promises true for the longest possible time, like a century.
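As a minimal sketch of what keeping that contract looks like in practice (using Go's standard SRV lookup; the service label and domain here are hypothetical), the application starts from a service name and domain that would come from the URI, lets the DNS hand back a target host and port, and connects without ever handling a literal IP address itself.

package main

import (
	"fmt"
	"net"
	"strconv"
)

// dialService resolves a service endpoint through DNS SRV and connects,
// without the application ever handling a literal IP address.
func dialService(service, proto, domain string) (net.Conn, error) {
	_, records, err := net.LookupSRV(service, proto, domain)
	if err != nil {
		return nil, err
	}
	if len(records) == 0 {
		return nil, fmt.Errorf("no SRV records for _%s._%s.%s", service, proto, domain)
	}
	target := net.JoinHostPort(records[0].Target, strconv.Itoa(int(records[0].Port)))
	return net.Dial(proto, target)
}

func main() {
	// Hypothetical service name and domain, for illustration only.
	conn, err := dialService("xmpp-client", "tcp", "example.com")
	if err != nil {
		fmt.Println("resolve/connect failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected via SRV-discovered endpoint")
}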
 
 
[1] Note that the Domain Name System need not necessarily entail the DNS protocol as described in RFC 2181; it is the consistency of the mapping, not the protocol instantiation, that is essential. While I do not consider it likely that there would be a change, it is not necessary to assume that there will never be one.
 
 
Not only is this proposal more robust, it allows almost infinitely greater generative power. In fact I can quantify it. With the port numbering scheme we are limited to 2^16=65536 different protocols. That is not very many in a world with a billion Internet users. Using DNS/SRV for service endpoint discovery on the other hand allows for 36^12 = 4.7E18 different combinations, 72 trillion times more than there are port numbers. And if we ever run out we can work on a new scheme and burn another DNS RR.
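For anyone who wants to check the arithmetic, here it is worked out as a short Go sketch; the 36^12 figure assumes a 12-character alphanumeric service label (26 letters plus 10 digits per position), which is how the comparison is meant.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	ports := big.NewInt(1 << 16) // 65536 possible port numbers
	// 36^12: the label space for a 12-character alphanumeric service name.
	labels := new(big.Int).Exp(big.NewInt(36), big.NewInt(12), nil)
	ratio := new(big.Int).Div(labels, ports)
	fmt.Println("ports: ", ports)  // 65536
	fmt.Println("labels:", labels) // 4738381338321616896, i.e. about 4.7E18
	fmt.Println("ratio: ", ratio)  // about 7.2E13, i.e. roughly 72 trillion
}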
 
It is also, I would suggest, supported by a considerably greater consensus in the applications area than the arguments being made that NAT is evil. From the apps area point of view, we would like programs to work using the networks we have, and we really do not believe that IPv6 is going to save us.
 
Going NAT-free is not going to be an incentive to move to IPv6, because there is no possible transition plan that does not involve IPv4-to-IPv6 translation; the problem is going to get worse over the next ten years if we are foolish enough to write any more application protocols that rely on the IP address being constant along the network path.
 
 
Publishing a simple architectural contract of this form would greatly ease the IPv4-to-IPv6 transition and help clarify questions such as when and where we should employ UDP, TCP and multicast.
 
My view is that UDP should only be used for the lowest-level facilities: we need it for DNS, and we appear to need it for time. Beyond that, I think any other use looks more like a different protocol, a 'lossy TCP'.
 
The principal problem with using a streaming layer other than TCP, or using multicast, is the lack of support, which in turn is a defect of our service endpoint discovery strategy. If we could use an extended version of SRV which gave us information about the range of transport protocols on offer, it would be possible for endpoints to negotiate better choices. So the consumer whose local ISP supports 'lossy TCP' can use a videophone client application that makes an intelligent choice of transport protocol.
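No such extended SRV exists today, so the following is only a sketch of the negotiation I have in mind, with the Offer type, the transport names and the port numbers all invented for illustration: discovery returns a list of (transport, host, port) offers and the client picks the first transport it supports.

package main

import "fmt"

// Offer is a hypothetical "extended SRV" result advertising the transport
// over which a service endpoint is reachable; nothing like this is defined
// today, it is purely a sketch.
type Offer struct {
	Transport string // e.g. "tcp", "lossy-tcp"
	Host      string
	Port      int
}

// pickTransport chooses the first advertised offer that the client also
// supports, falling back gracefully when the preferred transport is absent.
func pickTransport(offers []Offer, supported map[string]bool) (Offer, bool) {
	for _, o := range offers {
		if supported[o.Transport] {
			return o, true
		}
	}
	return Offer{}, false
}

func main() {
	// Hypothetical discovery result for a videophone service.
	offers := []Offer{
		{Transport: "lossy-tcp", Host: "video.example.com", Port: 5061},
		{Transport: "tcp", Host: "video.example.com", Port: 5060},
	}
	// This client's local ISP happens to support the lossy transport.
	supported := map[string]bool{"lossy-tcp": true, "tcp": true}
	if o, ok := pickTransport(offers, supported); ok {
		fmt.Printf("using %s to %s:%d\n", o.Transport, o.Host, o.Port)
	}
}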
 
The same is true of several other features that were discussed in the IAB presentation. So applications are using faulty heuristics to make guesses as to what the network state is? Well, provide them with a principled means of publishing and obtaining the same information. For example, ISPs could publish network performance stats in the reverse DNS.
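Again purely as a sketch, and assuming a record type and naming convention that nobody has defined: a client could look for TXT records published alongside the reverse mapping for its address block and treat them as performance hints. The reverse name and record contents below are made up.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical: an ISP publishes performance hints as TXT records under
	// the reverse DNS for an address block. No such records exist today.
	name := "1.0.192.in-addr.arpa"
	txts, err := net.LookupTXT(name)
	if err != nil {
		fmt.Println("no published stats:", err)
		return
	}
	for _, t := range txts {
		fmt.Println("network hint:", t) // e.g. "loss=0.1% rtt=20ms" (made up)
	}
}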
 
 


From: ietf-bounces@xxxxxxxx on behalf of michael.dillon@xxxxxx
Sent: Wed 11/26/2008 4:27 AM
To: ietf@xxxxxxxx; iab@xxxxxxxx; iesg@xxxxxxxx
Subject: RE: [BEHAVE] Lack of need for 66nat : Long term impact to application developers

> Yeah, but we're trying to get rid of that stuff, or at least
> considerably reduce the cost and complexity, because (among other
> things) it presents a huge barrier to adoption of new multiparty apps.

Promoters of NAT, particularly vendors, seem to have a two-valued
view of the network in which inside is good and outside is evil.
But network operators, who sit on the outside of the NAT,
do not share that view. In fact, we see a future in which
cloud computing centers within the network infrastructure
will lead to a wide variety of new multiparty applications.
In many cases the network operator also takes management
responsibility for gateway devices, so the idea of evil on
the outside is even more far-fetched.

That said, if there is to be some form of NAT66 because there
are real requirements elsewhere, it would be preferable if
the defined default state of this NAT66 was to *NOT* translate
addresses. This is not crazy if you see it in the context
of a NAT device which includes stateful firewalling.

I suggest that if part of the problem is an excess of
pressure from vendors, then the IETF could resolve this
by actively seeking greater input from other stakeholders
such as network operators.

--Michael Dillon
_______________________________________________
Ietf mailing list
Ietf@xxxxxxxx
https://www.ietf.org/mailman/listinfo/ietf

