Perhaps this proposal really requires another working group or something. I seem to remember someone making a similar proposal several years ago on this list, and it didn't seem to get a good reception then. For what it is worth, though, I really do think it is an idea whose time has come.
It seems everyone has espoused the idea that the application *must* talk directly to the network transport in order to be efficient and/or resilient, but given the capabilities IPv6 is supposed to have, I wonder how good an idea it is to cling to that concept.
It is not worthwhile to suggest that the direct application/transport interface be eliminated, but it should be possible for an application to specify what kind of service it wants from the (transport, network) combination, and for something to be there to handle the case where the requested service differs from what the (transport, network) combination natively provides. The problem is that right now, every time an application needs such a service, the application programmer must code his own provisions for it, and none of the proposals I have seen discussed so far seems to deal with that.
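To illustrate what I mean, here is a rough sketch (Python; every name in it is invented for this message) of the kind of interface I have in mind. The application states the service it wants; where the native (transport, network) service already suffices, the shim is a pass-through, and anything beyond that would be supplied by the shim rather than coded anew in every application:

    import socket
    from dataclasses import dataclass

    @dataclass
    class ServiceProfile:
        reliable: bool = True             # in-order, retransmitted delivery
        survives_readdress: bool = False  # session must outlive address changes

    def open_session(host: str, port: int, profile: ServiceProfile) -> socket.socket:
        """Hypothetical shim call: hand back a connection meeting 'profile'."""
        if profile.survives_readdress:
            # A real shim would supply the stabilization service here.
            raise NotImplementedError("sketch only")
        # Plain TCP already covers 'reliable'; just pass it through.
        return socket.create_connection((host, port))

The point is that the application asks for what it needs rather than implementing how to get it.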
Keith Moore's statement is quite correct, viz:
Application developers need reasonably stable and reliable endpoint identifiers. At present, the IP address is the closest available approximation to such an identifier.
Two questions arise out of that, though, at least as far as Keith Moore's statement is concerned. The first is: how stable is stable? That is, how long is that identifier supposed to remain stable? The second is: what should be the context of that identifier's validity?
Also from Keith Moore:
Disagree. The layer needs to be between the current network and transport layers. If the layer were between transport and applications, many of the services provided by transport would end up having to be re-implemented in higher layers.
And
It would also force routing decisions to be made by layers much more distant from the network layer that maintains reachability and state information than the transport layer.
Does Mr. Moore care to explain a little why he believes these two statements? As far as I can tell, it need not necessarily be so.
Yours sincerely, Robert Honore.
Tony Hain wrote:
In the ongoing saga about topology reality vs. application perception of stability, it occurs to me we are not working on the right problem. In short we have established a sacred invariant in the application / transport interface, and the demands on either side of that interface are the root of conflict.
Mobile IP and the multi6 DHT work are attempts to mitigate it through sleight of hand at the IP layer, while SCTP attempts to mask the topology reality in the transport layer. (These are probably not the only examples, but they are the ones that come to mind this morning.) Yet none of these really does the job in all cases.
We all agree that applications should not be aware of topology. At the same time, application developers insist on the right to pass around incomplete topology information. There are arguments that the IP address is overloaded as an endpoint name. I don't particularly buy that, because the real endpoint is either a protocol, or a specific port of a protocol. From one perspective, when applications use addresses in referrals, they are specifying that routing through a particular point in the topology will reach the desired endpoint named 'transport/port-id'. But that is a semantics and perception discussion which doesn't help reach the goal.
In any case, what applications need is a stable reference to other participants in the app. Yet we know that current and projected reality says that topology is not stable or consistently reachable. This could be due to technology issues of the underlying links, or to simple policy. Either way, the network as seen by the transport layer is not, and will never be, stable or consistently reachable from everywhere. Given that, the direct interface with the transport layer creates a failure mode for an application expecting stability.
Where this leads us is to the sacred invariant, and the need to solve the problem in the right place. I suggest it is time for the IETF to realize that the protocols are no longer being used in the network of 1985, so it is time to insert a formal stabilization layer between application & transport.
Such a layer would be responsible for managing intermittent connectivity states and varying attachment points of the transport layers below. Applications that interact with this layer would be insulated from the inconsistencies being experienced by the transport layer. It would also be reasonable for this layer to manage multiple simultaneous transport interactions so that the application perceived a single data path to its peer. With appropriate trust between the stack and the network policy, this would even simplify the process of QoS markings for parallel related data sets.
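As a rough illustration (Python; the class and its behavior are invented for this message, not a worked-out design), the layer might present the application with something like the following. A real layer would also need sequencing and resynchronization so that data is neither lost nor duplicated across re-attachment; this sketch shows only the insulation:

    import socket
    import time

    class StableSession:
        """Sketch of the stabilization layer: the application sees one
        unbroken data path while transport connections come and go
        beneath it as the peer's attachment point changes."""

        def __init__(self, peer_name: str, port: int, retries: int = 5):
            self.peer_name, self.port, self.retries = peer_name, port, retries
            self.sock = self._attach()

        def _attach(self) -> socket.socket:
            # Re-resolve the peer name on every attempt: it may now be
            # attached to the topology somewhere else entirely.
            for attempt in range(self.retries):
                try:
                    return socket.create_connection((self.peer_name, self.port))
                except OSError:
                    time.sleep(2 ** attempt)  # back off and try again
            raise OSError("peer unreachable at any known attachment point")

        def send(self, data: bytes) -> None:
            # Absorb transport failure so the application never sees it.
            try:
                self.sock.sendall(data)
            except OSError:
                self.sock = self._attach()  # transparent re-establishment
                self.sock.sendall(data)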
The next place that leads us is to a name space. At this point I see no reason to avoid using the FQDN structure, but acknowledge that the DNS as currently deployed is not even close to being up to the task. The protocols are not so much the issue as the deployment and operation model, which is focused on a limited number of nodes operated by a small community of gurus, where the expectation is that any changes occur on the order of several days or longer. What we ultimately need in a name service to support the suggested layer is the capability for every consumer device with electrons in it to automatically and dynamically register its current attachment information with rapid global convergence (that doesn't mean sub-second; more along the lines of a cell phone that powers up away from its home). As there are multiple trust boundary issues involved (because not every device will be subscribed to a service), making this scale will require pushing the database distribution out to smaller pockets. Making it reliable for Joe-sixpack will probably require that part of the infrastructure exist on his side of the interconnect & trust boundary from any other networks. Automating the attachment of a massive number of small dataset servers will probably require something with better scaling characteristics than the current DNSsec deployment model.
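It is worth noting that the protocol machinery for the registration step already exists: RFC 2136 dynamic update lets a device push its current attachment into its zone today; it is the deployment, trust, and convergence model that falls short. A small sketch using the dnspython library (the zone, host name, address, and server here are all invented):

    import dns.query
    import dns.update

    # Replace this device's AAAA record with its current attachment point.
    # A short TTL keeps stale attachment information from lingering.
    update = dns.update.Update("home.example.net")
    update.replace("toaster", 60, "AAAA", "2001:db8:1234::17")

    # Send the update to the zone's authoritative server.  A real
    # deployment would need TSIG or similar to cross the trust
    # boundaries described above; that is omitted here.
    response = dns.query.tcp(update, "192.0.2.53", timeout=5)
    print(response.rcode())  # NOERROR (0) on success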
Since many networks have usage policies established around the sacred invariant, there will need to be some recommendations on how to apply those policies to this new layer. We could even consider a protocol between this layer and a policy entity that would aggregate applications into a policy consistent with what the network can deliver for various transport protocols. This distribution of policy awareness would have much better scaling characteristics than per-app signaling, as well as the ability to locally group unrelated transports that any given app might be using.
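To make the scaling point concrete, the policy entity could amount to little more than a locally distributed table keyed by application class (this fragment is purely illustrative; the classes and markings are invented):

    # A node-local policy table: applications are aggregated into classes,
    # and the stabilization layer consults this once per class instead of
    # signaling the network once per application.
    POLICY = {
        "bulk":        {"transports": ("tcp",),       "dscp": 0},   # best effort
        "interactive": {"transports": ("tcp", "udp"), "dscp": 46},  # expedited
    }

    def policy_for(app_class: str) -> dict:
        # Unknown classes fall back to best-effort bulk handling.
        return POLICY.get(app_class, POLICY["bulk"])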
Bottom line is that we are spending lots of time and effort trying to force-fit solutions into spaces where they don't work well, and we end up creating other problems. We are doing this simply to maintain the perception of stability up to the application, over a sacred interface to a very unstable network. Stability to the application is required, but forcing applications to have direct interaction with the transport layer is not. Yes, we should continue to allow that direct interaction, but we should be clear that there are no guarantees of stability on that interface. If the application does not want to take care of connections coming and going to the same device at potentially different places, it needs to have an intermediate stabilization layer available.
Tony