RE: A follow up question

John C Klensin wrote:
> I want to leave most of that discussion for some ARIN-related 
> list, as David Conrad suggested.  

I was certainly not trying to pick on ARIN, since there is a wide
perception that they are fairly liberal in their IPv4 policy. I agree
that allocation policy discussions belong elsewhere.

> But, for the record, I was 
> trying to dodge the question of whether address space 
> exhaustion, by itself, was  "strong enough" or "sufficient" 
> reason to get IPv6 deployed and, especially, to avoid the 
> rathole-populating discussion about alternatives.  Since I am on 
> record as believing that, by some reasonable measures, we have 
> already run out of address space and IPv6 is not yet widely 
> deployed, I would suggest that we have a solid example that it 
> has not [yet?] proven to be sufficient.

Yes, it is not widely deployed yet, but that does not show that the
lack of IPv4 space is an insufficient reason. In several active
deployment cases I am aware of, each going as fast as it can, it takes
3-5 years to reach the point of delivering a stable, commercial-grade
service. It should not be surprising that we don't see wide-scale
deployment yet, since the realization that we are effectively out of
IPv4 space has only hit home recently. 

> ...
> I strongly suspect that we are using words in ways that result 
> in miscommunication, because I think there is a third 
> alternative.  That alternative may fall into the category you 
> describe as "passing around name objects", or maybe it doesn't. 
> From my perspective --which is definitely the applications end
> of the stack looking down-- the problem is all about data 
> abstraction.  I'm going to try to switch terminology here in the 
> hope that it will help clarify things.  Suppose the application 
> gets a well-constructed handle, and passes that handle "around" 
> --either to other applications or to different interfaces to the 
> stack/network.  I don't see the application as dealing in 
> topology information by doing that, even if the handle 
> ultimately identifies information that is deeply 
> topology-sensitive and/or if the process that produces the 
> handle uses topological information, or probes the network to 
> develop such information.  The important thing from that 
> perspective is that the application is dealing with the handle 
> as an opaque object, without trying to open it up and evaluate 
> its meaning or semantics (with regard to the network or 
> otherwise).

In general I agree, but the difference in perspective is that ignoring
the fact that the well-constructed handle is actually derived from
topology information does not make the issue go away. The only way you
can get the handle you are looking for is to have something below you
sort out the topological reality. I agree the current name service is
not up to the task, and I believe that we need to get that fixed.
Unfortunately, it looks like it will take some very crisp architectural
guidance to get the IESG to refocus the DNS-related WGs to even discuss
the issues. The sad part is that many of the participants there already
ship or run variants that help address this problem, but they have
agreed not to work on standardizing it.

> 
> From that point of view, having an application go to the stack
> and say "given these criteria, give me a handle on the interface 
> or address I should use" or "given this DNS name, and these 
> criteria, give me a handle on the address I should use" does not 
> involve the application being topology-aware, nor does it imply 
> the application doing something evil because it picks an address 
> (or virtual interface, or...) without understanding the 
> topology.  The handle is opaque and an abstraction -- as far 
> as the application is concerned, it doesn't have any semantics, 
> regardless of whether lower layers of the stack can derive 
> semantics from it or from whatever they (but not the 
> application) can figure out it is bound to.

This points out that I have been imprecise in claiming that passing
names is sufficient: all the criteria you would use for local
communication with the stack are part of the 'name' in the context of
my previous mail. 

As long as the handle stays contained within the machine, you are fine.
The problem arises when the app passes that handle to another node,
believing that the topology-specific information is already there. If
the blob passed were the same as what the app used to acquire the local
handle, the other nodes could acquire their own appropriate local
handles. 
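For illustration only, a minimal Python sketch of that model at the
sockets API (the function name and the referral structure are my own,
not part of any proposal): the app hands the name down, gets an opaque
handle back, never inspects it, and passes peers only the original
parameter set.

```python
import socket

def acquire_handle(name, port):
    """Resolve a name on *this* node; treat the result as an opaque,
    node-local handle (here, an address family plus a sockaddr)."""
    # getaddrinfo lets something below the app sort out the local
    # topological reality; the app never opens the handle up to
    # evaluate its meaning.
    infos = socket.getaddrinfo(name, port, type=socket.SOCK_STREAM)
    family, _type, _proto, _canon, sockaddr = infos[0]
    return (family, sockaddr)

# What gets passed to other nodes is the original parameter set
# (the name and criteria), never the handle derived from it:
referral = {"name": "localhost", "port": 80}
handle = acquire_handle(referral["name"], referral["port"])
```

A peer that receives `referral` repeats the same call locally and gets
a handle appropriate to its own position in the topology.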

> 
> If the application is calling on, e.g., TCP, then it might pass 
> some criteria to TCP, or might not, and TCP might either pass 
> enhanced criteria to the handle-generating layer or generate the 
> handle itself.  Again, the application doesn't care -- it just 
> needs to deal with an abstraction of the criteria in application 
> terms.   I think that, from the application standpoint, it makes 
> little difference whether the criteria involve routing, speed, 
> reliability, or any of several potential QoS or security issues.

I agree, as long as the app keeps the handle for local use. It can't
assume that the handle will have an equivalent meaning somewhere else in
the network. If it wants to pass a handle that it believes has enough
topology information for the receiver to directly access the same point,
it needs to understand the topology well enough to know that the meaning
will be consistent. Otherwise it needs to pass the original parameters.

> 
> I'll also be the first to admit that we have handled the set of 
> issues this implies rather badly with IPv4 (and they certainly 
> are not new issues).  We have gotten away with it because the 
> number of _hosts_ that need to understand, and choose between, 
> multiple addresses for themselves has been extremely small, at 
> least since routers were introduced into the network 
> architecture.  Because it could mostly be seen as a router 
> problem, applications in IPv4 mostly punted (for better or 
> worse).  Also, since CIDR went in, we basically haven't given a 
> damn when users who can't qualify for PI space can't multihome 
> (at least without resort to complex kludges).   IPv6, with its 
> multiple-address-per-host architecture, turns the problem from 
> "mostly theoretical" to an acute one.  It does so with or 
> without SL addresses being one of those on a host that has 
> public (global or otherwise) addresses as well.

I agree that SL simply exposed the issue. The problems arise from the
mismatch between the apps' assumption that all 'handles' will be
treated equally by the network and the reality that the network has
more than a single policy. As you noted earlier, this is not just about
the scoping of address reachability; it also applies to QoS and any
other policy-related variance in the topology. 

> 
> Finally, from the standpoint of that hypothetical application, 
> the syntax and construction of that opaque handle are 
> irrelevant.  In particular, it makes zero difference if it is a 
> name-string that lower layers look up in a dictionary or a 
> symbol table, or [the name of] some sort of class object, or 
> whether it is, e.g., an IP address.  The only property the 
> application cares about is that it is a handle that it got from 
> somewhere that satisfies criteria it specified or that were 
> specified for it.

I understand that is what the app wants to see. I am suggesting that in
the absence of a central policy entity that can sort out all possible
topology differences, multi-party apps will need to pass the original
parameter set, and maintain local mappings to the stack handle at each
participant node. Not pretty, but closer to a working solution than
continuing to ignore the reality that the topology is inconsistent.
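A rough Python sketch of what "maintain local mappings to the stack
handle at each participant node" might look like (the class and its
methods are mine, purely illustrative):

```python
import socket

class HandleCache:
    """Each participant in a multi-party app keeps its own mapping from
    the shared parameter set (name, port) to a node-local stack handle."""

    def __init__(self):
        self._local = {}

    def handle_for(self, name, port):
        key = (name, port)
        if key not in self._local:
            # Resolve here, against *this* node's view of the topology;
            # a peer doing the same may legitimately get a different
            # answer, and that is the point.
            infos = socket.getaddrinfo(name, port, type=socket.SOCK_STREAM)
            self._local[key] = infos[0][4]   # a sockaddr, opaque to the app
        return self._local[key]

cache = HandleCache()
addr = cache.handle_for("localhost", 8080)
```

The parameter set travels between participants; the cache contents
never do.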

> ...
> See above.  Applications have gotten away with ignoring that 
> reality because the occurrences have been infrequent -- with one 
> important class of exceptions, we have had few machines with 
> multiple addresses (and multiple interfaces) since routers 
> became common in the network.  The exceptions have been larger 
> web hosts which support pre-1.1 versions of HTTP and hence use 
> one address per DNS name.  But, for them, hosts opening 
> connections to them use the address that matches the DNS name 
> (hence no need to make choices or understand topological 
> information) and the servers either use "address matching the 
> one on which the connection was opened" or an arbitrary address 
> on the interface --since the interface is the same, and its 
> connectivity is the same, it really makes no difference.  If the 
> reason for multiple addresses per host (or interface) in IPv6 is 
> to support different scopes (or connectivity, or multihoming 
> arrangements), then it does make a difference, and will make a 
> difference for a significant number of hosts.  And _that_ 
> implies a new requirement.

We could debate 'new' forever since, as you noted earlier, this
requirement existed before routers; but the point is that apps can't
ignore it any longer.

> Tony, there is a difference in perspective here.  I'm going to 
> try to identify and explain it, with the understanding that it 
> has little to do with any of the discussion above, which I think 
> is far more important.  From your point of view, as I understand 
> it, this feature has been in IPv6 for a long time, no one 
> questioned it, or the problem(s) it apparently solved, for 
> equally long, some implementations were designed to take 
> advantage of it, and now people are coming along and proposing 
> to remove it without a clearly-better solution to address those 
> solved problems.  

Yes. 

> From the viewpoint of many or most of the 
> applications-oriented folks on this list, and maybe some others, 
> the applications implications of the SL idea (and maybe the 
> "multiple addresses" idea more generally) are just now becoming 
> clear.  What is becoming clear with them is that the costs in 
> complexity, in data abstraction failures, and in damage to the 
> applications view of the hourglass, are very severe and, indeed 
> that, from the standpoint of how applications are constructed, 
> SL would never have worked.  From that perspective, when you 
> argue that applications are already doing bad things, the 
> applications folks respond by saying "but IPv6 should make it 
> better, or at least not make it worse".

It is inconsistent to claim that SL would never have worked while
admitting that there are implementations that take advantage of it. But
that aside, SL is not the issue; the issue is that apps have come to
believe that the topology is flat, so they can pass around handles
constructed of topology-specific information. The reality all along has
been that some addresses were not reachable outside a relevant
administrative scope. Writing those cases off in a client/server world
was mostly possible, but in the peer-to-peer world of IPv6 it no longer
is. 
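To make the reachability point concrete, a Python sketch of the kind
of scope check a peer-to-peer app would need before embedding an
address in a referral (the helper name is mine; the classifications
come from the standard address registries):

```python
import ipaddress

def safe_to_pass(addr_text):
    """True only when the address is likely to mean the same thing at a
    peer outside this node's administrative scope."""
    addr = ipaddress.ip_address(addr_text)
    # Link-local, loopback, RFC 1918, and SL addresses are all
    # meaningful only within some scope smaller than the whole network.
    scoped = (addr.is_link_local or addr.is_loopback or addr.is_private
              or getattr(addr, "is_site_local", False))
    return not scoped

# Scope-limited addresses must not be handed to arbitrary peers;
# pass the original name instead.
assert not safe_to_pass("10.1.2.3")    # RFC 1918 private space
assert not safe_to_pass("fe80::1")     # IPv6 link-local
assert not safe_to_pass("fec0::1")     # the SL prefix under debate
assert safe_to_pass("8.8.8.8")         # globally routable IPv4
```

When the check fails, the app is back to passing the original
parameter set and letting the receiver resolve locally.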

> 
> Those differences lead to discussions about religion and 
> ideology, which get us nowhere (although they generate a lot of 
> list traffic).  It is clear to me (from my particular narrow 
> perspective) that our getting to this point at this time 
> indicates a failure on the part of several generations of IESG 
> members (probably including me).  It also identifies a series of 
> issues in how we review things cross-area (or don't do that 
> successfully) and reinforces my perception that shifting the 
> responsibility for defining standards away from a 
> multiple-perspective IESG and onto WGs with much narrower 
> perspectives would be a really bad idea.

I agree, and even restricting the cross-area review to the IESG is a bad
idea because there aren't enough cycles in that small group. 

> 
> But, unfortunate though it may be, we have gotten here.  We 
> differentiate between Proposed and Draft standards precisely to 
> make it easier to take something out that doesn't work, or 
> --more to the point in this case-- doesn't appear to do the job 
> it was designed to do at a pain level no worse than what was 
> anticipated.  I don't think essentially procedural arguments 
> about how much proof is required to take something out get us 
> anywhere at this stage.  Instead, we should be concentrating on 
> the real character of the problem we are trying to solve and 
> ways in which it can be solved without doing violence to 
> whatever architectural principles we can agree upon.

I agree, up to a point. My button is pushed by those who claim a
technology 'creates more problems than it solves' while simultaneously
admitting they don't have a clue what problems need solving. To that
end, I started a draft on what problems need solving, so we can sort
out the cases that the current technology does solve, as well as begin
to identify alternatives. If we get to a point where there are
alternatives for all the cases people care about, we should drop the
unused technology. 

Tony
