RE: what the "scope" disagreement is about

Theodore Ts'o wrote:
>> The real network is not a single flat routing space.
> 
> And this is where we disagree.  For better or for worse, the 
> market is demanding IP addresses that can be treated as 
> belonging to a single flat routing space.  

The market demand is for stable addresses that can be routed globally.
That does not mean that the network manager wants to globally route
every prefix.

> How else do you 
> explain the demand for provider independent addresses, and 
> people punching holes in CIDR blocks so they can have 
> multihoming support for reliable network service?

The market demand is for address space that is STABLE. This stability
must isolate the network manager from the whims of any ISP, as well as
prevent the network manager from being locked into a provider to avoid
the cost of renumbering. That does not equate to a demand that every
prefix be globally routed, or that every node be globally accessible.
The network manager demands the control to decide that. 

The IETF app community is demanding that, to keep their view of the
world pristine and simple, they get to define the world as a flat
routing space and claim they are passing around opaque data structures,
while at the same time fully intending for the receiver to use the
opaque object as a topology locator.

> 
> One solution for that would be to not do multihoming, and 
> simply have servers live on multiple IP addresses belonging 
> to multiple ISP's, and use multiple DNS 'A' records in order 
> to provide reachability.  I suspect that would be Tony's 
> solution about what we should be doing today. 

That scenario is inconsistent. You start with 'not do multihoming', but
then describe multihomed servers. 
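
For concreteness, the multiple-'A'-record approach described there
amounts to roughly the sketch below (Python, with a placeholder host
and port; none of these names come from this thread). The application
resolves the name, then walks the returned addresses until one accepts
a connection, so the choice among providers is made only at connection
setup time.

    import socket

    def connect_by_name(host, port, timeout=5.0):
        """Try every address published for 'host' until one connects."""
        last_error = None
        # getaddrinfo returns one entry per A (or AAAA) record for the name
        for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)   # first reachable address wins
                return sock
            except OSError as err:
                last_error = err
                sock.close()
        raise last_error or OSError("no usable address for " + host)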

> This is 
> perhaps workable for short-term http connections, but it's 
> absolutely no good for long-term TCP connections, which won't 
> survive a service outage since TCP commits the "sin" of using 
> IP addresses to identify its endpoints, instead of using DNS 
> addresses....  

I did not say that using an IP address was a sin, just that doing so
without understanding the topology that you are describing is a broken
model.
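
For reference, the best a long-lived application can do in that model
looks roughly like the sketch below (Python, placeholder names): since
the established TCP connection is pinned to the one IP address it
connected to, the application has to notice the failure, resolve the
name again, and rebuild its session from scratch.

    import socket
    import time

    def run_with_reconnect(host, port, session):
        # session(sock) does the long-lived work; any OSError means the
        # established connection (pinned to a single IP address) is gone,
        # and the only recourse is to resolve the name again and start over.
        while True:
            try:
                with socket.create_connection((host, port), timeout=5.0) as s:
                    session(s)
                    return
            except OSError:
                time.sleep(1.0)   # back off, then re-resolve and reconnect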

> But whether this is the reason, or whether 
> there are other reasons why the "solution" of killing off 
> provider independent addresses and letting the DNS sort it 
> out has been perceived as unacceptable, it's pretty clear 
> that the market has spoken.  Even as people have been wagging 
> their fingers and saying "horrible, horrible", customers are 
> demanding it, and ISP's are providing it.  This is the 
> situation in IPv4, and I very much doubt the situation is 
> going to change much in IPv6.

I agree that PI is in market demand, and have a couple of personal
drafts on the topic. I saw this as a failing of the current PA or
private allocation model and figured we needed to get it dealt with.

> 
> It's certainly true that having a reliable end-point 
> identifier is critical.  But I don't think the DNS is it. 

One of my points a while back was that the current DNS is not up to the
task, and neither are the other name resolution services. Those need to
be addressed at the same time as the applications that insist on doing
the work for themselves.
 
> The DNS has been abused in many different ways, and very 
> often, thanks to split DNS games, and CNAMES, and all the 
> rest, the name which the user supplies to the application is 
> also not guaranteed to be a name which is usable by C 
> when B wants to tell C to connect to A:
> 
> >      ---- A ----
> >        |      I
> >        |      n
> >        |      t
> >        I      e ---- C
> >        2      r
> >        |      n
> >        |      e
> >        |      t
> >      ---- B ----
> 
> Tony is basically saying, "IP addresses don't work for this, 
> so let's bash application writers by saying they are broken, 
> and tell them to use DNS addresses instead".  

I am saying that the app either needs to learn the topology, or pass the
task off to a service that has the means to learn it. Passing around
addresses that are useless to the receiver does not solve the problem.
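
A toy model of the diagram above, with invented names and addresses,
shows why: the address B knows for A is scoped to the I2 link and means
nothing from C's vantage point on the public Internet, so handing it to
C describes a topology that does not exist for C.

    # Invented per-vantage-point views of how A is reached; nothing here
    # is real addressing, it just models the picture above.
    VIEW = {
        "B": {"A": "10.1.1.1"},      # A as seen across the private I2 link
        "C": {"A": "192.0.2.1"},     # A as seen from the public Internet
    }

    def referral(sender, receiver, target):
        handed_over = VIEW[sender][target]            # what B passes to C
        usable = handed_over in VIEW[receiver].values()
        return handed_over, usable

    print(referral("B", "C", "A"))   # ('10.1.1.1', False): useless to C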

> Well, I'm here 
> to point out that DNS addresses don't work either.  
> Applications get names such as "eddie.eecs", and even when 
> they get a fully qualified domain name, thanks to a very 
> large amount of variability in how system administrators have 
> set up split-DNS, there is no guarantee that a particular 
> DNS name is globally available, or even globally points at 
> the same end point.  So if IP addresses are not a flat 
> routing space, DNS names are not a flat naming space, either.  

Which simply points out that we have work to do, work that won't get
done by avoiding the issue.

> 
> Many years ago I struggled to come up with a "canonical DNS 
> name" which could be passed around to multiple hosts, back 
> when I was trying to find a convenient way to construct 
> canonicalized, globally usable Kerberos principal names from 
> host specifiers that were supplied by the user on the command 
> line. We ran up against the same problem.  Fundamentally, the 
> DNS wasn't and isn't designed to do this.

So the IESG needs to task the DNS community with fixing it. The reality
of the deployed network is that the topology is inconsistent from
different perspectives. If all the 'identifier to topology' resolving
processes expect the world to be flat, they are simply broken.
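
As an illustration only (this is not the actual Kerberos code being
described), the usual canonicalization step looks like the sketch
below: ask the local resolver to expand whatever the user typed. With
split DNS the answer depends entirely on which view of the namespace
answers the query, which is exactly the inconsistency-from-different-
perspectives problem.

    import socket

    def canonical_name(host_specifier):
        # Ask the local resolver to expand the user-supplied specifier;
        # AI_CANONNAME requests the canonical name in the first result.
        info = socket.getaddrinfo(host_specifier, None,
                                  flags=socket.AI_CANONNAME)
        _family, _socktype, _proto, canonname, _sockaddr = info[0]
        return canonname or host_specifier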

> 
> Now, I suppose you could say that the people who "broke" DNS 
> are at fault, but there are also people who would say that the 
> people who broke the flat routing space assumption (which 
> while not universally true was true enough for engineering 
> purposes) are at fault instead. Perhaps a more constructive 
> thing to say is that the original Internet architecture --- 
> and here I mean everything in the entire protocol stack, from 
> link layer protocols to application level protocols --- was 
> not well engineered to meet the requirements that we see 
> being demanded of us today.

Or even 20 years ago. The issue was ignored in the past because the
demand was low enough to allow that. 

> 
> This is why I believe that ultimately 8+8 is the most 
> interesting approach.  As the old saw goes, "there is no 
> problem in computer science that cannot be solved by adding 
> an additional level of indirection".  

8+8 and similar schemes suffer from one of the same problems you are
complaining about. To make them work, the network needs a very
fast-converging, global identifier-to-locator mapping service. Mangling
the address field in the packet only allows the app to use part of the
address as the identifier. There is no reason that the identifier has
to be part of the address.
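
To put the last point another way, here is a sketch of the alternative
(invented names, and a plain table standing in for what would have to
be a fast-converging global mapping service): if the identifier is a
separate key rather than bits carved out of the address field, the
mapping can return whatever locators are currently valid, and the
identifier's size and structure are not constrained by the packet
format.

    # Toy identifier-to-locator mapping; in practice this would be the
    # fast-converging, global service described above.
    ID_TO_LOCATORS = {
        "host-a.example": ["2001:db8:1::1", "2001:db8:2::1"],
    }

    def locators_for(identifier):
        # The identifier never has to appear in the packet's address field;
        # only the locators returned here do.
        return ID_TO_LOCATORS.get(identifier, [])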

> 
> What we need is something that sits between DNS names and 
> provider-specific IP addresses.  That is a hole in the 
> architecture which today is being fixed by using 
> provider-independent addresses, much to the discomfort of 
> router engineers.  Another solution, which has been 
> articulated by Tony, is that we should sweep all of this dirt 
> under the DNS carpet instead, and force the application 
> writers to retool all their implementations and protocols to 
> pass DNS names around instead.  But the DNS really isn't 
> suited to handle this.  What we need is something in-between.

The problem will not be solved by PI addressing. I don't care if apps
pass around addresses, as long as they are doing the work to make sure
the address is consistent with the topology the app is intentionally
describing. If the app is not going to learn about topology (and I
agree the app shouldn't), it needs to rely on a service that has the
means to do the job. We agree that the current DNS is not up to the
task.

Tony



