Re: Death of the Internet - details at 11

John C Klensin <john-ietf@xxxxxxx> wrote:
> --On Wednesday, 28 January, 2004 07:36 +0900 Dave Crocker 
> <dhc@xxxxxxxxxxxx> wrote:
> 
>> In other words, when there is a serious solution to
>> multihoming -- ie, being able to preserve a connection when
>> using more than one IP Address -- it will likely work for IPv4.
> 
> Actually, that definition changes the problem into a much harder 
> one,

   Preserving a connection when using more than one IP address is
not _necessarily_ a much harder problem -- especially if we
stipulate that tunneling is a legitimate middleware operation.
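   To make the point concrete, here is a toy sketch (illustrative
only, not any real protocol): if flow state is keyed by a stable
endpoint identifier rather than by IP address -- which is exactly
what a tunneling layer can provide -- then the "connection"
survives a change of address. All names below are hypothetical.

```python
# Toy identifier/locator split: flow state is keyed by a stable
# peer identifier, so changing the peer's IP address (its current
# "locator") does not destroy the flow.

class Flow:
    def __init__(self, peer_id, peer_addr):
        self.peer_id = peer_id      # stable identifier
        self.peer_addr = peer_addr  # current locator (IP address)
        self.bytes_sent = 0

    def send(self, data):
        # A real stack would transmit toward self.peer_addr here.
        self.bytes_sent += len(data)

    def rehome(self, new_addr):
        # Update only the locator; all flow state survives.
        self.peer_addr = new_addr

flows = {}  # keyed by identifier, NOT by address

f = Flow("peer-1", "192.0.2.10")
flows[f.peer_id] = f
f.send(b"hello")
f.rehome("198.51.100.7")  # peer moves to a second link
f.send(b"world")          # same flow, new address
```

The hard part in practice is, of course, securely mapping
identifier to current locator -- the sketch simply assumes it.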

> The reality is that there is very little that we do on the Internet 
> today that requires connection persistence when a link goes bad... 

   But we certainly _should_ be doing things that would greatly
benefit from connection persistence when a link goes bad.

> It might be claimed that our applications, and our human work
> habits, are designed to work at least moderately well when running
> over a TCP that is vulnerable to dropped physical connections.

   Alternatively, one might claim our work habits have "evolved"
to work moderately well...

> Would it be good to have a TCP, or TCP-equivalent, that did not 
> have that vulnerability, i.e., "could preserve a connection when 
> using more than one address"?  Sure, if the cost was not too 
> high on normal operations and we could actually get it.  But the 
> goal has proven elusive for the last 30-odd years...

   Might we do well to consider _why_ this is so?

> By contrast, the problem that I find of greatest concern is the 
> one in which, if I'm communicating with you, and one or the 
> other of us has multiple connections available, and the 
> connection path between us (using one address each) disappears 
> or goes bad, we can efficiently switch to a different 
> combination... even if all open TCP connections drop and have to 
> be reestablished in the interim.

   If I understand, John is looking for applications-level link
redundancy, which strikes me as unlikely to be easy to deploy.
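   What John describes -- drop the broken TCP connections and
re-establish over a different address combination -- is at least
easy to sketch. A minimal, self-contained illustration (the
connector is injected so the sketch runs without a network; all
addresses are examples):

```python
# Application-level fallback across multiple peer addresses: if
# connecting over one address fails, try the next one.

def connect_with_fallback(addresses, connect_fn):
    """Try each address in turn; return (address, connection) for
    the first that succeeds, or re-raise the last failure."""
    last_err = None
    for addr in addresses:
        try:
            return addr, connect_fn(addr)
        except OSError as err:
            last_err = err
    raise last_err

# Simulated connector: the first path is down, the second works.
def fake_connect(addr):
    if addr == "192.0.2.1":
        raise OSError("network unreachable")
    return f"conn-to-{addr}"

addr, conn = connect_with_fallback(["192.0.2.1", "198.51.100.2"],
                                   fake_connect)
```

The deployment difficulty is not this loop -- it is getting every
application, and every peer, to learn the alternate addresses.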

> For _that_ problem, we had a reasonably effective IPv4 solution
> (at least for those who could afford it) for many years -- all
> one needed was multiple interfaces on the relevant equipment
> (the hosts early on and the router later) with, of course, a
> different connection and address on each interface. 

   Aren't we now talking about what John said "changes the problem
into a much harder one" -- namely preserving a connection when
using more than one IP address?

> But, when we imposed CIDR, and the address-allocation restrictions
> that went with it, it became impossible for someone to get the
> PI space that is required to operate a LAN behind such an
> arrangement (at least without having a NAT associated with the
> relevant router) unless one was running a _very_ large network.

   A /20 is _not_ "very large" -- just impractical to justify for
small-scale projects. (Thus, the allocation policies prevented
much of the small-scale experimentation which normally comes in
the early stages of design.)
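   For scale: a /20 is 2^(32-20) = 4096 addresses, which the
standard library will confirm:

```python
import ipaddress

# A /20 holds 2**(32-20) = 4096 IPv4 addresses -- hard for a
# small project to justify, but hardly a "very large" network.
net = ipaddress.ip_network("10.0.0.0/20")
print(net.num_addresses)  # 4096
```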

> Now, I'll stipulate this is a routing problem as much, or more, 
> than it is an address-availability problem. 

   I'm not sure I agree. It's true that address-availability
policies were driven by routing problems. One _could_ consider
the route-filtering policies to be "a routing problem", but this
doesn't strike me as useful.

> And I'll also agree that there appears to be little evidence
> that IPv6 is significantly more routing-friendly than IPv4

   Agreed.

> and hence, that any real routing-based solutions that help the
> one will help the other.  But,
>   (i) if any of the options turn out to require an
> 	approach similar to the one that continues to work for
> 	big enterprises with PI space in IPv4, then we are going
> 	to need (lots) more address space.  And
>  (ii) If any of the "multiple addresses per host" or
> 	"tricks with prefixes" approaches are actually workable
> 	and can be adequately defined and implemented at scale
> 	--and there is some evidence that variations of them can
> 	be, at least for leaf networks-- then they really do
> 	depend on structure and facilities that appear to me to
> 	be available in IPv6 and not in IPv4.

   This seems to overstate your case. Indeed, there _will_ be
solutions which require lots more IPv4 space; and there will be
solutions which depend on the structure of IPv6. But these will
have to compete with other solutions which need neither.

> So, for the problem I was referring to (but perhaps not for your 
> much more general formulation), I stand by my comment and 
> analysis.

   I won't attempt to restate your analysis. But I think your
analysis is too narrow. You quite ignore the tricks which many
smaller ISPs can perform -- especially when they cooperate.

   We have a genuine problem in that we'd like something immediately
scalable -- and only larger ISPs can immediately reach large numbers
of users -- and larger ISPs impose arbitrary and capricious limits
on what their customers can do.

   Nonetheless, larger ISPs will have so many problems deploying
IPv6 that I have no confidence they _can_ make it native to their
operations within five years. OTOH, if we need to tunnel it over
IPv4, the situation really isn't different from any other tunneling
over IPv4...
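   Tunneling IPv6 over IPv4 is indeed routine today. One common
arrangement is a 6in4 (protocol 41) tunnel configured with
iproute2 -- a hedged sketch, with purely illustrative addresses
in place of whatever a real tunnel provider would assign:

```shell
# Bring up a 6in4 tunnel over an IPv4-only provider.
# "he6" and all addresses here are example values.
ip tunnel add he6 mode sit local 192.0.2.10 remote 198.51.100.1 ttl 255
ip link set he6 up
ip -6 addr add 2001:db8:1::2/64 dev he6
ip -6 route add default dev he6
```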

>> Most of these proposals are quite new.  No more than a year
>> old and many less than 6 months.

   I'd love to investigate them...

>> This does not speak well for anything happening immediately, of
>> course.  However quite a number of the proposals do not
>> require any significant infrastructure change.  This bodes
>> well for rapid deployment, once they make it through the
>> standards process.

   I rather suspect many of them could be deployed _before_ they
make it through the standards process. ;^)
 
>> On the other hand, getting the IETF to produce standards track
>> specifications out of this large pack of candidates could take
>> another 10 years...

   Exactly!

> Yes.  And it may speak to the IETF's sense of priorities that 
> the efforts to which you refer are predominantly going into the 
> much more complex and long-term problem, rather than the one 
> that is presumably easier to solve and higher leverage.

   There is a great tendency to broaden the scope of a project as
you add more people to it; and a very rational tendency to extend
the time-frame being considered as the time-to-approach-consensus
appears to grow. Lamenting this won't change it.

   Where we go seriously wrong is requiring short-timeframe efforts
to coordinate their work with long-timeframe efforts.

--
John Leslie <john@xxxxxxx>

