Re: Death of the Internet - details at 11

Pete,

I think the _attempt_ and _effort_ to get a solution to the persistent connection problem is entirely worthwhile and did not mean to suggest otherwise. I think that ignoring or delaying an easier, and still important, problem while we work the persistent connection one borders on irresponsible. And that distinction is the only one I was attempting to make.

We may still disagree, of course.

john


--On Wednesday, 28 January, 2004 13:09 -0600 Pete Resnick <presnick@xxxxxxxxxxxx> wrote:


On 1/28/04 at 12:39 PM -0500, John C Klensin wrote:

The reality is that there is very little that we do on the Internet today that requires connection persistence when a link goes bad (or when "using more than one IP address"). If a connection goes down, email retries, file transfer connections are reconnected and the file (or the balance of the file if checkpointing is in use) is transferred again, URLs are refreshed, telnet and tunnel connections are recreated over other paths, and so on. It might be claimed that our applications, and our human work habits, are designed to work at least moderately well when running over a TCP that is vulnerable to dropped physical connections.

Would it be good to have a TCP, or TCP-equivalent, that did not have that vulnerability, i.e., "could preserve a connection when using more than one address"? Sure, if the cost was not too high on normal operations and we could actually get it. But the goal has proven elusive for the last 30-odd years -- at least in the absence of running with full IP Mobility machinery all of the time, which involves its own issues -- and, frankly, I'm not holding my breath.

I am rather ambivalent about this issue (it seems like the obvious thing to do, but also seems quite painful to accomplish), but I do think there is something missing in this response: "the cost" to which you refer needs to be weighed against the cost of *not* doing so, and that cost seems to have been mounting all along and shows no sign of slowing down.

The fact is that we have had to engineer all sorts of application-layer solutions to this single problem and will continue to do so for new application-layer protocols into the future. Worse yet, some of those workarounds remain ridiculously high-cost -- such as having to retransmit entire files -- and my guess is such costs (bandwidth and otherwise) will continue in the future. I also think that the argument ignores the possibility that if we do address the "connection persistence" problem, we will be able to do many things at the application layer that we have always avoided doing because of the cost of having to engineer around it.

From the view up here in the nosebleed section, it seems like it is worth at least the attempt to get a solution.
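As an aside, the application-layer workaround both messages describe -- retry the connection and, with checkpointing, resume from the last confirmed offset rather than retransmitting the whole file -- can be sketched in a few lines. This is only an illustration; the `FlakySource` class and function names below are hypothetical stand-ins for a real network stream, not any particular protocol's API:

```python
import io

class FlakySource:
    """Hypothetical stand-in for a network stream: drops the
    "connection" once after fail_after bytes, then serves the
    rest on the next attempt."""
    def __init__(self, data: bytes, fail_after: int):
        self.data = data
        self.fail_after = fail_after
        self.failed_once = False

    def read_from(self, offset: int) -> bytes:
        # Serve bytes starting at `offset`; fail once mid-transfer,
        # attaching whatever partial data got through before the drop.
        if not self.failed_once and len(self.data) > self.fail_after >= offset:
            self.failed_once = True
            raise ConnectionError(self.data[offset:self.fail_after])
        return self.data[offset:]

def transfer_with_checkpoint(source: FlakySource, max_retries: int = 3) -> bytes:
    """Retry the transfer, resuming from the checkpointed offset
    instead of retransmitting the entire file."""
    received = io.BytesIO()
    offset = 0
    for _ in range(max_retries):
        try:
            received.write(source.read_from(offset))
            return received.getvalue()
        except ConnectionError as e:
            partial = e.args[0]
            received.write(partial)   # keep what arrived before the drop
            offset += len(partial)    # checkpoint: resume here next time
    raise RuntimeError("transfer failed after retries")
```

Without the checkpoint (resetting `offset` to 0 on every retry), each failure would cost a full retransmission -- which is exactly the kind of recurring application-layer expense at issue here.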





