Re: Death of the Internet - details at 11

Pricing and architecture of the Internet: Historical perspectives from telecommunications and transportation, Andrew Odlyzko

http://www.dtc.umn.edu/~odlyzko/doc/pricing.architecture.pdf

would give you a fresh perspective on all these so-called "threats".

-James Seng

Spencer Dawkins wrote:

Armando - nice discussion.

Just a couple of notes on one of my favorite topics...

Spencer

----- Original Message ----- From: "Armando L. Caro Jr." <me@xxxxxxxxxxxxxxx>
To: "Iljitsch van Beijnum" <iljitsch@xxxxxxxxx>
Cc: "Randall R. Stewart (home)" <randall@xxxxxxxxxxxxxxxxxxxxx>;
"'ietf@xxxxxxxx' Discussion" <IETF@xxxxxxxx>
Sent: Friday, January 30, 2004 7:14 PM
Subject: Re: Death of the Internet - details at 11



This _kind_ of a solution has already been proposed by Joe Touch and Ted
Faber in their ICNP 97 paper, "Dynamic Host Routing for Production Use of
Developmental Networks". It works, but one of the problems is that you
hide path information from the transport. TCP maintains congestion
control information about a single destination, with the assumption that
the path will not change during the connection. If it does occasionally,
then it's fine. However, with multihoming, the change may be a common
occurrence throughout the lifetime of a connection, depending on the
application and the use of the multiple paths (failover, concurrent
multipath transfer, etc). So TCP (or whatever transport) should not be
blind to the fact that data is potentially going over a different path.
Otherwise, the congestion control parameters/algorithms will not work
properly.
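
A rough sketch of that mismatch in Python; the dict layout and the
record_path_capacity name are invented for illustration and come from
neither the ICNP 97 paper nor any real stack:

# Congestion state is keyed by destination only, so two different
# paths to the same host share one window estimate. Numbers are
# illustrative.
cong_state = {}  # destination -> congestion window estimate (bytes)

def record_path_capacity(dest, path, cwnd_estimate):
    # The transport never sees 'path'; whatever the routing layer
    # picked last simply overwrites the shared per-destination state.
    cong_state[dest] = cwnd_estimate

record_path_capacity("server.example", "via ISP A", 64_000)
record_path_capacity("server.example", "via ISP B", 4_000)
print(cong_state["server.example"])  # 4000: ISP A's estimate is gone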


Yeah, you might think that. I did, when we proposed TRIGTRAN (how
could finding out that a link failed be WRONG?).

The problem was that when we were discussing notifications like
PATH_CHANGE (or whatever we ended up calling it), the TCP has a
decision to make when it sees one - "do I

- slow start, recognizing that I now haven't got the remotest clue
what the path capacity is, or

- maintain my congestion window and try to actually LOSE something
before I decide that I need to adjust?"

TCPs have been ignoring things like ICMP Unreachable for decades, so
the second idea isn't totally bogus - they respond to actual loss, not
to the potential for loss.

Once you decide that a path change is no different from cross traffic
starting and stopping (which we adjust to only after losing something),
it makes a lot of sense for transports to ignore path changes. If you
are changing paths frequently (round-robin being the limiting case),
slow-starting every time you change paths is excessive - and if you're
going to respond to PATH_CHANGE, what else would you do? You could do
congestion avoidance during the first round trip, or actually lose
something and do congestion avoidance on the second round trip - not a
big difference, conceptually.

[deleted down to]


Yes.. if you wanted to do HTTP over SCTP and use features such
as streams you WOULD WANT to extend HTML so that you could
have the client "get pages" on particular streams...

Why extend the markup language?

I think that was a typo...


Commonly-deployed browsers don't pipeline HTTP requests because they
don't get any advice on (1) size of objects or (2) recommended
retrieval order. Extending HTML to provide one or the other would make
at least as much sense as extending HTTP.
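
As a sketch of what such advice would buy: if each object carried a
(purely hypothetical) size hint, a browser could order its pipelined
requests smallest-first. Nothing below exists in HTML or HTTP today:

# Order pipelined requests by a hypothetical size hint so the page
# fills in quickly instead of stalling behind one large transfer.
objects = [
    {"url": "/banner.png", "size_hint": 90_000},
    {"url": "/style.css",  "size_hint": 2_000},
    {"url": "/logo.gif",   "size_hint": 1_200},
    {"url": "/photo.jpg",  "size_hint": 250_000},
]
for obj in sorted(objects, key=lambda o: o["size_hint"]):
    print("GET", obj["url"])  # smallest objects requested first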


HTTP has supported sessions that stay alive so they can be used for more
than just one operation for many years now. The trouble is that you get
head-of-line blocking if you fetch all the elements that make up a WWW
page in serial fashion. Being able to multiplex different streams (within
an SCTP session) that all fetch elements simultaneously would fix this.
This of course requires changes to both HTTP clients and servers, but it
should otherwise be transparent to the user.
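
The idea in miniature: give each page element its own SCTP stream, so a
loss on one stream stalls only that element. Python's standard library
has no SCTP support, so sctp_send below is a stand-in for whatever API a
real stack provides:

elements = ["/index.html", "/style.css", "/logo.gif", "/photo.jpg"]

def sctp_send(stream_id, data):
    # Placeholder only - prints instead of doing real SCTP I/O.
    print(f"stream {stream_id}: {data!r}")

# One request per stream; the association delivers each stream's data
# independently, so there is no head-of-line blocking across elements,
# only within a single stream.
for stream_id, url in enumerate(elements):
    sctp_send(stream_id, f"GET {url} HTTP/1.1\r\n\r\n")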


As above - fetching all elements simultaneously makes perfect sense to
transport guys, but can give pretty flaky-looking pages during
retrieval. If you think HTTP/1.1 with two connections makes sense, you
might start retrieving the smallest elements on one connection and the
largest on the other, and meet in the middle, but since the browser
has no idea how to do this, it's pretty difficult to resist the
temptation to wait for a connection to become idle before sending
another request.
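
A sketch of that smallest-on-one, largest-on-the-other split, assuming
the browser somehow knew the sizes (which, per the above, it doesn't):

def split_between_two_connections(objects):
    # objects: list of (url, size) pairs; sizes assumed known.
    by_size = sorted(objects, key=lambda o: o[1])
    lo, hi = 0, len(by_size) - 1
    conn_a, conn_b = [], []
    while lo <= hi:
        conn_a.append(by_size[lo])      # smallest remaining
        lo += 1
        if lo <= hi:
            conn_b.append(by_size[hi])  # largest remaining
            hi -= 1
    return conn_a, conn_b

pages = [("/a.png", 90_000), ("/b.css", 2_000),
         ("/c.jpg", 250_000), ("/d.gif", 1_200)]
conn_a, conn_b = split_between_two_connections(pages)
print(conn_a)  # [('/d.gif', 1200), ('/b.css', 2000)]
print(conn_b)  # [('/c.jpg', 250000), ('/a.png', 90000)]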

And this is why HTTP over cellular links has horrible throughput - if
you stop-and-wait request 300-byte objects with one-second RTTs, your
throughput looks a LOT like 300 bytes per second...
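
The arithmetic, spelled out (numbers from the example above):

object_size = 300  # bytes per object
rtt = 1.0          # seconds per stop-and-wait round trip
print(object_size / rtt, "bytes/sec")  # 300.0, whatever the link rate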





