RE: Stupid NAT tricks and how to stop them.

Christian,

What you wrote is doubly incorrect.
First, you missed the context:

>> Noel Chiappa wrote:
>> Needless to say, the real-time taken for this process to complete
>> - i.e. for routes to a particular destination to stabilize, after a
>> topology change which affects some subset of them - is dominated by
>> the speed-of-light transmission delays across the Internet fabric.
>> You can make the speed of your processors infinite and it won't make
>> much of a difference.

> Christian Huitema wrote:
> Since events imply some overhead in processing, message passing,
> etc, one can assume that at any given point in time there is a
> limit to what a router can swallow.

This is indeed true, but a) this limit has everything to do with
processing power and available bandwidth and nothing to do with the
speed of light, and b) the context was assuming infinite processing
power anyway.
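To put a rough number on that speed-of-light floor, a quick
back-of-the-envelope helps; the 20,000 km path and the fiber velocity
factor below are illustrative assumptions, not measurements:

    # Back-of-the-envelope on the speed-of-light floor: even with
    # infinite CPU, a routing update still needs real time to cross
    # the fabric.
    C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
    FIBER_FACTOR = 0.68          # light in fiber travels at ~2/3 of c
    PATH_KM = 20_000             # a long intercontinental path (assumed)

    one_way_ms = PATH_KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
    print(f"one-way propagation: {one_way_ms:.0f} ms")   # ~98 ms, CPU-independent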


> Bottom line, you can only increase the number of routes
> if you are ready to dampen more aggressively.

There is no close relation. Dampening affects routes that flap. If the
new routes don't flap, all that is required is more memory to hold them
and slightly more CPU to perform lookups; not much more, because the
relation between lookup time and table size is logarithmic. Read below
for handling the routes that do flap, because some indeed will.
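As a toy illustration of that logarithmic relation (this is binary
search over exact keys; real routers do longest-prefix match in radix
tries or TCAMs, so treat it as a sketch of the scaling, not of a FIB):

    import bisect
    import random
    import time

    # A 100x larger table should cost only a handful of extra
    # comparisons per lookup, since binary search is O(log N).
    for n in (10_000, 100_000, 1_000_000):
        table = sorted(random.sample(range(2**32), n))    # fake /32 routes
        probes = [random.randrange(2**32) for _ in range(100_000)]
        t0 = time.perf_counter()
        for dst in probes:
            bisect.bisect_right(table, dst)               # locate covering slot
        dt = time.perf_counter() - t0
        print(f"{n:>9} routes: {dt / len(probes) * 1e9:6.0f} ns/lookup")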


> There is an obvious "tragedy of the commons" here: if more networks
> want to "multi-home" and be declared in the core, then more aggressive
> dampening will be required, and each of the "multi-homed" networks will
> suffer from less precise routing, longer time to correct outages, etc.

Again, I don't see the relation here. Assuming that the newer prefixes
in the core flap about as much as the current ones do, what is required
to handle more of them is to increase computing power and bandwidth in
order to keep what a router has to swallow under the limit past which
it takes a hike.
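For the routes that do flap, the mechanism in question is RFC 2439-style
flap dampening: each flap adds a penalty, the penalty decays
exponentially with a configured half-life, and the route is suppressed
while the penalty sits above a threshold. A minimal sketch, with
illustrative numbers rather than anyone's recommended configuration:

    import math

    PENALTY_PER_FLAP = 1000.0   # added on each flap (assumed value)
    SUPPRESS_LIMIT   = 2000.0   # suppress above this penalty
    REUSE_LIMIT      = 750.0    # re-announce below this penalty
    HALF_LIFE_S      = 900.0    # penalty halves every 15 minutes

    class DampenedRoute:
        def __init__(self):
            self.penalty = 0.0
            self.last_update = 0.0
            self.suppressed = False

        def _decay(self, now):
            # Exponential decay since the last event.
            dt = now - self.last_update
            self.penalty *= math.exp(-dt * math.log(2) / HALF_LIFE_S)
            self.last_update = now

        def flap(self, now):
            self._decay(now)
            self.penalty += PENALTY_PER_FLAP
            if self.penalty > SUPPRESS_LIMIT:
                self.suppressed = True

        def usable(self, now):
            self._decay(now)
            if self.suppressed and self.penalty < REUSE_LIMIT:
                self.suppressed = False
            return not self.suppressed

    r = DampenedRoute()
    for t in (0, 60, 120):           # three flaps in two minutes
        r.flap(t)
    print(r.usable(130))             # False: suppressed
    print(r.usable(130 + 3600))      # True: four half-lives of decay

"Dampening more aggressively" just means raising the penalty or
stretching the half-life; it buys stability at the price of keeping a
recovered route suppressed longer.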

> There are different elements at play that also limit the number of
> core routers. Basically, an event in a core router affects all the
> paths that go through it, which depending on the structure of the graph
> is somewhere between O(M*log(M)) and O(M^2). In short, the routing
> load grows much faster than linearly with the number of core routers.

I agree; the relation between processing power requirements and the
number of prefixes grows faster than linearly.
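Taking Christian's bounds at face value, here is the rough load
multiplier when the table grows from 150k to 1M prefixes (both sizes
picked purely for illustration):

    import math

    m0, m1 = 150_000, 1_000_000
    print("linear   :", round(m1 / m0, 1))                                    # ~6.7x
    print("M*log(M) :", round((m1 * math.log(m1)) / (m0 * math.log(m0)), 1))  # ~7.7x
    print("M^2      :", round(m1 ** 2 / m0 ** 2, 1))                          # ~44.4x

But back to the real world: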

Years ago there was a frantic forklift-upgrade business of grabbing the
biggest, baddest BFR from vendors before the paint was even dry, and
this happened because we were indeed starving for more CPU and more
memory.

This does not happen today. As Stephen points out, even the little guys
aren't complaining anymore, and vendors don't even put the latest
technology they have into their products because nobody is screaming
for it.

In short: the IPv6 idea of reducing the size of the routing table would
have been necessary had IPv6 been deployed and replaced v4 five years
ago. We have missed that launch window, and as of today this problem
has been solved by time; I hear that we could handle a million prefixes
with today's technology.

If it takes a THz processor to handle 10 million prefixes and a 100 THz
one to handle 100 million prefixes, I don't care, as long as said
processors are on the shelf at Fry's for $200 apiece and on a vendor's
Sup for $100K apiece.

Michel.


_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf

