> Dampening is part of the protocol and has nothing to do with the speed
> of light.

Well, not really. Assume a simplistic model of the Internet with M "core" routers (in the default-free zone) and N "leaf" AS, i.e. networks that have their own non-aggregated prefix. Now, assume that each of the leaf AS has a "routing event" with a basic frequency, F. Without dampening, each core router would see each of these events with that same frequency, F. Each router would thus see O(N*F) events per second. Since events imply some overhead in processing, message passing, etc., one can assume that at any given point in time there is a limit to what a router can swallow. If either N or F is too large, the router is cooked. Hence dampening at a rate D, so that N*F/D remains lower than the acceptable limit. Bottom line: you can only increase the number of routes if you are ready to dampen more aggressively.

There is an obvious "tragedy of the commons" here: if more networks want to "multi-home" and be declared in the core, then more aggressive dampening will be required, and each of the "multi-homed" networks will suffer from less precise routing, longer time to correct outages, etc.

There are different elements at play that also limit the number of core routers. Basically, an event in a core router affects all the paths that go through it, which, depending on the structure of the graph, is on the order of O(M*log(M)). In short, the routing load grows much faster than linearly with the number of core routers.

-- Christian Huitema

_______________________________________________
Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf
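
The N*F/D arithmetic above can be sketched in a few lines. This is only an illustration of the scaling argument, with hypothetical figures (prefix counts, event rate, router budget) chosen for the example, not measured values:

```python
# Sketch of the scaling argument in the post: N leaf networks, each
# generating routing events at frequency F (events/s), give each core
# router a raw load of N*F events/s; dampening by a factor D reduces
# the effective load to N*F/D.

def required_dampening(n_leaves: int, event_rate: float, router_budget: float) -> float:
    """Smallest dampening factor D such that n_leaves * event_rate / D
    stays within the router's processing budget (events per second)."""
    raw_load = n_leaves * event_rate
    # D = 1.0 means no dampening is needed.
    return max(1.0, raw_load / router_budget)

# Hypothetical figures: multi-homed prefixes generating one event per
# 1000 s each, and a router able to absorb 50 events/s.
for n in (100_000, 200_000, 400_000):
    d = required_dampening(n, event_rate=1e-3, router_budget=50.0)
    print(f"N={n:>7}: need D >= {d:.1f}")
```

Doubling the number of multi-homed leaf networks doubles the dampening factor required, which is the "tragedy of the commons" in the post: every additional multi-homed network forces more aggressive dampening on all of them.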