RE: [narten@xxxxxxxxxx: PI addressing in IPv6 advances in ARIN]

Brian E Carpenter wrote:
> ... 
> Scott Leibrand wrote:
> ..
>  > I agree, especially in the near term.  Aggregation is not required
> right
>  > now, but having the *ability* to aggregate later on is a prudent risk
>  > reduction strategy if today's cost to do so is minimal (as I think it
> is).
> 
> I think that's an understatement until we find an alternative to
> BGP aggregation. That's why my challenge to Iljistsch was to simulate
> 10B nodes and 100M sites - if we can't converge a reasonable sized
> table for that network, we *know* we have a big problem in our
> future. Not a risk - a certainty.
> 

The problem with your challenge is the lack of a defined topology. The
reality is that there is no consistent topology to model, so the ability to
construct a meaningful routing simulation is limited at best.

The other point is that the protocol is irrelevant. Whatever we do, the
architectural problem is finding an aggregation strategy that fits a routing
system we know how to build in hardware, at a price point that is
economically deployable.

As far as I am concerned, BGP is not the limitation. The problem is the
ego-driven myth of a single DFZ where all of the gory details have to be
exposed globally. If we abolish that myth and look at the problem, we are
left with an answer where BGP passing regional aggregates is sufficient. Yes,
there will be exception routes that individual ISPs carry, but that is their
choice, not a protocol requirement. Complaining that regional aggregates are
sub-optimal is crying wolf when everyone knows the ISPs will eventually lose
to the paying customer demanding non-PA space. The outcries about doom and
gloom with PI are really about random assignments, which would be even less
optimal.
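To put a rough number on that, here is a minimal sketch using Python's
ipaddress module (the prefixes and the choice of which exceptions an ISP
carries are entirely hypothetical) of what a regional aggregate plus locally
chosen exception routes looks like in table-size terms:

    import ipaddress

    # Hypothetical regional block from which PI assignments are made,
    # plus a handful of the /48 assignments carved out of it.
    regional_aggregate = ipaddress.ip_network("2001:db8::/32")
    pi_assignments = [
        ipaddress.ip_network("2001:db8:1::/48"),
        ipaddress.ip_network("2001:db8:2::/48"),
        ipaddress.ip_network("2001:db8:3::/48"),
    ]

    # An ISP outside the region only needs the covering aggregate ...
    default_free_view = [regional_aggregate]

    # ... and may, by local choice, also carry a few exception routes
    # (e.g., for customers it wants finer-grained paths toward).
    exceptions_carried_locally = [pi_assignments[1]]
    local_view = default_free_view + exceptions_carried_locally

    print(len(pi_assignments), "assignments collapse to",
          len(local_view), "routes in this ISP's table")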

The fundamental question needs to be whether there is an approach to address
allocation that can be made to scale under -any- known business model, not
just the one in current practice. It is not the IETF's job to define business
models; rather, it is to define the technology approaches that might be used
and see if the market picks up on them. Unfortunately, over the last few
years the process has evolved to exclude discussions that don't fit the
current business models, despite the continuing arguments about how those
models are financial failures and need to change. The point that Scott was
making is that there are proposals for non-random assignments which could be
carried as explicits now and aggregated later (a rough sketch follows below).
What we lack is a forum to evaluate the trade-offs.
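A rough illustration of that "explicits now, aggregate later" idea, under
the assumption that assignments are drawn contiguously from a common block
(all prefixes hypothetical): structured explicits can be announced
individually today and summarized into a single covering route whenever the
table pressure warrants it, which random assignments cannot be.

    import ipaddress

    # Hypothetical structured (non-random) PI assignments: each new site
    # gets the next /48 out of a common regional block, not a random prefix.
    structured_explicits = [
        ipaddress.ip_network(f"2001:db8:{i:x}::/48") for i in range(16)
    ]

    # Today: carried as explicit more-specifics (16 table entries).
    print("explicits:", len(structured_explicits))

    # Later: because the assignments are contiguous, they collapse into one
    # covering aggregate; randomly assigned prefixes would not.
    aggregated = list(ipaddress.collapse_addresses(structured_explicits))
    print("aggregated:", [str(n) for n in aggregated])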

Tony 



_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf
