Re: Why do we need to go with 128 bits address space ?


> IGP metric is used as route preference,  

And that's the issue. Neither IGP metrics nor BGP path attributes today reflect the reality of actual network paths. They assure you that you can reach the destination ... without any assurance that you will get the lowest jitter, lowest RTT, lowest loss, etc.

When I look at my BGP table in NY, I get a path to Europe via Seattle just because that path has one AS fewer in the AS-PATH, ignoring the fact that there is an alternative path right next to it with one AS more but a shorter RTT of 150 ms.
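To illustrate the mismatch (all ASes, cities, and RTT figures below are invented for the example): BGP's decision process compares AS-PATH length, never measured latency, so the path that is "shorter" in routing terms can be the slower one on the wire.

```python
# Hypothetical paths to the same European prefix, as seen from NY.
paths = [
    {"via": "Seattle",   "as_path": ["AS100", "AS200"],           "rtt_ms": 230},
    {"via": "NY-direct", "as_path": ["AS300", "AS400", "AS500"],  "rtt_ms": 80},
]

# BGP-style preference: with other attributes equal, fewer ASes wins.
best_by_bgp = min(paths, key=lambda p: len(p["as_path"]))

# What a latency-aware choice would pick instead.
best_by_rtt = min(paths, key=lambda p: p["rtt_ms"])

print(best_by_bgp["via"])  # Seattle, despite the much higher RTT
print(best_by_rtt["via"])  # NY-direct
```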

The point is that the sweet spot for making an educated choice is the edge of your network, where your upstream ASes are attached.

Sure, having addresses from the PA space of your various upstreams on the hosts, plus applying src+dst-based routing in your domain, makes it possible to attempt to choose which way to go at the hosts themselves. But one needs to ask oneself:

*A* Is it the right architecture to run identical active & passive probing from 1000s of my servers towards the destinations customers may be coming from, if all the traffic needs to traverse a few ASBRs anyway?

*B* Now assume you still think *A* is the way to go: are we going to assign multiple interfaces to each VM and each Docker container too, such that those autonomous computing entities will again start to select their own view of the optimal exit path?
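To make *A* concrete, here is a toy sketch (every name, prefix, and number below is invented): each host holds an address out of each upstream's PA block, probes each exit on its own, and picks a source address accordingly. Multiplied across thousands of servers, VMs, and containers, each one repeats a measurement that the few ASBRs the traffic traverses could have done once.

```python
# Hypothetical PA blocks, one per upstream of the domain.
upstream_prefixes = {
    "ISP-A": "2001:db8:a::/48",
    "ISP-B": "2001:db8:b::/48",
}

# Per-host probe results towards one destination (invented numbers).
probe_rtt_ms = {"ISP-A": 150, "ISP-B": 95}

# The host picks the source prefix whose exit probed best; src+dst-based
# routing in the domain then carries the flow out via the matching upstream.
best_exit = min(probe_rtt_ms, key=probe_rtt_ms.get)
print(best_exit, upstream_prefixes[best_exit])
```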

Many thx,
R. 

PS. 

As an additional reference, take a look at Google's Espresso, Facebook's Edge Fabric, or Cisco's PfR. Those are the solutions I think make the most sense when it comes to assuring that customers get the best experience for services from a given network. And I don't think any of those works in concert with the end-to-end multihoming principle.


On Wed, Aug 21, 2019 at 9:56 PM Masataka Ohta <mohta@xxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Robert Raszuk wrote:

> *"Instead, APIs and applications must be modified to detect and react
> against the loss of connection."*
>
> Well it is clear that you are making an implicit assumption that quality of
> the available paths is equal and all you need to care about is end to end
> connectivity/reachability.

No, as is written in draft-ohta-e2e-multihoming-03.txt:

    Once a full routing table is available on all the end systems, it is
    easy for the end systems try all the destination addresses, from the
    most and to the least favorable ones, based on the routing metric.

    Note that end to end multihoming works with the separation between
    inter domain BGP and intra domain routing protocols, if BGP routers,
    based on domain policy, assign external routes preference values
    (metric) of intra domain routing protocols.

    One may still be allowed, though discouraged, to have local
    configuration with dumb end systems and an intelligent proxy. But,
    such configuration should be implemented with a protocol for purely
    local use without damaging the global protocol.

IGP metric is used as the route preference, though some workaround such as a proxy (last paragraph) or holding a partial routing table at nearby ISPs (not mentioned in the draft) may be necessary, until the global routing table becomes small enough to be held by ordinary hosts.

Note that at the time the draft was written, the IPv6 global routing
table was small, which means that, at that time, IPv6 was worth
deploying despite all the flaws in it.

                                        Masataka Ohta
