RE: [nvo3] Last Call: <draft-ietf-nvo3-framework-06.txt> (Framework for DC Network Virtualization) to Informational RFC


 



Ken,

 

If you prefer to call it “inter-subnet and/or inter-domain routing” instead of “gateway”, that is fine with me.

 

IMHO, the NVO3 framework draft should cover this key function of data center networking.

 

The illustrated example only shows the normal cross-subnet/domain traffic pattern, to explain why it is necessary for NVO3 to address inter-subnet/domain issues.

The implications of inter-subnet/domain routing:

-          If an NVE performs inter-subnet/domain routing, it needs the corresponding policies. Some NVEs may have the policies for all VNs in the DC, some may have only a portion of them, and some may have none.

-          NVEs adjacent to the “DC Gateway” in Figure 1 of the framework draft have to maintain the mappings for all hosts/VMs in the DC. For large DCs with tens of thousands (or hundreds of thousands) of VMs, maintaining those mappings can be very resource-intensive.
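A back-of-the-envelope sketch of the point above: an NVE next to the DC gateway holding one mapping entry per host/VM. The per-entry size here is an illustrative assumption, not a figure from the draft.

```python
# Rough sketch of the mapping burden on NVEs adjacent to the DC gateway.
# ENTRY_BYTES is an assumed size for one inner-address -> egress-NVE entry.
ENTRY_BYTES = 64

def mapping_table_bytes(num_vms, entry_bytes=ENTRY_BYTES):
    """Total mapping state for a table covering every host/VM in the DC."""
    return num_vms * entry_bytes

small_dc = mapping_table_bytes(10_000)    # tens of thousands of VMs
large_dc = mapping_table_bytes(500_000)   # hundreds of thousands of VMs
```

Even under these toy assumptions the large-DC table is dozens of megabytes, and the harder cost (not modeled here) is the churn of keeping every entry current as VMs move.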

 

 

Hope the authors can also address those technical issues I brought up:

 

-       Section 2.3.2. L3 NVE Providing IP/VRF-like forwarding

RFC 4364 is about IP over an MPLS network, i.e. the underlay is MPLS. But NVO3’s underlay is IP.

 

-       Page 13, second bullet: the text says that each VNI is automatically generated by the egress NVE. Isn’t each VNI supposed to match a local virtual network (e.g. one represented by a VLAN)? When the same NVE acts as the ingress NVE for the attached VMs, isn’t the VNI for the ingress direction statically provisioned? Why is automatic egress VNI generation needed?

 

-       Section 3.2 Multi-homing: LAG is usually used to bundle multiple ports on one device, but in a multi-homing environment there are multiple NVEs. Besides, LAG and STP can’t really prevent loops in a multi-homing environment. You would need something like TRILL’s Appointed Forwarders to prevent loops there.

 

 

 

Linda

 

From: Ken Gray (kegray) [mailto:kegray@xxxxxxxxx]
Sent: Friday, May 23, 2014 7:31 AM
To: Linda Dunbar
Cc: ietf@xxxxxxxx; IETF-Announce; nvo3@xxxxxxxx
Subject: Re: [nvo3] Last Call: <draft-ietf-nvo3-framework-06.txt> (Framework for DC Network Virtualization) to Informational RFC

 

Linda, 

 

Ultimately, your "gateway" function is manifested as a forwarding entry.  You're describing the mechanics of how that entry is derived.  While there are aspects of map distribution that might affect those mechanics - aspects that are specific elements of nvo3 - I don't think describing them at this point in the doc was the authors' purpose (or necessary).

 

It appears to me that your comment is much simpler - that we have omitted (the obvious) "you probably need to route between subnets and/or domains". I suggest that as the more generic text if you think their example is insufficient.

 

Why illustrate A method of doing so? Why jump down the rabbit hole of "all possible ways this could manifest"? None of that can be codified by this document.

 


Sent from my iPhone


On May 22, 2014, at 4:19 PM, "Linda Dunbar" <linda.dunbar@xxxxxxxxxx> wrote:

Ken,

 

Comments are inserted below:

 

From: Ken Gray (kegray) [mailto:kegray@xxxxxxxxx]
Sent: Thursday, May 22, 2014 2:58 PM
To: Linda Dunbar; ietf@xxxxxxxx; IETF-Announce
Cc: nvo3@xxxxxxxx
Subject: Re: [nvo3] Last Call: <draft-ietf-nvo3-framework-06.txt> (Framework for DC Network Virtualization) to Informational RFC

 

Personally, I think encap/decap manipulation is the essence of the "gateway" or inter-virtual-network communication.  To me, technically, all the NVEs are gateways of a sort.  You're just specifying that the permutations are variant (the network isn't homogeneous in its encap)?

 

[Linda] The encap/decap only sends packets within one VN. For example, when a host “a” in subnet “A” sends a packet to host “b” in subnet “B”, the destination MAC address used by “a” is actually the gateway MAC address for subnet “A”. So the ingress NVE is the NVE to which “a” is attached, and the egress NVE is the one to which the “A” gateway is attached (possibly collocated). The gateway terminates the MAC header from host “a”, relays the packet to the “B” VN (i.e. subnet), adds a new MAC header (with DA = “b”’s MAC, SA = gateway MAC, and the associated VLAN), and sends out the newly constructed Ethernet packet. The NVE to which the gateway is attached (or with which it is collocated) then has to resolve the egress NVE to which “b” is attached, encapsulate the packet, and send it to “b”’s NVE.
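The relay just described can be sketched as pseudocode. Everything here - the table names, the dict-based frame representation, the addresses - is an illustrative assumption for explanation only, not anything defined by the framework draft.

```python
# Sketch of the gateway relay from subnet A's VN into subnet B's VN.
# All tables, names, and the frame layout are hypothetical.

ARP_TABLE = {"10.2.0.5": "mac-b"}     # host b's IP -> host b's MAC
VM_TO_NVE = {"mac-b": "nve-2"}        # inner MAC -> egress NVE in the underlay
GATEWAY_MAC = "mac-gw"

def gateway_relay(frame):
    """Relay a frame that host "a" addressed to its subnet-A gateway."""
    # Host "a" set the destination MAC to the gateway's MAC.
    assert frame["dst_mac"] == GATEWAY_MAC

    # Terminate the old MAC header and build a new one for subnet B.
    new_dst_mac = ARP_TABLE[frame["ip_dst"]]
    relayed = {
        "dst_mac": new_dst_mac,        # DA = b's MAC
        "src_mac": GATEWAY_MAC,        # SA = gateway MAC
        "vlan": "vlan-B",              # subnet B's local VN tag
        "ip_dst": frame["ip_dst"],
        "payload": frame["payload"],
    }

    # The NVE co-located with the gateway resolves the egress NVE for
    # host "b" and encapsulates toward it over the IP underlay.
    egress_nve = VM_TO_NVE[new_dst_mac]
    return {"outer_dst": egress_nve, "vni": "vni-B", "inner": relayed}

pkt = gateway_relay({"dst_mac": "mac-gw", "ip_dst": "10.2.0.5",
                     "payload": b"hello"})
```

The point of the sketch is that the relay is more than encap/decap: it rewrites the inner MAC header and consults per-subnet state before the normal NVE encapsulation happens.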

 

Some NVEs can support the gateway function, i.e. the IRB function described in Section 2.2 of

http://datatracker.ietf.org/doc/draft-yong-nvo3-frwk-dpreq-addition/.

 

 

 

More to the point, your drawing seems to imply that some sort of additional functionality other than a header fix-up in the NVE is required (Gateway function above the NVE).  While what is depicted  CAN be an implementation of a gateway, logically … some control function (not necessarily local on that device …orchestration, controller, operator) could dictate/impose such a translation/transcription onto NVE at different points in the infrastructure and achieve the same result/effect.  

 

[Linda] The “gateway” can be embedded in some NVEs, but many NVEs can’t support gateway functions. To support the gateway function, an NVE has to have the inter-subnet policies, or access to the firewalls.

For example, subnet A can send packets to subnet B, but A can’t send to C. If NVEs are on servers, they may not have the policies needed to relay traffic between two different subnets. The data packets may then need to be sent to designated gateways. The framework draft should address those issues.
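The A-can-reach-B-but-not-C example can be sketched as a tiny policy check. The policy table and the three outcomes are hypothetical, chosen only to illustrate the argument that an NVE without policy must punt to a designated gateway.

```python
# Hypothetical inter-subnet policy check at an NVE.
# Subnet A may send to B, but not to C (per the example above).
POLICY = {("A", "B"): True, ("A", "C"): False}

def forward_decision(src_subnet, dst_subnet, nve_has_policy):
    """Decide whether this NVE may relay inter-subnet traffic itself."""
    if not nve_has_policy:
        # A server-based NVE without the policy cannot decide;
        # it has to hand the packet to a designated gateway.
        return "send-to-designated-gateway"
    if POLICY.get((src_subnet, dst_subnet), False):
        return "relay-locally"
    return "drop"
```

The sketch shows why policy placement matters: the forwarding outcome for the very same packet differs depending on whether the NVE holds the policy at all.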

 

So, if it's a generic drawing, we don't need specific examples (IMO) unless we want to rathole on implementations to avoid inferring one is preferred.  If you feel a need for a generic conceptual representation, then I'd advise tagging it as such and not associating any implementation details (which is what I think your yong doc pointer does).

 

[Linda] IMHO, the generic drawing should have components that handle inter-VN communication, instead of being a simple clone of L2VPN.

 

Linda

 

 

From: Linda Dunbar <linda.dunbar@xxxxxxxxxx>
Date: Thursday, May 22, 2014 2:35 PM
To: "ietf@xxxxxxxx" <ietf@xxxxxxxx>, IETF-Announce <ietf-announce@xxxxxxxx>
Cc: "nvo3@xxxxxxxx" <nvo3@xxxxxxxx>
Subject: Re: [nvo3] Last Call: <draft-ietf-nvo3-framework-06.txt> (Framework for DC Network Virtualization) to Informational RFC

 

I have sent comments several times within the NVO3 WG, but my comments haven't been properly addressed.

 

So, I am sending them again; hopefully those comments can be addressed during the IETF last call.

 

NVO3 is about virtualization for data centers (as stated in its charter). Inter-subnet communication (or inter-virtual-network communication) is a big part (if not the major part) of data center traffic. Hosts in one subnet frequently communicate with hosts in different subnets or with peers external to the DC.

 

Yet the current framework draft focuses so much on encapsulating/decapsulating TS traffic that it makes NVO3 look a lot like an L2VPN or L3VPN MPLS network. Since the IETF already has dedicated WGs for L2VPN and L3VPN, the NVO3 WG should focus more on how inter-subnet communication is achieved in the overlay environment.

 

Here are the suggested changes:

 

-       The current Figure 2 (Generic Reference Model of NVO3) is like a clone of L2VPN. The NVO3 reference model needs to add a gateway entity, as shown below, for relaying traffic from one VN to another.

                              _,....._
                           ,-'        `-.
                          /   External  `.
                         |     Network   |
                         `.             /
                           `.__     _,-'
                               `''''
                                  |
                             +---------+
                             | Gateway |
                             +----+----+
                             +----+----+
                             |   NVE   |
                             +-----+---+
       +--------+                  |                          +--------+
       | Tenant +--+               |                     +----| Tenant |
       | System |  |               |                    (')   | System |
       +--------+  |          ................         (   )  +--------+
                   |  +-+--+  .              .  +--+-+  (_)
                   |  | NVE|--.              .--| NVE|   |
                   +--|    |  .              .  |    |---+
                      +-+--+  .              .  +--+-+
                      /       .              .

 

 

-       There are good inter-virtual-network descriptions in http://datatracker.ietf.org/doc/draft-yong-nvo3-frwk-dpreq-addition/. The content from that draft should be included in the general framework, especially:

 

2.2. L2-3 NVE Providing IP Routing/Bridging-like Service (Framework Addition)

An L2-3 NVE is similar to the IRB function on a router [CIRB] today. It allows the TSes attached to the NVE (locally or remotely) to communicate with each other when they are in the same route domain, i.e. a tenant virtual network. The NVE provides a per-tenant virtual switching and routing instance with address isolation and L3 tunnel encapsulation across the core. The L2-3 NVE supports bridging among TSes that are on the same subnet and routing among TSes that are on different subnets.
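The bridge-vs-route decision an IRB-like L2-3 NVE makes can be sketched per tenant as follows. The subnet names and prefixes are illustrative assumptions, not taken from either draft.

```python
import ipaddress

# Hypothetical per-tenant subnet table held by an L2-3 (IRB-like) NVE.
TENANT_SUBNETS = {
    "subnet-A": ipaddress.ip_network("10.1.0.0/24"),
    "subnet-B": ipaddress.ip_network("10.2.0.0/24"),
}

def subnet_of(ip):
    """Return the tenant subnet containing ip, or None if outside the tenant."""
    addr = ipaddress.ip_address(ip)
    for name, net in TENANT_SUBNETS.items():
        if addr in net:
            return name
    return None

def irb_forward(src_ip, dst_ip):
    """Bridge within a subnet, route between subnets of the same tenant."""
    src, dst = subnet_of(src_ip), subnet_of(dst_ip)
    if dst is None:
        return "route-to-external-gateway"   # destination outside the tenant
    return "bridge" if src == dst else "route"
```

The design point the sketch captures is that the same NVE instance holds both L2 and L3 state for the tenant, so same-subnet and cross-subnet traffic diverge only at this decision.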

 

 

 

-       Section 2.3.1. L2 NVE Providing Ethernet LAN-Like services:

A paragraph needs to be added to state that a great amount of traffic in the DC crosses VNs (or subnets), and to describe how inter-subnet traffic is forwarded, e.g. relayed at an L2/L3 gateway.

 

-       Section 2.3.2. L3 NVE Providing IP/VRF-like forwarding

RFC 4364 is about IP over an MPLS network, i.e. the underlay is MPLS. But NVO3’s underlay is IP.

 

-       Page 13, second bullet: the text says that each VNI is automatically generated by the egress NVE. Isn’t each VNI supposed to match a local virtual network (e.g. one represented by a VLAN)? When the same NVE acts as the ingress NVE for the attached VMs, isn’t the VNI for the ingress direction statically provisioned? Why is automatic egress VNI generation needed?

 

-       Section 3.2 Multi-homing: LAG is usually used to bundle multiple ports on one device, but in a multi-homing environment there are multiple NVEs. Besides, LAG and STP can’t really prevent loops in a multi-homing environment. You would need something like TRILL’s Appointed Forwarders to prevent loops there.
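To make the Appointed Forwarders point concrete, here is a sketch in the spirit of TRILL's per-VLAN appointment (RFC 6439): for each VLAN on a multi-homed segment exactly one NVE forwards, so the others cannot create a loop. The round-robin election rule here is an illustrative assumption, not TRILL's actual mechanism.

```python
# Sketch of appointed-forwarder style loop prevention for a segment
# served by multiple NVEs. Election rule is hypothetical.

def appoint_forwarders(nves, vlans):
    """Return {vlan: appointed NVE}, spreading VLANs round-robin."""
    ordered = sorted(nves)
    return {vlan: ordered[i % len(ordered)]
            for i, vlan in enumerate(sorted(vlans))}

def may_forward(nve, vlan, appointments):
    """Only the appointed forwarder for this VLAN forwards on the segment."""
    return appointments.get(vlan) == nve

appointments = appoint_forwarders(["nve-2", "nve-1"], [10, 20])
```

With one forwarder per VLAN, a frame flooded onto the segment can never be picked up and re-forwarded by a second NVE, which is the loop LAG/STP alone does not prevent in this topology.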

 

 

Linda

-----Original Message-----
From: nvo3 [mailto:nvo3-bounces@xxxxxxxx] On Behalf Of The IESG
Sent: Wednesday, May 21, 2014 9:33 AM
To: IETF-Announce
Cc: nvo3@xxxxxxxx
Subject: [nvo3] Last Call: <draft-ietf-nvo3-framework-06.txt> (Framework for DC Network Virtualization) to Informational RFC

 

 

The IESG has received a request from the Network Virtualization Overlays WG (nvo3) to consider the following document:

- 'Framework for DC Network Virtualization'

  <draft-ietf-nvo3-framework-06.txt> as Informational RFC

 

The IESG plans to make a decision in the next few weeks, and solicits final comments on this action. Please send substantive comments to the ietf@xxxxxxxx mailing lists by 2014-06-04. Exceptionally, comments may be sent to iesg@xxxxxxxx instead. In either case, please retain the beginning of the Subject line to allow automated sorting.

 

Abstract

 

 

       This document provides a framework for Network Virtualization

       Overlays (NVO3) and it defines a reference model along with logical

       components required to design a solution.

 

 

 

 

The file can be obtained via

 

IESG discussion can be tracked via

 

 

No IPR declarations have been submitted directly on this I-D.

 

 

_______________________________________________

nvo3 mailing list

 

