Key difference between DCVPN and L2VPN/L3VPN


 



I think the charter should include text that identifies the key differences between a DCVPN and L2VPN/L3VPN. If the key differences are not properly described, every protocol developed for L2VPN and L3VPN will pop up as a potential solution.

Here are some key differences I see:
- The majority of VMs (or end systems) communicate with external peers, even though the traffic volume may not be large. Most external peers reach their VMs (or end systems) in data centers over the Internet or an IP network until the packets arrive at the data center gateway.

- The majority of VMs are end systems, whereas the nodes in an L2VPN/L3VPN are not.


 
Linda Dunbar

> -----Original Message-----
> From: ietf-bounces@xxxxxxxx [mailto:ietf-bounces@xxxxxxxx] On Behalf Of
> Stewart Bryant
> Sent: Monday, April 23, 2012 10:58 AM
> To: nvo3@xxxxxxxx
> Cc: iesg@xxxxxxxx; IETF Discussion
> Subject: Re: [nvo3] WG Review: Network Virtualization Overlays (nvo3) -
> 23-Apr-2012 update
> 
> Based on the list discussion, I have updated the draft NVO3
> charter to take into consideration the  feedback received
> so far.
> 
> - Stewart
> 
> 
> NVO3: Network Virtualization Over Layer 3
> 
> Chairs - TBD
> Area - Routing
> Area Director - Stewart Bryant
> INT Area Adviser - TBD
> OPS Area Adviser - TBD
> 
> Support for multi-tenancy has become a core requirement of data centers
> (DCs), especially in the context of data centers supporting virtualized
> hosts known as virtual machines (VMs). Two key requirements needed
> to support multi-tenancy are traffic isolation, so that a tenant's
> traffic is not visible to any other tenant, and address independence,
> so that one tenant's addressing does not collide with other tenants'
> addressing schemes or with addresses used within the data center itself.
> Another key requirement is to support the placement and migration of
> VMs anywhere within the data center, without being limited by DC
> network constraints such as the IP subnet boundaries of the
> underlying DC network.
> 
> An NVO3 solution (known here as a Data Center Virtual Private
> Network (DCVPN)) is a VPN that is viable across a scaling range of
> a few thousand VMs to several million VMs running on greater
> than 100K physical servers. It thus has good scaling properties
> from relatively small networks to networks with several million
> DCVPN endpoints and hundreds of thousands of DCVPNs within a
> single administrative domain.
> 
> Note that although this charter uses the term VM throughout, NVO3 must
> also support connectivity to traditional hosts, e.g., hosts that do not
> have hypervisors.
> 
> NVO3 will consider approaches to multi-tenancy that reside at the
> network layer rather than using traditional isolation mechanisms
> that rely on the underlying layer 2 technology (e.g., VLANs).
> The NVO3 WG will determine which types of service are needed by
> typical DC deployments (for example, IP and/or Ethernet).
> 
> NVO3 will document the problem statement, the applicability, and an
> architectural framework for DCVPNs within a data center
> environment. Within this framework, functional blocks will be defined
> to allow the dynamic attachment / detachment of VMs to their DCVPN,
> and the interconnection of elements of the DCVPNs over
> the underlying physical network. This will support the delivery
> of packets to the destination VM, and provide the network functions
> required for the migration of VMs within the network in a
> sub-second timeframe.
> 
> Based on this framework, the NVO3 WG will develop requirements for both
> control plane protocol(s) and data plane encapsulation format(s), and
> perform a gap analysis of existing candidate mechanisms. In addition
> to functional and architectural requirements, the NVO3 WG will develop
> management, operational, maintenance, troubleshooting, security and
> OAM protocol requirements.
> 
> The NVO3 WG will investigate the interconnection of the DCVPNs
> and their tenants with non-NVO3 IP network(s) to determine if
> any specific work is needed.
> 
> The NVO3 WG will write the following informational RFCs, which
> must be substantially complete before rechartering can be
> considered:
>      Problem Statement
>      Framework document
>      Control plane requirements document
>      Data plane requirements document
>      Operational Requirements
>      Gap Analysis
> 
> Driven by the requirements and consistent with the gap analysis,
> the NVO3 WG may request being rechartered to document solutions
> consisting of one or more data plane encapsulations and
> control plane protocols as applicable.  Any documented solutions
> will use existing IETF protocols if suitable. Otherwise,
> the NVO3 WG may propose the development of new IETF protocols,
> or the writing of an applicability statement for a non-IETF
> protocol.
> 
> If the WG anticipates the adoption of the technologies of
> another SDO, such as the IEEE, as part of the solution, it
> will liaise with that SDO to ensure the compatibility of
> the approach.
> 
> 
> Milestones:
> 
> Dec 2012 Problem Statement submitted for IESG review
> Dec 2012 Framework document submitted for IESG review
> Dec 2012 Data plane requirements submitted for IESG review
> Dec 2012 Operational Requirements submitted for IESG review
> Mar 2013 Control plane requirements submitted for IESG review
> Mar 2013 Gap Analysis submitted for IESG review
> Apr 2013 Recharter or close Working Group
> 
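The traffic-isolation and address-independence requirements in the quoted charter can be illustrated with a minimal sketch: if forwarding state is keyed by (tenant, address) rather than by address alone, two tenants can reuse the same IP address without collision. All names below are hypothetical illustrations, not from any NVO3 document:

```python
# Sketch of tenant address independence: forwarding state is keyed by
# (tenant_id, vm_ip), so identical addresses in different tenants never
# collide. Purely illustrative; not an NVO3-specified data structure.

forwarding_table = {}  # (tenant_id, vm_ip) -> physical server hosting the VM

def attach(tenant_id, vm_ip, server):
    """Record that a tenant's VM is reachable at a given physical server."""
    forwarding_table[(tenant_id, vm_ip)] = server

def detach(tenant_id, vm_ip):
    """Remove the mapping when the VM detaches or migrates away."""
    forwarding_table.pop((tenant_id, vm_ip), None)

def lookup(tenant_id, vm_ip):
    """Resolve a destination within one tenant's address space only."""
    return forwarding_table.get((tenant_id, vm_ip))

# Two tenants reuse 10.0.0.5 without collision:
attach("tenant-a", "10.0.0.5", "server-17")
attach("tenant-b", "10.0.0.5", "server-42")
assert lookup("tenant-a", "10.0.0.5") == "server-17"
assert lookup("tenant-b", "10.0.0.5") == "server-42"
```

VM migration under this sketch is just an update of the mapping for one key, which is why the charter's sub-second migration requirement falls on the control plane that distributes these mappings rather than on tenant addressing.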



