Re: [Gen-art] Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03

Authors/shepherd/WG: any further responses to Dan?

Thanks,
Alissa


On May 11, 2017, at 2:52 PM, Dan Romascanu <dromasca@xxxxxxxxx> wrote:

Hi,

Please see in-line.

Regards,

Dan


On Thu, May 11, 2017 at 8:00 PM, MORTON, ALFRED C (AL) <acmorton@xxxxxxx> wrote:
Hi Dan,
please see replies, [ACM], below.

> -----Original Message-----
> From: Dan Romascanu [mailto:dromasca@xxxxxxxxx]
> Sent: Thursday, May 11, 2017 7:06 AM
> To: gen-art@xxxxxxxx
> Cc: draft-ietf-bmwg-vswitch-opnfv.all@xxxxxxxx; ietf@xxxxxxxx;
> bmwg@xxxxxxxx; dromasca@xxxxxxxxx
> Subject: Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03
>
> Reviewer: Dan Romascanu
> Review result: Almost Ready
>
> I am the assigned Gen-ART reviewer for this draft. The General Area
> Review Team (Gen-ART) reviews all IETF documents being processed
> by the IESG for the IETF Chair.  Please treat these comments just
> like any other last call comments.
>
> For more information, please see the FAQ at
>
> <https://trac.ietf.org/trac/gen/wiki/GenArtfaq>.
>
> Document: draft-ietf-bmwg-vswitch-opnfv-03
> Reviewer: Dan Romascanu
> Review Date: 2017-05-11
> IETF LC End Date: 2017-05-15
> IESG Telechat date: Not scheduled for a telechat
>
> Summary:
>
> Almost Ready.
>
> This document describes the progress of the Open Platform
> for NFV (OPNFV) project on virtual switch performance "VSPERF". That
> project reuses the BMWG framework and specifications to benchmark
> virtual switches implemented in general-purpose hardware. Some
> differences with the benchmarking of specialized HW platforms are
> identified and they may become work items for BMWG in the future. It's
> a well written and clear document, but I have reservations about it
> being published as an RFC, and I cannot find coverage for it in the WG
> charter. I also have concerns that parts of the methodology used by
> OPNFV break the BMWG principles, especially repeatability and
> 'black-box', and this is not articulated clearly enough in the document.
[ACM]
Ok, let's address your specific issues, and come back to your reservations.

>
>
> Major issues:
>
> 1. It is not clear to me why this document needs to be published as an
> RFC. The introduction says: 'This memo describes the progress of the
> Open Platform for NFV (OPNFV) project on virtual switch performance
> "VSPERF".  This project intends to build on the current and completed
> work of the Benchmarking Methodology Working Group in IETF, by
> referencing existing literature.' Why should the WG and the IESG
> invest resources in publishing this, why an I-D or an Independent
> Stream RFC is not sufficient?
[ACM]
The WG considered and discussed this document over three revisions
and a year before reaching consensus to develop it further
as a chartered item, so this decision was not taken lightly.
See more below.

> The WG charter says something about:
> 'VNF and Related Infrastructure Benchmarking: Benchmarking
> Methodologies have reliably characterized many physical devices. This
> work item extends and enhances the methods to virtual network
> functions (VNF) and their unique supporting infrastructure. A first
> deliverable from this activity will be a document that considers the
> new benchmarking space to ensure that common issues are recognized
> from the start, using background materials from industry and SDOs
> (e.g., IETF, ETSI NFV).'. I do not believe that this document covers
> the intent of the charter, as it is focused on only one organization.
[ACM]
I'm sorry, but here you are mistaken. The document that satisfied
the "first deliverable ... document that considers the new benchmarking space"
is: https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05
titled: Considerations for Benchmarking Virtual Network Functions and Their Infrastructure
which has been submitted to IESG and approved for publication.
Further, the current draft (draft-ietf-bmwg-vswitch-opnfv-03)
references the approved "Considerations" draft in Section 3
(as does almost every related Industry spec I'm aware of).

The BMWG Charter continues:
  Benchmarks for platform capacity and performance characteristics of
  virtual routers, switches, and related components will follow, including
  comparisons between physical and virtual network functions. In many cases,
  the traditional benchmarks should be applicable to VNFs, but the lab
  set-ups, configurations, and measurement methods will likely need to
  be revised or enhanced.

This draft constitutes one of several follow-on efforts, approaching
the problem exactly as we described in the last sentence above.

How? What last sentence?

Is it:

'In many cases,
  the traditional benchmarks should be applicable to VNFs, but the lab
  set-ups, configurations, and measurement methods will likely need to
  be revised or enhanced.'

How does this document approach this problem? If there is a need to revise
or enhance existing BMWG work, what is needed is specific revisions of those
documents. This informational document merely documents the work of one
external organization. I have reservations about whether advancing this
document is a task for the WG, or approving it a task for the IESG. Why
can't it stand as an I-D until the WG decides what work (if any) needs to be
undertaken to meet the OPNFV needs? Or, if they wish to have an RFC, why
can't it go to the Independent Stream?

Will the WG write similar documents for all (or several) other organizations that implement VNFs one way or another? Should it?


An aspect of Industry collaboration that we did not anticipate in the
BMWG Charter is our current interaction with Open Source communities.
The current Charter was approved in June 2014; OPNFV was then founded
on September 30, 2014 [0] and the VSPERF project was created on
December 16, 2014, so extensive collaboration on this and other
benchmarking topics could not have been foreseen.



>
> 2. In Section 3, 'repeatability' is mentioned, while
> acknowledging that in a virtual environment there is no guarantee and
> actually no way to know what other applications are being run.
[ACM]
See:
https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.4

There are certainly ways to assess the current set of processes
at a particular time. The Software configuration parameters in
Section 3.3 are intended to capture this aspect as part of set-up.
At the same time, there will be challenges to assess the DUT
performance when resources are fully shared, and new testing
strategies will be needed:
https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.3


> Measuring parameters as the ones listed in 3.3 provides just part of
> the answer, and they are internal parameters to the SUT.
[ACM]
Yes, knowing the tested configuration is a critical pillar
supporting repeatability (these items are not measured, but configured),
and why we provided this section.
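As a rough illustration, that set-up state can be captured automatically
alongside each trial and archived with the results. This is only a sketch
assuming a Linux host; the commands and field names below are illustrative
and not taken from the VSPERF specification:

    # Sketch: record the SUT software set-up next to each benchmark run,
    # so two runs can be checked for identical configuration.
    # Assumes a Linux host; commands and fields are illustrative only.
    import json
    import subprocess

    def capture(cmd):
        """Run a shell command, returning trimmed output or an error note."""
        try:
            return subprocess.check_output(cmd, shell=True, text=True).strip()
        except subprocess.CalledProcessError as err:
            return "unavailable: %s" % err

    setup_record = {
        "kernel": capture("uname -r"),
        "hugepages": capture("grep -i hugepages /proc/meminfo"),
        "processes": capture("ps -eo comm= | sort | uniq -c | sort -rn"),
        "cpu_governor": capture(
            "cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"),
    }

    with open("setup_record.json", "w") as fh:
        json.dump(setup_record, fh, indent=2)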

> Also, the
> different deployment scenarios in section 4 require different
> configurations for the SUT, thus breaking the 'black-box' principle.
[ACM]
Specifying DUT configuration does not break any part of the
black-box principle, which establishes that benchmark measurements
will be based on externally observable phenomena. See:
https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-4.2

Previous BMWG RFCs have identified the critical configuration
parameters of the DUT, such as the number and type of
network interfaces, the arrangement of DUTs in a SUT, etc.

I may not have been clear enough. The document talks about repeatability and
about comparing benchmarks with those of specialized HW implementations. It
then gives a long but still partial list of factors that can influence the
benchmarking, most of which depend on the HW and SW measurements and
parameters of the internal systems. How can this be compared with a small
number of genuinely externally observable configuration parameters, like the
number and type of network interfaces? These are several orders of magnitude
apart in complexity.

> I believe that there is a need for a more clear explanation of why BMWG
> specifications are appropriate and how comparison can be made while
> repeatability cannot be ensured, and measurements are dependent upon
> parameters internal to the SUT.
[ACM]
I believe that draft-ietf-bmwg-virtual-net-05 already
indicates why the existing BMWG RFCs are a reasonable
starting place for NFV benchmarks, in part because
we want to measure the same benchmarks of physical
network functions in many cases. See
https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-4.1

Repeatability is a goal of all experiments, and we understand
that there is more work to do in this regard, but what
we know now (documented in this draft) should
be a valuable contribution to the Industry.


Measuring the same benchmarks is a good goal. I believe that the claim of repeatability needs to be better argued.
 

>
> Minor issues:
>
> 1. Some of the tests mentioned in Section 4 have no prior or in
> progress work in the IETF: Control Path and Datapath Coupling Tests,
> Noisy Neighbour Tests, characterization of acceleration technologies.
[ACM]
I'm sorry, but that's not an accurate portrayal of BMWG's literature.

https://tools.ietf.org/html/rfc6413 examined Control Plane/Dataplane
interactions, for example.

https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.3
item 2 specifically included Noisy Neighbour among the new
testing strategies.
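For illustration, a noisy-neighbour trial amounts to repeating a baseline
measurement under synthetic contention. The sketch below assumes stress-ng
is installed and uses a placeholder measure() hook; neither comes from
the draft:

    # Sketch of a noisy-neighbour comparison: benchmark an idle host,
    # then benchmark again while a contending workload competes for
    # shared resources (CPU, caches, memory bandwidth).
    import subprocess

    def measure():
        """Placeholder: run one benchmark trial, return throughput (fps)."""
        raise NotImplementedError("bind to your benchmark harness here")

    baseline_fps = measure()

    # Launch the 'noisy neighbour': 4 CPU workers plus memory churn.
    noise = subprocess.Popen(
        ["stress-ng", "--cpu", "4", "--vm", "2", "--timeout", "120s"])
    try:
        contended_fps = measure()
    finally:
        noise.terminate()
        noise.wait()

    print("degradation: %.1f%%"
          % (100.0 * (baseline_fps - contended_fps) / baseline_fps))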


Please provide references for each in the text.
 
Every network interface with an ASIC is an example of acceleration,
one that we've characterized in physical network devices for years.


Yes, but here we are not dealing with externally observable interfaces only,
and if the characterization of the acceleration technologies matters, then
you need a way to express it (where can we find this in existing BMWG work?
Is new work needed?)
 
> If new work is needed / proposed to be added for the BMWG scope and
> framework it would be useful for BMWG to list these separately.
>
>
> Nits/editorial comments:
>
> 1. What is called 'Deployment scenarios' from VS perspective in
> Section 4 describe in fact different configurations of the SUT in BMWG
> terms. It seems better to separate this second part of section 4 in a
> separate section. If it belongs to an existing section it rather
> belongs in 3 than in 4.
>
[ACM]
Section 3 is more about extending the configuration guidance
from https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.2

Section 4 summarizes the VSPERF Level Test Design document,
of which these deployment scenarios are a key part.

Yes, but these seem to belong to configurations of SUTs, even if they are called 'Deployment scenarios' in OPNFV-speak, and they impact repeatability.
 

Thanks for your comments; hopefully this detailed reply
will reduce your reservations about publication.

Al
(for the co-authors)

[0] https://www.opnfv.org/announcements/2014/09/30/telecom-industry-and-vendors-unite-to-build-common-open-platform-to-accelerate-network-functions-virtualization


_______________________________________________
Gen-art mailing list
Gen-art@xxxxxxxx
https://www.ietf.org/mailman/listinfo/gen-art

