Reviewer: Stewart Bryant
Review result: Ready with Nits

I am the assigned Gen-ART reviewer for this draft. The General Area
Review Team (Gen-ART) reviews all IETF documents being processed
by the IESG for the IETF Chair. Please treat these comments just
like any other last call comments.

For more information, please see the FAQ at
<https://trac.ietf.org/trac/gen/wiki/GenArtfaq>.

Document: draft-ietf-bmwg-sdn-controller-benchmark-term-07
Reviewer: Stewart Bryant
Review Date: 2018-01-30
IETF LC End Date: 2018-02-02
IESG Telechat date: Not scheduled for a telechat

Summary: Generally a well-written document. The various comments I make
are mostly editorial, with some on the fringe of being technical.

Major issues: None

Minor issues:

2.3.1.3. Asynchronous Message Processing Rate

   Definition:
     The number responses to asynchronous messages (such as new flow

SB> That should be the number of responses per second.

   Discussion:
     As SDN assures flexible network and agile provisioning, it is
     important to measure how many network events the controller can
     handle at a time. This benchmark is obtained by sending
     asynchronous messages from every connected Network Device at the
     rate that the controller processes (without dropping them).

SB> So what you are testing here is the control network and the
SB> controller. This is perhaps the only practical way to run the
SB> test, but it seems a pity that you do not deconvolve these
SB> two aspects of the test.
SB>
SB> I suppose this is really the network Async Msg Proc rate rather
SB> than the controller Async proc rate.
SB>
SB> We may get to this in the companion document, but doesn't there
SB> need to be some standardization of the event message to compare
SB> apples with apples over time?
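SB> To illustrate the "per second" point, here is a rough sketch of how
SB> I read the intended metric (my own illustration in Python, not text
SB> from the draft; the function and parameter names are invented):
SB> count the asynchronous responses observed at the test tool during a
SB> trial and divide by the trial duration.

   # Reviewer's illustrative sketch only -- not text from the draft.
   # Asynchronous Message Processing Rate = asynchronous responses
   # counted at the test tool, divided by the trial duration,
   # i.e. responses per second.

   def async_msg_processing_rate(response_timestamps, trial_duration_s):
       """Responses per second over one trial.

       response_timestamps: seconds, relative to the start of the trial.
       trial_duration_s: length of the trial in seconds.
       """
       if trial_duration_s <= 0:
           raise ValueError("trial duration must be positive")
       # Count only responses that arrived within the trial window.
       in_window = [t for t in response_timestamps
                    if 0 <= t <= trial_duration_s]
       return len(in_window) / trial_duration_s

   # Example: 45000 responses observed in a 30 s trial -> 1500 responses/s.

SB> However it is finally worded, stating the benchmark explicitly as
SB> responses per second (per trial) would remove the ambiguity.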
Nits/editorial comments:

Abstract

   A mechanism for benchmarking the performance of SDN controllers is
   defined in the companion methodology document.

SB> It would be convenient to the reader to provide the reference to,
SB> or the name of, the companion document.

2.2.4. Number of Cluster nodes

   Discussion:
     This parameter is relevant when testing the controller performance
     in clustering/teaming mode. The number of nodes in the cluster
     MUST be greater than 1.

SB> I see what you are saying, but you may wish to clarify that this
SB> constraint does not apply all the time. For example, one of two
SB> nodes may start later than another, or fail, or maybe I worry over
SB> nothing here.

2.3. Benchmarking Terms

   This section defines metrics for benchmarking the SDN controller.

SB> Should that be controller(s)?

2.3.1.4. Reactive Path Provisioning Time

   Definition:
     The time taken by the controller to setup a path reactively
     between source and destination node, defined as the interval
     starting with the first flow provisioning request message received
     by the controller(s), ending with the last flow provisioning
     response message sent from the controller(s) at its Southbound
     interface.

   Discussion:
     As SDN supports agile provisioning, it is important to measure how

SB> Should that be When, rather than As, since not all will support the
SB> feature?

2.3.2.1. Control Sessions Capacity

   Measurement Units:

SB> Surely this should be in units of sessions?

2.3.2.2. Network Discovery Size

   Measurement Units: N/A

SB> How can this be N/A? Surely it is a number of network units of
SB> various types.

2.3.2.3. Forwarding Table Capacity

2.3.3. Security

2.3.3.1. Exception Handling

   Measurement Units: N/A

SB> Shouldn't that be as per the base performance test specified in
SB> 2.3.1, or text similar to the 2.3.3.2 UoM?

7. Security Considerations

   Security issues are not discussed in this memo.

SB> Whilst true, you do of course instrument various security-related
SB> parameters.