WG Action: Rechartered Benchmarking Methodology (bmwg)

The Benchmarking Methodology (bmwg) WG in the Operations and Management Area
of the IETF has been rechartered. For additional information, please contact
the Area Directors or the WG Chairs.

Benchmarking Methodology (bmwg)
-----------------------------------------------------------------------
Current status: Active WG

Chairs:
  Sarah Banks <sbanks@encrypted.net>
  Al Morton <acmorton@att.com>

Assigned Area Director:
  Warren Kumari <warren@kumari.net>

Operations and Management Area Directors:
  Warren Kumari <warren@kumari.net>
  Ignas Bagdonas <ibagdona@gmail.com>

Mailing list:
  Address: bmwg@ietf.org
  To subscribe: https://www.ietf.org/mailman/listinfo/bmwg
  Archive: https://mailarchive.ietf.org/arch/browse/bmwg/

Group page: https://datatracker.ietf.org/group/bmwg/

Charter: https://datatracker.ietf.org/doc/charter-ietf-bmwg/

The Benchmarking Methodology Working Group (BMWG) will continue to
produce a series of recommendations concerning the key performance
characteristics of internetworking technologies, or benchmarks for
network devices, systems, and services. Taking a view of networking
divided into planes, the scope of work includes benchmarks for the
management, control, and forwarding planes.

The scope of BMWG has been extended to develop benchmarking methods for
virtual network functions (VNFs) and their unique supporting
infrastructure (such as SDN Controllers and vSwitches).
Benchmarks for platform capacity and performance characteristics
of virtual routers, firewalls (and other security functions),
signaling control gateways, and other forms of gateways are included.
The benchmarks will foster comparisons between physical and virtual
network functions and will also cover features unique to
Network Function Virtualization systems. With the emergence
of virtualized test systems, specifications for test system
calibration are also in scope.

Each recommendation will describe the class of network function, system,
or service being addressed; discuss the performance characteristics that
are pertinent to that class; clearly identify a set of metrics that aid
in the description of those characteristics; specify the methodologies
required to collect said metrics; and lastly, present the requirements
for the common, unambiguous reporting of benchmarking results.

The set of relevant benchmarks will be developed with input from the
community of users (e.g., network operators and testing organizations)
and from those affected by the benchmarks when they are published
(networking and test equipment manufacturers). When possible, the
benchmarks and other terminologies will be developed jointly with
organizations that are willing to share their expertise. Joint review
requirements for a specific work area will be included in the
description of the task.

To better distinguish the BMWG from other measurement initiatives in the
IETF, the scope of the BMWG is limited to the characterization of
implementations of various internetworking technologies
using controlled stimuli in a laboratory environment. Said differently,
the BMWG does not attempt to produce benchmarks for live, operational
networks. Moreover, the benchmarks produced by this WG shall strive to
be vendor independent or otherwise have universal applicability to a
given technology class.

Because the demands of a particular technology may vary from deployment
to deployment, a specific non-goal of the Working Group is to define
acceptance criteria or performance requirements.

An ongoing task is to provide a forum for the development and
advancement of measurements that provide insight into the
capabilities and operation of implementations of internetworking
technology. Ideally, BMWG should communicate with the operations
community through organizations such as NANOG, RIPE, and APRICOT, and
consult with and inform other IETF WGs (as is current practice).

Milestones:

  Aug 2018 - Methodology for Next-Gen Firewall Benchmarking to IESG Review

  Dec 2018 - Update to RFC2544 Back-to-back Frame Benchmarking to IESG Review

  Dec 2018 - Methodology for EVPN Benchmarking to IESG Review

  Dec 2018 - Draft on Selecting and Applying Model(s) for Benchmarking to
  IESG Review

  Dec 2018 - Draft on General VNF Benchmarking Automation to IESG Review

  Dec 2018 - Considerations for Benchmarking Network Virtualization Platforms
  to IESG Review




