Re: Gen-Art IETF LC review: draft-ietf-ipfix-testing-04.txt

Joel,

Apologies for not responding sooner to your review, as it came right 
ahead of the -00 and -nn cutoffs.

Please see some responses inline.


> I have been selected as the General Area Review Team (Gen-ART)
> reviewer for this draft (for background on Gen-ART, please see
> http://www.alvestrand.no/ietf/gen/art/gen-art-FAQ.html ).
> 
> Please resolve these comments along with any other Last Call comments
> you may receive.
> 
> Document: Guidelines for IP Flow Information eXport (IPFIX) Testing
> Reviewer: Joel M. Halpern
> Review Date: 15-Feb-2008
> IETF LC End Date: 26-Feb-2008
> IESG Telechat date: N/A
> 
> Summary: This document needs some additional work before publication as 
> an informational RFC.
> I would particularly recommend considering addressing at least the first 
> comment below prior to RFC publication.
> I would also suggest that the test descriptions need some clarification 
> as described in the technical section below, particularly items 5 and 6.
> 
> Comments:
> 
> Conceptual:
> 1) While the document is being published as an informational RFC, the 
> wording of the abstract and introduction make it seem that this document 
> is actually defining conformance to the IPFIX RFCs.  The IETF has 
> generally carefully steered clear of defining such conformance.
> So, while publishing a useful test suite is probably a good idea, I 
> strongly recommend fixing the wording of at least the abstract and 
> introduction to make it quite clear that these are not mandatory tests, 
> and that these tests do not define conformance.

Sure, the tests are not mandatory. However, if you were purchasing an 
IPFIX device which did not claim to be compliant with this draft, you 
might rightly ask why not, given that all the IPFIX implementations we 
know of to date have been.


> Related to this, please do not assert (in section 3) that passing this 
> test suite constitutes conformance to the IPFIX architecture and 
> protocol.  (Among other things, test suite passage proves nothing about 
> architectural conformance.)

OK.


> Technical:
> 2) In the terminology section, an Observation Point is defined simply as 
> a place where packets can be observed.  An Observation Domain is a 
> collection of Observation points.  Then, in the middle of the definition 
> of an Observation Domain it says "In the IPFIX Message it generates..." 
> but up till now none of the things that have been defined generate IPFIX 
> messages.  It is possible that the "it" in the quote is supposed to be 
> the "Metering Process" mentioned in passing earlier in the definition. 

Correct, it is.


> But the English grammar does not lead the reader to such a conclusion. 
> Later in that same definition, it begins to appear that an Observation 
> Domain (which is a collection of points, not a process or entity) is 
> supposed to generate IPFIX messages, since it is supposed to include a 
> Domain ID in the messages it generates.  This definition for an 
> Observation Domain needs to be reworked, to avoid confusing the Domain 
> with the Measurement Process which is running in / for / on the Domain.

The Metering Process generates flow records which the Exporting Process 
makes into IPFIX messages.

This whole section is lifted directly from RFC 5101, as stated right at 
the top of section 2:

    The terminology used in this document is fully aligned with the
    terminology specified in [RFC5101] which is reproduced here for
    reference.
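
To illustrate (this sketch is mine, not part of the draft): the
Observation Domain ID is simply the last field of the 16-octet IPFIX
Message header which the Exporting Process emits, per RFC 5101. In
Python terms, building such a header might look like:

    import struct, time

    # IPFIX Message header (RFC 5101): Version (10), total Length in
    # octets, Export Time, Sequence Number, Observation Domain ID.
    # Length is 16 here only because no Sets follow in this illustration.
    header = struct.pack("!HHIII", 10, 16, int(time.time()), 0, 42)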


> 3) The use of capital "MUST" in section 3.1 is almost certainly wrong. 
> Firstly, what I think that section is saying is that being able to 
> correctly perform the basic tests is a precondition for being able to 
> perform further tests successfully.  That's a precondition, not a "MUST".

These are basic connectivity tests. If they don't pass then there's no 
point in proceeding with the later tests, since you don't even have a 
basic connection.

So yes, these initial tests are a precondition for the later ones. In 
effect they MUST pass before proceeding.
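
To make "basic connection" concrete (purely as an illustration, and
assuming a Linux-based implementation with OS SCTP support), the
tester might first check that the Collecting Process is even
reachable over SCTP on the IANA-assigned IPFIX port:

    import socket

    # Illustrative only: try to open an SCTP association to the
    # Collecting Process on the IPFIX port (4739). The hostname is a
    # placeholder.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                      socket.IPPROTO_SCTP)
    s.connect(("collector.example.net", 4739))
    print("SCTP association established")
    s.close()

If even that fails, none of the later tests can tell you anything
useful.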


> Of lesser significance, this document does not provide any description 
> of what it means by "MUST".

Right above the TOC, as ever:

    Conventions used in this document

    The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
    "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
    document are to be interpreted as described in [RFC2119].


> We are usually careful about how such 
> language is used in informational RFCs.  I think the meaning would be 
> clearer if the real intent were stated.  I suspect that some readers of 
> this review may find my concern here pedantic.  But the continual use of 
> MUST in the document really, really bothers me. (I hope the next comment 
> helps explain why it bothers me so much.)
> 
> 4) Then, the test descriptions go on to keep using this language.  This 
> is a test suite description document.  Simply state how to run the test. 
>  There is no need for "MUST".  Section 3 should indicate that the test 
> descriptions describe the preconditions and steps that the tester goes 
> through.  So section 3.1 would begin "The tester creates one Exporting 
> Process and one collection process, configures the Exporting Process to 
> ..."

The -00 version of this draft didn't use any RFC 2119 language. For 
example, section 3.1 began:

    Set up one Exporting and one Collecting Process.  Configure the
    Exporting Process to send to the Collecting Process.

However, we received feedback that RFC 2119 language should be used so 
that a tester could clearly see what needed to be done, and could tick 
off compliance with each point as he worked through the tests.

Reworking the document like this is not an insignificant task.


> 5) It is not clear what test steps like "The tester ensures that an SCTP 
> association is established." (or worse, the actual text, which reads "the 
> test MUST ensure that an SCTP association is established.") are supposed 
> to do.  Is this an instruction to the tester to use network management 
> tools or CLI to verify a connection on both devices?  Is it an 
> instruction to perform additional configuration?  How does the tester 
> "ensure"?

That would very much depend on the implementation, don't you think?


> A test suite should tell a tester what steps to undertake, 
> and what observations to perform.  "Ensure" is neither of those.

Testing and verifying SCTP are quite beyond the scope of this draft. 
However, the ability to bring up an SCTP association is a necessary 
prerequisite for the following tests.


> 5a) To elaborate on this issue, in the middle of the test step about 
> ensuring that Data Records are actually exported, we finally get a 
> testable instruction, to wit, use a packet sniffer and check that the 
> packets are coming by.

Sadly, that was only a suggestion of how the tester might perform this task.
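
To make the suggestion slightly more concrete (again, my sketch, not
text from the draft): having captured the exported packets, the tester
could check that each IPFIX Message actually contains a Data Set,
i.e. a Set whose Set ID is 256 or above per RFC 5101:

    import struct

    def contains_data_set(msg):
        # Walk the Sets in one captured IPFIX Message: a 16-octet
        # Message header, then a sequence of Sets, each starting with
        # a 4-octet (Set ID, Length) header. Set IDs >= 256 are Data
        # Sets; 2 and 3 are Template and Options Template Sets.
        version, length = struct.unpack("!HH", msg[:4])
        assert version == 10
        offset = 16
        while offset + 4 <= length:
            set_id, set_len = struct.unpack("!HH", msg[offset:offset + 4])
            if set_id >= 256:
                return True
            offset += set_len
        return False

How the tester obtains msg in the first place is, once more,
implementation and tooling dependent.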


> 6) I believe I understand how a tester would create templates, for the 
> template test.  But how is the tester to create data sets?  Particularly 
> data sets with specific properties, such as the padding in sections 3.2.3 
> and 3.2.4?

Again, that would very much depend upon the implementation being tested. 
Some implementations may add padding by default, or may have a switch to 
allow padding to be optional. It certainly wasn't an issue at our IPFIX 
interops.


> The best conclusion I can come to is that this is a 
> collector test, and that it assumes a packet generator which can 
> generate IPFIX packets.

That's a good option for testing the Collecting Process - though it 
wouldn't verify the Exporting Process. Another possibility would be for 
the tester to inject known data into the Metering Process, either 
directly or by passing known traffic through the Observation Point(s).
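
For the Collector-testing case, a minimal hand-crafted IPFIX Message
(only a sketch of the general idea, not something the draft specifies)
could carry one Template Set and one matching Data Set, built directly
from the RFC 5101 encodings:

    import socket, struct, time

    TEMPLATE_ID = 256   # first Template ID available (0-255 are reserved)

    # Template Set (Set ID 2): one field, octetDeltaCount (IE 1),
    # 8 octets long.
    template_set = struct.pack("!HHHHHH", 2, 12, TEMPLATE_ID, 1, 1, 8)
    # Data Set described by that Template: a single 8-octet record.
    data_set = struct.pack("!HHQ", TEMPLATE_ID, 12, 1500)

    body = template_set + data_set
    header = struct.pack("!HHIII",
                         10,                # Version
                         16 + len(body),    # Message Length
                         int(time.time()),  # Export Time
                         0,                 # Sequence Number
                         1)                 # Observation Domain ID

    # Sent over UDP for simplicity; the Collector address is a
    # placeholder.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(header + body, ("collector.example.net", 4739))

Exercising the padding cases in 3.2.3 / 3.2.4 would then just be a
matter of appending padding octets to a Set and adjusting its Length
field accordingly.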


> Having such a device in a test setup makes 
> sense.  But the test description does not say "configure a packet 
> generator to generate an IPFIX packet with ..."  (There are other ways 
> to say this, but there needs to be some description of how testers are 
> expected to create data sets.)

Again, this seems to be quite implementation dependent. I expect what 
you'd do for a PC-based implementation would be quite different from 
what you'd do for a router-based implementation.


> 6a) Related to this, I find reading this document rather odd.  I have 
> read many test suites for protocols and implementations of protocols. 
> They generally focus on a Device (or implementation, or entity) Under 
> Test, and the framing around that Device.  This suite appears to be 
> trying to test two interacting devices simultaneously.  That is 
> extremely difficult, and extremely confusing.

The IPFIX protocol connects an Exporting Process (source) to a 
Collecting Process (destination). One is needed in order to test the 
other - or at the very least, something that does a good job of 
pretending to be an Exporter or Collector.

For example, an Exporting Process won't export anything until it's able to bring 
up an SCTP association. So if you're going to inject packets (rather 
than have an Exporting Process) then you'll first need to do some SCTP 
negotiation. All in, it's most straightforward to connect an Exporter to 
a Collector.
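
Conversely, to exercise an Exporting Process without a full Collector,
the tester needs at least a stand-in which will accept the SCTP
association and swallow the Messages. A bare-bones illustration of the
idea (my sketch; it performs none of the checks the draft describes,
and assumes a Linux host with SCTP support):

    import socket, struct

    # Minimal stand-in "Collector": accept one SCTP association on
    # the IPFIX port and print the Message header fields of whatever
    # arrives.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                        socket.IPPROTO_SCTP)
    srv.bind(("", 4739))
    srv.listen(1)
    conn, peer = srv.accept()
    msg = conn.recv(65535)
    print(struct.unpack("!HHIII", msg[:16]))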


> It is particularly hard 
> because then the tester doesn't have enough points of control to perform 
> the tests and observe the results meaningfully.  It is possible that 
> this combined suite is right for this problem.  But if so, a lot of 
> explanation of why it is done that way and how the tester is to 
> accomplish his goals is needed.
> 
> Minor:
> 7) The abstract is worded as if one could not perform interoperability 
> testing without first running the tests in this document.  While having 
> run the tests in this document will presumably increase the chances of a 
> successful interoperability test, they are not an inherent requirement 
> for such testing.

We had three IPFIX interops, with this document being drafted after the 
first. I believe the prerequisite for the second and third interops was 
that these basic tests had been run, so as to ensure a common baseline 
and rule out basic issues which had already been covered in previous 
interops.


> 8) I would probably be inclined to lighten up the Motivation section a 
> bit.  Or even remove it.  I don't think we need to explain why test 
> suites are useful.  If we really need a motivation section, then it 
> should explain something about why it is particularly complex to test 
> IPFIX implementations (if it is) and thus why the IETF feels it is 
> particularly useful to publish a test suite ourselves in this case.

OK.


> 9) The definition of Transport Session is actually the definition of 
> various kinds of transport sessions, and how they are identified.  Could 
> the definition start with an actual definition please. (I.e. the 
> communication over time used to carry X between Y and Z?  Or something.)

Again, the definition is copied directly from RFC 5101.


> 10) As an editorial matter, most testers I have worked with strongly 
> prefer if every step in a test is explicitly separate and named / 
> numbered.  That way, they can check off each step as it goes.  So the 
> beginning of 3.1.1 would be
> i) Create One Exporting Process
> ii) Create One Collection Process
> iii) Configure the Exporting Process ...

In effect, each of our MUSTs is such a check. At least, that was the 
intention.


> 11) It is particularly odd to see a set of Stress/Load tests that 
> simultaneously claim to be measuring conformance and to not specify the 
> level of Stress / Load.  Having a description of how to perform load 
> tests is useful.  But its relationship to the other tests is confusing. 
>  (This obviously is helped once we no longer claim that this is a 
> conformance test.)

These tests verify what IPFIX devices do when overloaded, rather than 
testing that they're able to handle a certain load level. It's clearly 
impossible for us to state specific traffic levels since a) the overload 
level may vary enormously from device to device, and b) we're not 
interested in the specific level, but in the device's ability to handle 
extremes gracefully.


Thanks.
-- 
Paul Aitken
Cisco Systems Ltd, Edinburgh, Scotland.
