That's an excellent question, but I think like so many others it has to
fall under the judgement of the person writing the implementation
report.  Is it OK to just test 2 implementations, or is it important to
test 2 servers and 2 clients?  It might be possible to go to an
interoperability forum and test 15 different implementations, yet if
that's a protocol for which there's a sizable *other* community that
doesn't implement a required feature, that ought to be noted in the
implementation report.

I'm hoping that by putting the onus on the writer of the report to
carefully characterize interoperability, we can encompass many such
judgement questions.  On the flip side, if we tried to address every
such judgement question, we couldn't possibly foresee every corner
case.

Do you have any suggestions for criteria that could be broadly
applicable and useful?

Thanks,
Lisa

On Tue, May 26, 2009 at 1:35 AM, Stephane Bortzmeyer <bortzmeyer@xxxxxx> wrote:
> On Thu, May 21, 2009 at 11:09:01AM -0700,
>  The IESG <iesg-secretary@xxxxxxxx> wrote
>  a message of 23 lines which said:
>
>> The IESG has received a request from an individual submitter to consider
>> the following document:
>>
>> - 'Guidance on Interoperation and Implementation Reports '
>>   <draft-dusseault-impl-reports-02.txt> as a BCP
>
> It's a fine and useful document. But something is missing: how to
> select the implementations tested when there are "many". For RFC 4234
> (mentioned in the I-D), some implementations were tested
> <http://www.ietf.org/IESG/Implementations/RFC4234_implem.txt> and some
> were not. On what criteria?