I'm all in favor of experiments, but interface and ease of use are not the only criteria for evaluating tools. Gateways to standards are good for some use cases but have attack surfaces of their own.

If there are going to be more experiments, do some experimental design. What are the criteria to evaluate? Information leakage? Moderation tools? DoS defense? Administration costs? Ability to integrate with third-party tools? Use of standards where available and complete?

Unfortunately, the policy issues around suppressing disruptive behavior depend on the details of the implementation. We know about mailing list abuse and the difficulty of deciding which party hit the other first and who gets a timeout; the more rapid the conversation, the more likely conflicts will arise.

And please use the enterprise edition, and run it on IETF hosts, accounts, etc. Test the installation, the dependencies, and any "phone home" attempts.

I think running experiments with the informed consent of the experimental subjects points toward trying WG interim meetings before experimenting with all of the IETF.

-- https://LarryMasinter.net https://interlisp.org