>> the junk. Conversely, if root server traffic is an issue, getting networks to clean up their DNS traffic would be much more effective than limiting the number of TLDs.
> While I find this interesting, I don't see much logical or statistical justification for the belief that, if one increased the number of TLDs by a lot, the amount of "invalid" traffic would remain roughly constant rather than growing in proportion.
As I recall from prior surveys, the invalid traffic is largely independent of which domains actually exist: queries from RFC1918 address space (4% of all traffic at one server), repeated queries for the same nonexistent name, dynamic rDNS updates from misconfigured Windows boxes, stuff like that.
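To make that concrete, here's a rough sketch of bucketing a query log into those junk categories. The (source_ip, qname) log tuples, the bucket names, and the TLD set are all made up for illustration, not any server's real format:

import ipaddress
from collections import Counter

RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def classify(source_ip, qname, valid_tlds):
    """Assign one query to a rough junk/plausible bucket."""
    addr = ipaddress.ip_address(source_ip)
    if any(addr in net for net in RFC1918_NETS):
        return "rfc1918-source"      # leaked private-address traffic
    tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
    if tld not in valid_tlds:
        return "nonexistent-tld"     # .local, .lan, bare hostnames, etc.
    return "plausible"

log = [("192.168.1.5", "printer.local."), ("203.0.113.9", "example.com.")]
print(Counter(classify(ip, name, {"com", "net", "org"}) for ip, name in log))
# Counter({'rfc1918-source': 1, 'plausible': 1})

Note that none of the junk buckets depend on how many TLDs happen to be delegated, which is the point.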
> And, of course, two of the ways of getting "networks to clean up their DNS traffic" are local caching of the root zone (see previous note) and filtering out root queries for implausible domains. Both of those are facilitated by smaller root zones and impeded by very large ones.
Oh, I agree. But I really don't think there's much point in worrying about root zones with millions of domains. Nothing ICANN is likely to do would raise it above thousands, and a zone with a few thousand entries should be well within the capacity of any DNS server.
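For what it's worth, the filtering half of the cleanup is nearly free at that scale. Here's a sketch that assumes a local copy of the published root.zone (whose column layout, name TTL class type rdata, the parser expects; the function names are mine):

def load_root_tlds(zonefile="root.zone"):
    """Collect owner names of top-level NS delegations from a local root copy."""
    tlds = set()
    with open(zonefile) as f:
        for line in f:
            fields = line.split()
            # keep TLD delegations, skip the root's own NS records
            if len(fields) >= 5 and fields[2] == "IN" and fields[3] == "NS" \
                    and fields[0] != ".":
                tlds.add(fields[0].rstrip(".").lower())
    return tlds

def should_forward_to_root(qname, tlds):
    """With a few thousand TLDs, this set lookup costs essentially nothing."""
    return qname.rstrip(".").rsplit(".", 1)[-1].lower() in tlds

tlds = load_root_tlds()
print(should_forward_to_root("www.example.com.", tlds))   # True
print(should_forward_to_root("gateway.lan.", tlds))       # False, drop locally

A set of a few thousand TLD labels fits comfortably in memory, which is why a small root zone keeps both the local-caching and the filtering approaches cheap.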
Regards,
John Levine, johnl@xxxxxxxx, Primary Perpetrator of "The Internet for Dummies",
Information Superhighwayman wanna-be, http://www.johnlevine.com, ex-Mayor
"More Wiener schnitzel, please", said Tom, revealingly.