Keith Moore wrote:

> 1. suitability of the DNS data and query model. right now this protocol
> essentially communicates one bit of information to be used in a decision
> - i.e. whether the address or domain name is good or bad. I suspect

This is wrong. For today's DNSxLs, many queries return multiple bits of information (e.g. to indicate a match on multiple lists, or varying levels of trustworthiness). For some examples, have a look at "25_dnsbl_tests.cf" in your nearest SpamAssassin installation.

> much information. (It also bugs me that multiple queries are required
> to get the information necessary to perform a bounce/relay decision and
> to bounce a message... granted that this is a flaw in the DNS protocol

This is also wrong, for the reason given above.

> 3. security. it might be that mechanisms already defined or used with
> DNS (DNSSEC, source port randomization) are adequate, but I'd like to
> see more analysis done.

Spammers have a strong interest in circumventing, subverting, hacking, hijacking, or destroying the propagation mechanisms of DNSxLs. In all these years, they have failed. I would conclude that the security of the mechanism is not so bad.

> 4. effects of DNS caching. if a host is removed from a blacklist it
> should arguably be removed from all caches instantly, but DNS isn't

Chapter 4 of the draft is dedicated to this topic.

> 5. slippery slope. DNS is a vital service, and one that is very
> difficult to replace. It needs to remain focused on a narrow goal. The
> more we overload DNS, the more we threaten to add complexity that will
> make it more fragile.

A single nameserver of dnswl.org handles roughly 10 GByte of DNS traffic per day, and I would guess that this is far less than what a well-known blacklist sees. So far, the extensive use of DNS does not seem to have destabilized it.

Keith, I fail to see a convincing argument in this discussion.

-- Matthias

_______________________________________________
Ietf@xxxxxxxx
https://www.ietf.org/mailman/listinfo/ietf
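P.S. For anyone unfamiliar with the convention behind the "multiple bits per query" point above: a DNSxL answer is an A record in 127.0.0.0/8, and the low octet can encode membership in several sub-lists at once. A minimal sketch in Python (the zone name and bit meanings here are hypothetical; real lists document their own return codes, cf. 25_dnsbl_tests.cf):

```python
# Sketch: how a single DNSxL lookup can carry several bits of information.
# The zone "list.example.net" and the FLAGS bit assignments are invented
# for illustration; consult a real list's documentation for actual codes.

def dnsxl_query_name(ip: str, zone: str = "list.example.net") -> str:
    """Build the reversed-octet query name for an IPv4 address."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def decode_return_code(answer: str) -> list[str]:
    """Interpret the last octet of a 127.0.0.x answer as a bitmask."""
    FLAGS = {1: "on-spam-list", 2: "on-exploit-list", 4: "policy-listed"}
    last_octet = int(answer.split(".")[-1])
    return [name for bit, name in FLAGS.items() if last_octet & bit]

# One query name, one answer, multiple bits of information:
print(dnsxl_query_name("192.0.2.99"))    # 99.2.0.192.list.example.net
print(decode_return_code("127.0.0.3"))   # ['on-spam-list', 'on-exploit-list']
```

So a resolver (or SpamAssassin rule) can distinguish "listed for spam", "listed as exploited host", or both, from a single cached A record.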