On 7/5/2011 3:56 AM, Tony Finch wrote:
> Mark Andrews <marka@xxxxxxx> wrote:
>> DNS[WB]L's have never been a good fit to the DNS, rather they have
>> not been a bad enough fit to require something better to be done.
>
> Oh come off it, they are just as good a fit to the DNS as reverse DNS is.
>
>> The naive approach of reversing the address, converting to nibbles
>> and appending a suffix won't scale.
>
> I don't understand why the setup is OK for reverse DNS but not blacklists.
+1
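
(For anyone who hasn't built one of these lookups: the DNSBL convention forms
its query name exactly the way reverse DNS does, which is Tony's point. A
minimal sketch in Python -- the zone suffix here is illustrative, not a real
list:

    import ipaddress

    DNSBL_ZONE = "dnsbl.example"  # illustrative; real lists publish their own

    def dnsbl_qname(addr: str, zone: str = DNSBL_ZONE) -> str:
        """Build the conventional DNSBL query name for an IP address."""
        ip = ipaddress.ip_address(addr)
        if ip.version == 4:
            # Reverse the four octets, same trick as in-addr.arpa.
            labels = reversed(str(ip).split("."))
        else:
            # Expand to 32 nibbles and reverse them, as in ip6.arpa --
            # the label explosion behind Mark's scaling worry.
            labels = reversed(ip.exploded.replace(":", ""))
        return ".".join(labels) + "." + zone

    print(dnsbl_qname("192.0.2.99"))   # 99.2.0.192.dnsbl.example
    print(dnsbl_qname("2001:db8::1"))  # 32 one-nibble labels plus the zone

Compare 99.2.0.192.in-addr.arpa for reverse DNS: structurally it is the same
lookup.)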
This has been an ongoing issue, with a kind of 'purity' argument from some
folk. While concern for the stability of end-to-end DNS operations certainly is
essential, the degree of resistance to these added uses of the service is
often, indeed, inconsistent.
In general, the problematic logic requires a devotion to the narrow usage that
has dominated the DNS, rather than to what the design of the DNS was intended
to support. (Reading the original RFCs is instructive for this broader view.)
However, for the current thread there really does appear to be an impending
problem. It seems pretty clear that DNS caching needs to be tailored to the
types of applications making the queries, and that mixed traffic could well
mess with caching behavior. (I think it uncontroversial to note that having
DNS caching function well is an important requirement for stable and efficient
DNS use.)
For an application that is likely to encounter a different IP address for
essentially every query, across a very large number of queries, the only
solution I see available is to use a different cache.
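
To make that concrete, here is a toy simulation (invented names, sizes, and
workload; not real resolver code): a stream of effectively unique DNSBL query
names shares one LRU cache with ordinary, repeating traffic, and the ordinary
traffic's hit rate is compared against a partitioned cache:

    from collections import OrderedDict
    import random

    class LRUCache:
        """Toy fixed-size LRU cache standing in for a resolver's record cache."""
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.store = OrderedDict()

        def lookup(self, qname: str) -> bool:
            """Return True on a hit; insert (evicting LRU if full) on a miss."""
            if qname in self.store:
                self.store.move_to_end(qname)
                return True
            self.store[qname] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            return False

    random.seed(1)
    web_names = [f"host{i}.example" for i in range(500)]  # hot, repeating set

    shared, separate = LRUCache(1000), LRUCache(1000)
    shared_hits = separate_hits = queries = 0
    for _ in range(50_000):
        name = random.choice(web_names)
        shared_hits += shared.lookup(name)
        separate_hits += separate.lookup(name)
        queries += 1
        # One DNSBL-style query per ordinary query: an effectively unique
        # name each time, so it never hits and only evicts hot entries
        # from the shared cache.
        shared.lookup(f"{random.randrange(2**32)}.dnsbl.example")

    print(f"web hit rate, shared cache:    {shared_hits / queries:.0%}")
    print(f"web hit rate, separate cache:  {separate_hits / queries:.0%}")

In this toy setup the shared cache's hit rate for the repeating traffic drops
well below the partitioned cache's, which stays near perfect after warm-up.
That is the behavior that argues for a separate cache.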
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net