--On Monday, December 09, 2013 22:39 -0500 Phillip Hallam-Baker <hallam@xxxxxxxxx> wrote:

>...
>> For a similar reason removal of TLD's can't happen as people
>> can still graft on namespace and establish TA's for the
>> grafted on namespace.
>
> It is trivial to fix when the validation is taking place in a
> service in the cloud (aka a resolver).
>
> Rather less easy to do if people drink the DANE cool-aid and
> do the job at the end point.
>
> Now you can take this point as either arguing against doing
> DANE or considering the risk and deploying the appropriate
> control. But you do have to consider it.
>
> What you are in effect asserting is that the resolver
> providers are the apex of the trust chain and so there is a
> diffuse trust surface rather than a sharp point. Which is true
> when the validation takes place in the resolver.

I am probably going to regret getting involved in this thread, but I would draw two rather different conclusions from the above and a number of other comments:

(1) We have seriously oversold DNSSEC as a data quality and reliability mechanism when it is merely a transmission integrity mechanism. The former is about the DNS and associated registration database (e.g., "whois") records being accurate, secure, and maybe even information-containing. The latter is merely an assurance that the data one receives have not been altered in transit through the DNS.

Now, most of the people who have been involved in the design and implementation of DNSSEC have been quite careful about that distinction, at least most of the time. But sometimes they are sloppy about their language: they say "DNSSEC", people hear "DNS Security", and inferences get made about data quality. More important, there is an (illogical) chain of reasoning from "DNSSEC is in use" to "[now] the DNS is secure" to "all of the data provided by the DNS or its supplemental databases are of high quality".

While the integrity checks of DNSSEC provide some protection against some types of attacks on the "data quality" part of the DNS environment, the attacks they protect against are very difficult to mount. An attacker with the resources to apply them would almost certainly find it easier, less resource-expensive, and harder to detect to attack registry databases (before data are entered into DNS zones and signed), registrar practices, or post-validation servers. Non-technical attacks, such as the oft-cited hypothetical NSL, are easily applied at those points as well -- much more easily than by tampering with keys or signatures.

(2) Echoing a different version of some of the comments on the thread, the "where to validate" question is important. If one tries to validate at the endpoints, then endpoint systems, including embedded ones, must have the code and resources needed to validate certs and handle rollovers, even under hostile conditions, and that isn't easy. If one relies on intermediate, especially third-party, servers to validate, then much of the expected integrity protection is gone... and the number of times such servers have been compromised would make this a non-theoretical problem even without concerns about governmental-type attacks (NSL and otherwise) on those servers. No easy solutions here.

I don't know where that combination of situations leaves initiatives like DANE, but I suspect we should be looking at trust conditions and relationships a lot more carefully than the discussions and claims I've seen suggest we have been doing.

    john
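
To make the "where to validate" distinction in (2) concrete, below is a minimal sketch using the dnspython library. It is not a complete validator: the resolver address and the example.com queries are illustrative assumptions, and real endpoint validation would additionally have to walk the chain of trust from the root and cope with key rollovers, which is exactly the burden the paragraph above describes.

# Minimal sketch (dnspython; requires the "cryptography" package for
# dns.dnssec.validate).  Contrasts the two trust models discussed above.
# The resolver address 192.0.2.53 and the example.com names are
# illustrative assumptions, not a working configuration.

import dns.resolver
import dns.dnssec
import dns.flags
import dns.name
import dns.rdataclass
import dns.rdatatype

# Model 1: trust a (possibly third-party) recursive resolver.
# The stub asks for DNSSEC processing and simply believes the AD
# ("authenticated data") bit -- the resolver is the apex of the
# trust chain.
res = dns.resolver.Resolver()
res.nameservers = ["192.0.2.53"]        # hypothetical cloud resolver
res.use_edns(0, dns.flags.DO, 1232)     # request DNSSEC records
ans = res.resolve("example.com", "A")
resolver_says_valid = bool(ans.response.flags & dns.flags.AD)

# Model 2: validate at the endpoint.  Even this single-link check
# (verifying the zone's RRSIGs against its own DNSKEYs) needs crypto
# code and key material on the endpoint; a full validator would also
# have to chase DS records up to the root trust anchor.
zone = dns.name.from_text("example.com")
dnskey_ans = res.resolve(zone, "DNSKEY")
a_ans = res.resolve(zone, "A")
rrset = a_ans.rrset
# Raises KeyError if the resolver did not return the covering RRSIG.
rrsigset = a_ans.response.find_rrset(
    a_ans.response.answer, zone, dns.rdataclass.IN,
    dns.rdatatype.RRSIG, dns.rdatatype.A)
try:
    dns.dnssec.validate(rrset, rrsigset, {zone: dnskey_ans.rrset})
    locally_valid = True
except dns.dnssec.ValidationFailure:
    locally_valid = False

In the first model the answer is only as trustworthy as the resolver and the path to it; in the second, the endpoint carries the cost of validation itself, which is the trade-off the message above is pointing at.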