Re: QUIC: the elephant in the room

I made essentially the same argument to people at Google BEFORE the DANE group was ever proposed. The answer they gave me then has not changed.

DANE was chartered over a decade ago. Individuals in that group made it clear that no input would be accepted from anyone who was part of the WebPKI world. The tactics used were intentionally exclusionary. Those of us pointing out the limitations of the proposed approach never got a fair hearing.

It is now ten years since DANE was chartered. DANE has failed, apart from the very limited application to SMTP, which was never really covered by the original effort anyway. The argument for DANE was weak before Let's Encrypt launched, and it seems highly unlikely that future events will change that.


The TLSA model is simply the wrong model. If you want to change the way discovery is done, you need to think about the full discovery chain, not least because DoH is another factor in this mix.
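For reference, the TLSA model binds a certificate (or public key) to a specific service endpoint directly in DNS. A typical DANE-EE association for an HTTPS endpoint looks like the following zone fragment (the domain and hash are placeholders, not real data):

```
; TLSA record for TLS on port 443 of example.com
; usage=3 (DANE-EE), selector=1 (SPKI), matching type=1 (SHA-256)
_443._tcp.example.com. IN TLSA 3 1 1 (
    8755cdaa8fe24ef16cc0f2acb2f882e2
    50e872cd1e2f2fa3dbd3f3b3c9d38d9f )
```

Note that the client must perform this extra lookup, DNSSEC-validated, before it can authenticate the TLS handshake, which is exactly where the full discovery chain comes into play.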

DANE and DPRIVE should have been part of a single effort, one that also included the introduction of SRV records for HTTP.
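For illustration, an SRV record for HTTP would have let operators publish the port and target host of a service in DNS rather than hard-wiring them into the URL scheme. HTTP never actually defined SRV usage, so this zone fragment is hypothetical:

```
; Hypothetical SRV record for an HTTP service at example.com
; priority=10, weight=5, port=8443, target host
_http._tcp.example.com. IN SRV 10 5 8443 cdn1.example.net.
```

The later SVCB/HTTPS record types (RFC 9460) eventually filled this role for HTTP.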

The key weakness in the DNS protocol is that the client-resolver interaction is of an entirely different character from the resolver-authoritative interaction, and the current client-resolver interaction only supports requests for one record at a time.
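The one-record-at-a-time limitation is visible in the wire format itself. The DNS header's QDCOUNT field is 16 bits wide, but in practice resolvers reject queries with QDCOUNT other than 1, so a stub must issue a separate query per record type. A minimal sketch using only the standard library (the query names are illustrative):

```python
import struct

def build_dns_query(qname: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035 wire format)."""
    # Header: ID, flags (RD=1), QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT.
    # QDCOUNT could in theory exceed 1, but real resolvers reject
    # that, so each record type needs its own round trip.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question += struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

# Fetching both the address and the TLSA record for one name
# requires two separate queries:
q_a = build_dns_query("example.com", qtype=1)                 # A
q_tlsa = build_dns_query("_443._tcp.example.com", qtype=52)   # TLSA
```

This is why a discovery chain that needs several record types (A/AAAA, TLSA, SRV) pays a latency cost per record under the current client-resolver protocol.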


The kick-off meeting for DPRIVE was quite interesting. There should be a standing IESG instruction prohibiting the chartering of any WG premised on the need to finish something within a year, such that it is "essential" the work be done exactly the way the proposers have already decided. Even more so when the results from DPRIVE could easily be foreseen from the fact that it was the same group that had hijacked DNS security policy with DANE.

I don't claim to always do the right thing, but there are some folk who seem to keep doing the same thing with the same outcome, and for some reason they get priority over folk who have actually built stuff that was successfully deployed and is in use.



On Sun, Apr 11, 2021 at 2:14 PM Michael Thomas <mike@xxxxxxxx> wrote:


On 4/11/21 10:23 AM, Salz, Rich wrote:
  • I don't see why [DNS timeouts] can't be long-lived, but even normal TTLs would get amortized over a lot of connections. Right now with certs it is a 5-message affair which cannot get better. But that is why one of $BROWSERVENDORS doing an experiment would be helpful.

There are use cases where a five-second DNS TTL is important.  And they're not amortized over multiple connections from *one* user, but rather affect *many* users.  Imagine an e-commerce site connected to two CDNs that needs to switch.

The worst case is that it devolves into what we already have: 5 messages, assuming NS records are cached normally.

Another approach using current infrastructure would be for the client to cache certs and hand the server the fingerprint(s) in the ClientHello; the server then sends down the chosen cert's fingerprint instead of the cert itself, which could get the exchange back to 3 messages too. That would require hacking on TLS, though (assuming somebody hasn't already thought of this). It has the upside that the server chooses whether or not to use the cached version.
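The mechanics of that idea can be sketched in a few lines. Below is a minimal, hypothetical model (not real TLS code): the client caches certificates keyed by SHA-256 fingerprint and offers those fingerprints; the server sends the full certificate only if the client doesn't already hold the one it chose.

```python
import hashlib

def fingerprint(cert_der: bytes) -> bytes:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).digest()

class CertCache:
    """Client-side cache of previously seen server certificates."""
    def __init__(self):
        self._by_fp = {}

    def store(self, cert_der: bytes) -> bytes:
        fp = fingerprint(cert_der)
        self._by_fp[fp] = cert_der
        return fp

    def offered_fingerprints(self) -> list[bytes]:
        """Fingerprints the client would advertise in its ClientHello."""
        return list(self._by_fp)

def server_response(chosen_cert: bytes, offered: list[bytes]):
    """Server side: if the client already holds the chosen cert,
    send just its fingerprint; otherwise send the full certificate."""
    fp = fingerprint(chosen_cert)
    if fp in offered:
        return ("fingerprint", fp)      # cached path: no cert on the wire
    return ("certificate", chosen_cert)
```

As it happens, somebody has thought of something close to this: the TLS cached_info extension (RFC 7924) standardizes client-side caching of the server's certificate chain, negotiated via hashes in the handshake.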

Mike


