Re: /.well-known (RFC5785) vs versioned APIs [and EST/RFC7030]

On Wed, May 1, 2019 at 2:55 PM Nico Williams <nico@xxxxxxxxxxxxxxxx> wrote:
> On Sun, Feb 24, 2019 at 11:29:10PM -0500, Phillip Hallam-Baker wrote:
> > So let's take a DNS configuration for my Mathematical Mesh service:
> >
> >    _mmm._tcp.example.com SRV 0 10 80 host1.example.com
> >    _mmm._tcp.example.com SRV 0 40 80 host2.example.com
> >    _mmm._tcp.example.com TXT "version=1.0-2.0"

> Also, does _tcp imply HTTP/1.1 in your example?
>
> And if you want HTTP/3, will you add:
>
> |    _mmm._udp.example.com SRV 0 10 80 host1.example.com
> |    _mmm._udp.example.com SRV 0 40 80 host2.example.com
> |    _mmm._udp.example.com TXT "version=1.0-2.0"

Web services barely make any use of HTTP beyond the endpoint specifier, so I don't really see a reason to use HTTP/3. But if you were going to use it, that is how I would specify it.

A better outcome, in my view, would be to develop one presentation layer (or possibly more) designed expressly for Web Services. So: TXT "version=1.0-2.0 tran=https,http3,nwsp", where:

https = HTTP/1.1 + TLS
http3 = HTTP/3 over TLS
nwsp = some new protocol, possibly QUIC-based or based on something else.
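
By way of illustration, here is a short Python sketch of how a client might parse such a TXT record into a version range and a transport preference list. The key=value layout and the field names are just the example above, not anything specified:

    def parse_discovery_txt(txt):
        """Split 'version=1.0-2.0 tran=https,http3,nwsp' into fields."""
        fields = dict(item.split("=", 1) for item in txt.split())
        low, high = fields["version"].split("-")              # supported version range
        transports = fields.get("tran", "https").split(",")   # preference order
        return {"min_version": low, "max_version": high, "transports": transports}

    print(parse_discovery_txt("version=1.0-2.0 tran=https,http3,nwsp"))
    # {'min_version': '1.0', 'max_version': '2.0', 'transports': ['https', 'http3', 'nwsp']}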

People keep pointing out that layering everything over HTTP is a bad idea. Well, I was one of the original authors of HTTP, and I agree. HTTP/1.0 is arguably an OK but not great Web Services transport; none of the changes to HTTP since then have been designed to support Web Services.

 
> Now you're doing quite a few DNS lookups.  If you need to convey that
> host1 supports only version 1 and host2 only version 2, you might find
> yourself with a transport x version cartesian product of DNS lookups to
> do.  This is bad.

I am doing fewer lookups than I would be if I were also doing TLSA, though. One of my recommendations to the deployment workshop is to recognize that when a WG output fails to thrive, this should be noted and the way cleared for proposals that are designed for deployment. The DANE WG insulted the core constituencies required for deployment success; the BULLRUN team couldn't have sabotaged the effort better if they had tried.

The way I see it, we need to adopt a two-stage strategy, the first step being to make it possible to express all the information we need to express in the form of widely supported DNS records. TLSA is not, and will never be, widely supported by Internet hosting providers as long as they sell DNS registrations at cost, hoping to make their profits off selling WebPKI certs. Structuring DANE the way it was has not only killed deployment of TLSA records, it has made it very, very difficult to deploy any new records.

Stage two is to provide improved ways of delivering the DNS information. The DNS data model is what it is and changing it would take decades at the least. But we can and are replacing the client-resolver protocol and can probably change the resolver-authoritative protocol.

What I proposed with Omnibroker is that we should replace all the client-resolver lookups with a single request/response, 'what do I need to know to talk service X to service Y', and back would come a set of weighted IP addresses and OCSP tokens.

So the resolver, which is trusted but unaccountable today, would expand its function and remain trusted, but become auditable and accountable.
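
To illustrate the shape of that exchange, a hypothetical Python sketch. The function, the fields, and the weighting are invented for illustration; none of this is taken from a published Omnibroker specification:

    from dataclasses import dataclass

    @dataclass
    class Endpoint:
        address: str       # resolved IP address
        port: int
        weight: int        # relative preference, as with SRV weights
        ocsp_token: bytes  # revocation status token for the host's certificate

    def connection_query(service, peer):
        """'What do I need to know to talk service X to service Y?'
        One round trip returns everything: addresses, weights, OCSP tokens."""
        # A real broker would do one signed request/response over a secure
        # transport; here we only show the shape of the answer.
        return [
            Endpoint("10.0.1.1", 443, 10, b"<ocsp-token-1>"),
            Endpoint("10.0.1.2", 443, 40, b"<ocsp-token-2>"),
        ]

    for ep in connection_query("_mmm", "example.com"):
        print(ep.address, ep.port, ep.weight)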
 
> I'd much rather do something like this:
>
>      _mmm._discovery.$DOMAIN SRV host1.example.com ...
>      _mmm._discovery.$DOMAIN SRV host2.example.com ...
>      ; path is /mmm on all of the hosts in the SRV RRset:
>      _mmm._discovery.$DOMAIN TXT "path=/mmm"
>      host1.$DOMAIN     A 10.0.1.1
>      host2.$DOMAIN     A 10.0.1.2
>
> then GET /mmm/discovery at either host1 or host2, which should be a
> static resource with all the configuration information needed for the
> mmm application.

I think that, modulo the use of _discovery instead of _tcp, that approach is already specified in the doc, though not necessarily called out.
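
For concreteness, a Python sketch of that two-step discovery, assuming the dnspython and requests libraries are available. The _mmm._discovery label and the /mmm path are the hypothetical values from the records above:

    import dns.resolver   # pip install dnspython
    import requests       # pip install requests

    def discover(domain):
        # Step 1: the SRV RRset yields the candidate hosts and ports.
        srv = dns.resolver.resolve("_mmm._discovery." + domain, "SRV")
        # Step 2: the TXT record yields the path shared by all those hosts.
        path = "/mmm"  # default, overridden by a path= key if present
        for rdata in dns.resolver.resolve("_mmm._discovery." + domain, "TXT"):
            for s in rdata.strings:
                if s.startswith(b"path="):
                    path = s[len(b"path="):].decode()
        # Step 3: GET the static discovery resource from the first host that
        # answers; a real client would also honour SRV priority and weight.
        for record in sorted(srv, key=lambda r: r.priority):
            host = str(record.target).rstrip(".")
            try:
                url = "https://%s:%d%s/discovery" % (host, record.port, path)
                return requests.get(url, timeout=5).json()
            except requests.RequestException:
                continue
        raise RuntimeError("no discovery host reachable")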

The key point is that we need to pick either one way to do this, or a very small number of ways, so that DNS resolvers can choose their additional records wisely.


Incidentally, one change to DNS that could give immense leverage would be an extension allowing multiple UDP response packets for a single request packet. I did this in Omnibroker because I can always fit requests into a single packet (since a DNS name has limited length), but I frequently need multiple responses.

So imagine if we had an extension to DNS so that a client can say 'I can handle multiple responses'. This would allow the service to send the kitchen sink in response to a single request. The first time a client makes a request of the resolver, it returns an RR that says 'I support multi-query'. The next time the client makes a request, it goes like this:

Client request: _mmm._discovery.example.com RR=DISCO
Resolver response:

Packet 1: 20+1 additional records
Packet 2: 23+1 additional records
Packet 3: 8+1 additional records

The basic idea being that, instead of truncating responses that are too long, we simply spread them across several packets (three in this example) that are all smaller than the MTU, so as to avoid fragmentation. DISCO is a portmanteau record that means 'return me a complete host connection chain for this service'.

The +1 record contains the information telling the client that there are three packets in total to look for. 
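
To show what the client side might look like, a Python sketch of the reassembly, assuming a hypothetical +1 trailer record carrying (sequence number, total packets). DISCO and the trailer format are proposals here, not deployed DNS features:

    def reassemble(packets):
        """Each packet is (seq, total, records). Return the combined record
        set once every packet 1..total has arrived, else None."""
        if not packets:
            return None
        total = packets[0][1]
        seen = {seq: records for seq, _, records in packets}
        if set(seen) != set(range(1, total + 1)):
            return None  # incomplete: re-request the whole response
        combined = []
        for seq in range(1, total + 1):
            combined.extend(seen[seq])
        return combined

    # The three-packet example above: 20 + 23 + 8 additional records.
    packets = [(1, 3, ["r"] * 20), (2, 3, ["r"] * 23), (3, 3, ["r"] * 8)]
    print(len(reassemble(packets)))  # 51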

This approach will be blocked by some firewalls, but that should not be an issue, since your DNS resolver should either be inside the firewall or be tunneling. And every response still looks like a regular DNS response, so it should not be blocked by most ISP configurations.

If I were going to spec this out, I would make sure the +1 record at least has a slot for a TSIG.


I have not done any measurements, but I am sure someone will tell me if I am wrong. It is my very strong suspicion that if a server returns a sequence of 1-16 UDP packets, then >95% of the time either all of the packets will get through or none will.

UDP was a lot flakier when DNS was designed, and there was a real chance of partial packet loss, so partial retransmit would have made sense. I strongly suspect that for the modern Internet we can simply resend the entire section.
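
A sketch of that all-or-nothing retry: if the multi-packet response is incomplete after a timeout, the client resends the whole query rather than asking for individual missing packets. The send_query and recv_packets callables are placeholders, and reassemble is the sketch above:

    import time

    def fetch_with_retry(send_query, recv_packets, retries=3, timeout=1.0):
        """send_query() issues the DISCO request; recv_packets(deadline)
        returns whatever packets arrived before the deadline."""
        for _ in range(retries):
            send_query()
            packets = recv_packets(time.monotonic() + timeout)
            result = reassemble(packets)  # from the sketch above
            if result is not None:
                return result             # complete response
            # Partial delivery is rare enough that we simply drop what we
            # have and ask for the entire response again.
        raise TimeoutError("DISCO response incomplete after retries")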


It goes without saying, of course, that the definition of DISCO should work with DoH, because it is a discovery chain that is going to be needed at the other end.
