I find it curious that much of the debate has been about one particular
"DNS consumer" use case; I think dns-over-http can do more (and, as was
pointed out, application APIs seem a bit out of scope).

Think of the DNS "flow" as:

  consumer -> client -> resolver server -> auth

or, in case I am mixing up the proper terms used here, another example
just to make the one above clearer:

  application -> OS stub resolver -> [network] ->
    bind/powerdns/… resolver -> bind/nsd/powerdns/… auth

While I'd like access to lower-level DNS information in browser
JavaScript as much as everyone else, limiting "doh" to just a facility
for bypassing the OS resolver (and maybe the resolver server) seems a
bit limited. I think the target should be something that could
(eventually) be used in the rest of the "flow", too. (There is a small
sketch of such a query at the end of this message.)

From the dnsoverhttp group last November:

> On Nov 22, 2016, at 20:47, Shane Kerr <shane@xxxxxxxxxxxxxxxxxxx> wrote:
>
> One thing that really stuck with me at the dnsoverhttp bar-BoF was
> when someone said that DNS is adopting features that look like HTTP
> (DNS session handling, DNS RR server-side push, etc.), and that HTTP
> is adopting features that look like DNS (sending certificate chains,
> providing address information, etc.).

The someone was me. My thought is that if we're changing the DNS
protocol (or adding another one…) there might be a big practical
advantage in building on the huge implementation investments being made
in QUIC. The "value" in DNS is the delegation and data model, not the
current "bits on the wire" protocol. For future features (privacy,
encryption between resolvers and auths, etc.) a different protocol
might well be appropriate. (The second sketch at the end makes this
data-model point concrete.)

(Still quoting Shane)

> I wonder if our models are starting to break down? Having two
> protocols with seemingly very little in common adopting the same
> features kind of implies that our abstractions are broken, right?

It is (eventually) working out for IPv6! :-)


Ask
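
PS: For concreteness, here is a minimal sketch (in Python) of a lookup
that skips the stub resolver and talks HTTP directly. It assumes a GET
carrying the ordinary DNS wire message, base64url-encoded into a "dns"
parameter, which is one of the encodings that has been discussed; the
endpoint https://dns.example/dns-query is made up.

    import base64

    import dns.message  # pip install dnspython
    import requests

    # Build an ordinary DNS wire-format query: the same bytes a stub
    # resolver would send over UDP port 53.
    query = dns.message.make_query("www.example.com", "A")
    wire = query.to_wire()

    # Ship it over HTTPS instead: base64url-encode the message
    # (padding stripped) into a "dns" query parameter.
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
    resp = requests.get(
        "https://dns.example/dns-query",  # hypothetical DoH endpoint
        params={"dns": b64},
        headers={"Accept": "application/dns-message"},
    )
    resp.raise_for_status()

    # The response body is again plain DNS wire format.
    print(dns.message.from_wire(resp.content))

Note that nothing in that exchange cares which hop of the "flow" issues
it; a recursive resolver could use the same mechanism toward an auth.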
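
PPS: And the same lookup with the wire format replaced entirely, to
illustrate that the delegation and data model survive a change of
"bits on the wire". The URL and parameter names here follow the JSON
front end Google's public resolver exposes at
https://dns.google/resolve; they are one vendor's example, not a
standard.

    import requests

    # Same question, same answer data, completely different encoding.
    resp = requests.get(
        "https://dns.google/resolve",
        params={"name": "www.example.com", "type": "A"},
    )
    resp.raise_for_status()
    for rr in resp.json().get("Answer", []):
        print(rr["name"], rr["TTL"], rr["data"])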