Responding to the thread as a whole:
1) I see no evidence that HTTP/2 is suited to Web Services or will be dominant in that role. HTTP/2 was designed to serve Web Browsing to the exclusion of all other concerns, which was the right choice to make.
Web Services that are not merely information-fetch protocols rarely make any use of HTTP features at all, except for framing and for using URIs to multiply the effective number of ports. Raw TCP/IP transport is no longer practical because of the large number of firewalls and NATs in the path, and 2^16 ports would be insufficient anyway.
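To make that concrete: a typical Web Service request looks something like the sketch below (host, path and payload are invented for illustration). The URI path is doing the job a port number would once have done, and the rest of HTTP is acting as little more than a framing wrapper:

    POST /services/orders HTTP/1.1
    Host: api.example.com
    Content-Type: application/json
    Content-Length: 33

    {"action":"create","item":"1234"}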
2) Web Services over some form of QUIC are almost certain to be popular.
QUIC addresses pretty much the same set of concerns that the use of HTTP/1.1 for Web Services does, and it finesses the problem of TCP/IP having been designed for a different age. If QUIC were not being developed, a lot of Web Service developers might well look at HTTP/2, but as things stand I have zero interest in HTTP/2 because I know it is a dead end for Web Services.
This is a *good thing*, BTW. There are only two types of traffic: Web Browsing and non-Web Browsing. Does it really make sense to suggest that both should run over the same protocol?
3) This is the wrong time to do this work.
Failed experiments come at the cost of creating obstacles that get in the way of real progress. The same argument is made every time one of these experiments is proposed, and we get the same result. During chartering the argument is "We are just concentrating on one problem; this does not exclude anyone else." After the working group is chartered it is "Hands off our turf." After the RFCs are published and nobody is implementing them it is "We tried, and we proved that nobody else could have solved the problem."
There is a growing list of these failed projects; let's just pick DANE. DANE was meant to be a way to publish TLS certificates and to communicate a 'must use TLS' security policy. There are numerous technical and commercial reasons why that combination was doomed to fail even if the deployment platform had not been DNS, which squares the difficulty of everything. The proposal did not have the support of the Web Browser providers, whose co-operation was essential, and it killed attempts to sell DNSSEC through DNS Registrars (most of whom make their margin on WebPKI affiliate fees). Input from all stakeholders was rejected.
If you recall, CAA was originally proposed by Ben Laurie of Google and myself. The reason Ben dropped out was that I was forced to remove the security policy dimension of CAA. Given that CABForum now requires CAs to support CAA, doesn't this mean that there was a real cost to letting the DANE group bully us out of that space?
[This is not how I would do security policy now, BTW; I would look to use TXT records, as described in RFC 6763, to carry the same sort of data that is specified in HTTP Key Pinning for Web Services.]
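To sketch what I mean (the record owner name here is hypothetical and the directives are borrowed straight from HTTP Key Pinning; none of this is taken from a published specification), the policy could be expressed as attribute pairs in a TXT record along these lines:

    _pin._tcp.example.com.  IN  TXT  "pin-sha256=<base64 SPKI hash>" "max-age=2592000"

Publishing the pins in the DNS would let a relying party discover the policy before it makes first contact, rather than pinning after the fact the way the HTTP header mechanism does.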
We had a re-run of the same issues with DPRIVE, which began with the assertion that a solution must be found within a year. That restricted the range of technology solutions to a set that was utterly inappropriate. The WG is still functioning, but it currently has no support from any deployment stakeholder that I am aware of.
The point I am making here is that DNS is the wrong layer in the stack for:
* Working Groups with a narrow scope
* Working Groups attempting to achieve a result on a tight time scale
* Working Groups that insist on owning the topic
* Working Groups that refuse to engage the necessary stakeholders