it sounds like you've built some kind of implicit ambient authority into the connection after the first request/response occurs.. that kind of connection-based auth is a bit of an anti-pattern in the http world. (I was going to say despite a couple of prominent examples, but actually it might be because of a couple of prominent examples). You really have very little control over which actual tcp connections individual transactions occur on, in an end-to-end sense, when you take into consideration (forward and reverse) proxies, including things like anti-virus MITMs - the connection can't do a good job of conveying state.
For similar reasons, SETTINGS-based profiles aren't going to be especially effective - SETTINGS is hop-by-hop, and profiling a whole server (instead of the URIs used by your application) means the origin can't serve anything else (efficiently). Also consider h1 pipelines, which might very well mean your existing approach doesn't have the synchronization point you were hoping for.
Given all of that, you're much better off building a session mechanism at the application level.. as ugly as cookies are, they are the typical way to do that in a consistent manner in both h1 and h2. That's going to be way more robust (and performant) in the long run.
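To make the suggestion concrete, here is a minimal sketch of what an application-level session looks like instead of connection-based auth. The function names and in-memory store are purely illustrative, not any existing BRSKI or EST API: the server mints a token after the first authenticated exchange and the client echoes it back in a Cookie header, so the binding survives whatever connections or h2 streams the requests actually travel on.

```python
import secrets

# In-memory session store; a real deployment would persist and expire these.
SESSIONS = {}

def issue_session(client_id):
    """After the first authenticated exchange, mint a session token and
    return the Set-Cookie header value that carries it to the client."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = client_id
    # Secure/HttpOnly so the token only travels over TLS, never to scripts.
    return f"session={token}; Secure; HttpOnly; SameSite=Strict"

def client_for_request(cookie_header):
    """Look up the session on any later request, regardless of which TCP
    connection (or h2 stream) that request happened to arrive on."""
    for part in cookie_header.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "session":
            return SESSIONS.get(value)
    return None
```

The point of the sketch is that the state lives in the request, not the transport, which is exactly what proxies and MITM boxes can't break.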
hope that helps.
On Tue, Oct 3, 2017 at 5:58 PM, Michael Richardson <mcr+ietf@xxxxxxxxxxxx> wrote:
I'm seeking guidance. It might be that this is a job for lwig, or perhaps
there is already a document which details how to do the profiling I care
about. If so, please point me at that.
In ANIMA's BRSKI protocol, which is based upon RFC7030 EST, we run over HTTP
1.1 today using a single TCP connection.
It's HTTP over TLS ("HTTPS") [h2c?] rather than an upgrade.
In BRSKI there are state issues with both the TLS and the HTTP process.
The TLS server certificate is not trusted by the client until after at least
one RESTful interaction has occurred. Only once that has occurred is the
connection considered secure.
We are concerned about how the protocol could become broken if used with
HTTP/2 and its interleaving of requests/responses. With cleartext HTTP
(port-80) transactions, using the RFC 7540 section 3.2 upgrade, one could
just decline to ever do the upgrade and be done. With HTTPS, the protocol
version is negotiated in the TLS handshake (via ALPN), so one can go to
HTTP/2 directly, I think.
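For what it's worth, a peer that wants to stay on HTTP/1.1 over TLS can simply refuse to offer "h2" in ALPN. A sketch using Python's stdlib ssl module (the contexts here are illustrative; nothing BRSKI-specific is assumed):

```python
import ssl

# Client side: only offer "http/1.1" in ALPN, so a compliant server
# cannot select "h2" for this connection.
client_ctx = ssl.create_default_context()
client_ctx.set_alpn_protocols(["http/1.1"])

# Server side can pin the same way when accepting connections.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.set_alpn_protocols(["http/1.1"])
```

Since ALPN is negotiated inside the TLS handshake, this decision is made before any HTTP bytes flow.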
While limiting ourselves to HTTP/1.1 might be a good policy, I worry that
at some point in 10 years, the HTTP/1.1 code might be in the library only to
support our use case, with the application code on the device having moved on
to HTTP2 only.
What I think we are looking for is a set of things we can specify that
will make HTTP2 as deterministic (and slow!) as HTTP/1.1. For instance,
I think that
SETTINGS_MAX_CONCURRENT_STREAMS = 1
would do 90% of it. Should we turn off PUSH?
Can/should we recommend lower values for window sizes, etc?
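To make the SETTINGS idea above concrete, here is a sketch of the actual SETTINGS frame (RFC 7540 section 6.5) such a profile would send: one concurrent stream, server push disabled, and a small initial flow-control window. The window value chosen here is illustrative, not a recommendation from any spec.

```python
import struct

# Setting identifiers from RFC 7540 section 6.5.2.
SETTINGS_ENABLE_PUSH = 0x2
SETTINGS_MAX_CONCURRENT_STREAMS = 0x3
SETTINGS_INITIAL_WINDOW_SIZE = 0x4

def settings_frame(settings):
    """Serialize a SETTINGS frame: each setting is a 16-bit identifier
    followed by a 32-bit value (RFC 7540 section 6.5.1)."""
    payload = b"".join(struct.pack("!HI", ident, value)
                       for ident, value in settings)
    # Frame header: 24-bit length, type 0x4 (SETTINGS), flags 0, stream 0.
    length = len(payload)
    header = struct.pack("!BHBBI", length >> 16, length & 0xFFFF, 0x4, 0, 0)
    return header + payload

frame = settings_frame([
    (SETTINGS_ENABLE_PUSH, 0),
    (SETTINGS_MAX_CONCURRENT_STREAMS, 1),
    (SETTINGS_INITIAL_WINDOW_SIZE, 16384),
])
```

With MAX_CONCURRENT_STREAMS=1 the peer can have at most one request in flight at a time, which is the h1-like serialization being asked about; note it constrains the peer's behavior only, per the hop-by-hop caveat above.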
(ANIMA end points are not constrained in the IoT/RFC7228 sense, but rather
are control plane CPUs, with megabytes of ram, often partitioned at boot
time. But ANIMA BRSKI may also apply to Web-connected devices like the
baby monitors that have wreaked havoc.)
--
Michael Richardson <mcr+IETF@xxxxxxxxxxxx>, Sandelman Software Works
-= IPv6 IoT consulting =-