On Fri, 2005-08-05 at 19:51 +0200, Henrik Nordstrom wrote:
> On Sun, 31 Jul 2005, Benjamin Carlyle wrote:
> > I've collected some research to date on several aspects of how the
> > protocol will be handled by proxies, but my chief unknown at present
> > is how the returned updates will be handled. My model is that a new
> > 1xx series return code be introduced to carry these messages, and
> > that a typical subscription consist of (request using SUBSCRIBE
> > method (or similar), 1xx response * n, final non-1xx response). My
> > hope is that these can be intermingled on a single persistent
> > connection as follows, in a conversation that is in some ways
> > similar to pipelining:
> > 1) SUBSCRIBE resourceA HTTP/1.1
> > 2) receive 1xx response with initial state information for A
> > 3) SUBSCRIBE resourceB HTTP/1.1
> > 4) receive 1xx response with initial state information for B
> > 5-n) receive 1xx responses intermingled for both A and B
>
> This is not a valid HTTP sequence. There must be some form of
> non-informational response to the first SUBSCRIBE request before the
> next request can be processed.

I was beginning to think this might be the case.

> For a protocol to work correctly over HTTP it must
>
> a) Not depend on persistent connections. HTTP connections are by
> definition hop-by-hop transport links, not end-to-end. (8.1.3)
>
> b) Each request must be self-contained (follows from 'a'), relying
> only on its own data and connected state information known to be
> possessed by the server at the time the request was sent. If there is
> session sequence dependency on other requests then a request must not
> rely on information/actions not yet acknowledged by a
> non-informational message from the server. (8.1.2.2)
>
> c) Replaying of the same request should not be harmful if a safe or
> idempotent method is used. (9.1)
>
> d) Slight reordering of requests may be expected. This applies both
> to pipelined requests and to requests sent over separate connections.
> A proxy is allowed to split a pipeline and send the requests in
> parallel on separate connections if it desires.

I've done a small write-up of my current thinking at
http://members.optusnet.com.au/benjamincarlyle/benjamin/blog/2005/08/07#internetSubscription

I think it's clear that even my updated model is flawed. Trying to hold
off on sending a response while waiting for a change is bound to cause
major disruption to (unaware) proxy connection handling. It seems that
the HTTP protocol can't be extended safely (i.e. in a way that doesn't
harm older proxies) to support a subscription protocol of the ilk I've
been considering. This leads me to think that another protocol needs to
be developed to support the concept, or perhaps a new HTTP protocol
version that would be negotiated away should an older proxy sit between
client and server. Clients would be forced to fall back to a polling
technique if one of the current HTTP versions were used.

The reason I've been applying my misguided thinking about end-to-end
connections from client to server is the vain hope of building
something that could work now, rather than having to convince everyone
that this kind of subscription is a good idea before it starts to work
at all. I believe that the kind of subscription I have in mind could
work effectively with a new generation of subscription-aware proxies,
which could return the subscription model to one of hops rather than
end-to-end connections. A client may request subscription to a URL of a
proxy, and the proxy could return data relating to the subscription
directly from its cache. The cache itself could be updated via a
connection to the next proxy or to the origin server. To my mind the
current infrastructure of the web would be suitable for an
Internet-scale subscription model, but I'm losing hope that there is a
harmless interim step that can be taken to get there.
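To make the polling fallback concrete, here is a minimal sketch (my
own, not from the thread) of a client shaped to satisfy Henrik's
constraints: each request is a self-contained, safe conditional GET
carrying the entity tag from the previous response, so it can be
replayed or reordered and does not depend on any one persistent
connection. The `fetch` callable is a stand-in for a real HTTP round
trip; all names are illustrative.

```python
def build_poll_request(resource, host, etag=None):
    """Build a conditional GET; valid on any connection, safe to replay."""
    lines = ["GET %s HTTP/1.1" % resource, "Host: %s" % host]
    if etag is not None:
        # A 304 Not Modified reply means the subscribed state is unchanged.
        lines.append("If-None-Match: %s" % etag)
    return "\r\n".join(lines) + "\r\n\r\n"


def poll_until_changed(fetch, etag, max_polls=10):
    """Repeatedly poll until the representation changes or polls run out.

    fetch(etag) -> (status, new_etag, body). A real client would sleep
    between 304 responses rather than spinning in a tight loop.
    """
    for _ in range(max_polls):
        status, new_etag, body = fetch(etag)
        if status == 200:  # representation changed; new state delivered
            return new_etag, body
        etag = new_etag  # 304: keep the tag and poll again
    return etag, None
```

A push-style subscription would replace this loop with a single
long-lived exchange, which is exactly the step that seems impossible to
retrofit safely past unaware proxies.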
Without that working prototype I believe it will be much harder to
achieve the overall objective from a political perspective. Oh, well.
Perhaps I'll go back to working on small-scale trial implementations
for the time being...

Thanks for your input.

--
Benjamin Carlyle <benjamincarlyle@xxxxxxxxxxxxxxx>