Please set your mailer to wrap lines. Around 72 is usually good.

On 14/06/11 02:29, Russell M. Allen wrote:
Is it possible to configure Squid to delay delivery of the HTTP response, without constraining the transmission rate? I think this is effectively a delay pool which meters by requests per second instead of bytes per second.

For example, client A is watching a streaming video, and as a result he is making a series of serial HTTP requests for byte offsets in the target resource. Each request happens to correspond to 10 seconds of video, but the number of bytes composing that 10-second chunk varies due to compression within the video. (That's why we can't use byte-based metering.)

* t0 - the first request arrives, is proxied to the origin server, and the response is ready.
* t0 - the response is sent immediately since this is the first request in the 'pool'. Response begins at t0 and completes at t1. 450 kB sent at 450 kB/s.
* t1 - the second request is received, is proxied to the origin server, and the response is ready.
* t1 - the response is placed into the delay pool for 9 seconds (assume the pool is configured at 1 request/10 seconds), because the last request was at t0, so the next request can't return until t10.
* t10 - the second request is responded to, beginning at t10 and ending at t12. 1 MB sent at 500 kB/s.
* t15 - third request, proxied to origin, response ready.
* t15 - response put into the delay pool until t20 (last request time + configured delay).
* t20 - response sent. 750 kB at 3 MB/s. Response completes at t20.25.

The objective is to control delivery of streaming media on the server side so that we deliver just in time to maintain continuous playback. We do not want to deliver faster, because we are billing for content watched. Normal streaming clients will buffer greedily, and we cannot determine seconds watched vs. seconds buffered.

While the existing delay pool seemed appealing at first, the variable-bitrate nature of the responses makes it impossible to limit based on bytes/second. Additionally, the clients measure the reply transmission rate (size of the response over the time between its first and last byte) and use that as the driver for video quality decisions (i.e., given enough bandwidth, the client will upgrade to a higher bitrate of the video... "adaptive streaming"). Thus, we cannot artificially constrain the client's bandwidth.

I would love to hear some thoughts from the experts. Is this possible to configure, or will it require development? If development, can anyone give me a shot-in-the-dark guess about the level of effort? Am I looking in the wrong place / is this best solved another way?

Thank you for your time and dedication!!

-Russell Allen
Yes, it is possible to configure this. An external_acl_type helper that pauses for your desired delay period before OK'ing the request will do exactly that. If you find this works and need it to be more efficient, an ACL for Squid-3 will not be hard to create.
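As a rough sketch of that approach (untested; the helper path, ACL name, children count and the 10-second window are only placeholders), the squid.conf side could look something like:

  # squid.conf sketch: directive names are real, the rest is assumption
  external_acl_type pace_check ttl=0 negative_ttl=0 children=20 %SRC /usr/local/bin/delay_helper.py
  acl paced external pace_check
  http_access allow paced

and the helper itself might be as simple as:

  #!/usr/bin/env python
  # delay_helper.py -- minimal sketch of the delaying external_acl_type
  # helper described above (not a tested implementation).  Squid writes
  # one line per lookup containing the %SRC value (the client IP); we
  # hold the lookup until that client's next slot, then answer "OK".
  import sys
  import time

  WINDOW = 10.0       # assumed policy: 1 request per 10 seconds per client
  last_release = {}   # client IP -> time its previous request was released

  def main():
      for line in iter(sys.stdin.readline, ''):
          client = line.strip()
          now = time.time()
          release = max(now, last_release.get(client, 0.0) + WINDOW)
          last_release[client] = release
          time.sleep(release - now)    # hold the request until its slot
          sys.stdout.write('OK\n')
          sys.stdout.flush()           # replies must not sit in a buffer

  if __name__ == '__main__':
      main()

A couple of caveats with this sketch: each helper child keeps its own table, so with several children the per-client pacing is only approximate; ttl=0 is intended to stop Squid re-using a cached OK and skipping the delay on later requests; and this holds the request before it is forwarded to the origin, rather than holding back an already-fetched response as in your timeline.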
You may also want to look at chunked encoding on a long-polled connection. The middleware and server will inform you up front, with a 100-continue response, whether chunked is available end-to-end or not.
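Purely as an illustration of that idea (generic HTTP/1.1, nothing Squid-specific, and the segment sizes are just the ones from your timeline): a single long-lived chunked reply lets the server hold the connection open and release each segment only when its playback slot arrives, without ever advertising a total length. Chunk sizes are in hex, and the "..." lines stand in for the segment bytes:

  HTTP/1.1 200 OK
  Content-Type: video/mp4
  Transfer-Encoding: chunked

  70800
  ... 450 kB segment, released at t0 ...
  100000
  ... 1 MB segment, held back until t10 ...
  bb800
  ... 750 kB segment, held back until t20 ...
  0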
Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2