
Delay Pool metered by requests/second ?

Is it possible to configure Squid to delay delivery of an HTTP response without constraining the transmission rate?  I think this is effectively a delay pool which meters by requests per second instead of bytes per second.

For example, client A is watching a streaming video, and as a result he is making a series of serial HTTP requests for byte offsets in the target resource.  Each request happens to correspond to 10 seconds of video, but the number of bytes composing that 10-second chunk varies due to compression within the video.  (That's why we can't use byte-based metering.)  A rough sketch of the release logic I have in mind follows the timeline below.
*  t0 - the first request arrives, is proxied to the origin server, and the response is ready.
*  t0 - the response is sent immediately since this is the first request in the 'pool'.  The response begins at t0 and completes at t1.  450 kB sent at 450 kB/s.
*  t1 - the second request is received, proxied to the origin server, and the response is ready.
*  t1 - the response is placed into the delay pool for 9 seconds (assume the pool is configured at 1 request per 10 seconds): the previous response was released at t0, so the next one cannot go out until t10.
*  t10 - the second response is sent, beginning at t10 and ending at t12.  1 MB sent at 500 kB/s.
*  t15 - the third request arrives, is proxied to the origin, and the response is ready.
*  t15 - the response is placed into the delay pool until t20 (previous release time + configured delay).
*  t20 - the response is sent: 750 kB at 3 MB/s, completing at t20.25.
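To make that rule concrete, here is a rough sketch of the release logic I have in mind (plain Python, purely illustrative; the RequestPacer name and its methods are placeholders, not anything that exists in Squid today):

class RequestPacer:
    # Release at most one response per 'interval' seconds, per client.
    # A response is held until 'interval' seconds after the previous
    # response for the same client was released, then sent at full speed.

    def __init__(self, interval=10.0):
        self.interval = interval      # e.g. 1 request per 10 seconds
        self.last_release = {}        # client id -> time of last release

    def hold_for(self, client, now):
        # Seconds this response must be held before it may go out.
        last = self.last_release.get(client)
        if last is None:
            return 0.0                # first request in the 'pool': send now
        return max(0.0, last + self.interval - now)

    def released(self, client, now):
        # Record that a response for this client was just released.
        self.last_release[client] = now

# Replaying the timeline above:
pacer = RequestPacer(interval=10.0)
pacer.hold_for("A", 0.0)    # 0.0 -> first response goes out at t0
pacer.released("A", 0.0)
pacer.hold_for("A", 1.0)    # 9.0 -> second response held until t10
pacer.released("A", 10.0)
pacer.hold_for("A", 15.0)   # 5.0 -> third response held until t20
pacer.released("A", 20.0)

Note that the wire speed is never touched; only the moment the first byte of each response is allowed to leave is controlled.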

The objective is to control delivery of streaming media on the server side so that we deliver just in time to maintain continuous playback.  We do not want to deliver any faster than that because we are billing for content watched.  Normal streaming clients will buffer greedily, and we cannot determine seconds watched vs. seconds buffered.

While the existing delay pool seemed appealing at first, the variable-bitrate nature of the responses makes it impossible to limit based on bytes/second.  Additionally, the clients measure the reply transmission rate (size of the response over the time between its first byte and last byte) and use that to drive video quality decisions (i.e., given enough bandwidth, the client will upgrade to a higher bitrate of the video... "adaptive streaming").  Thus, we cannot artificially constrain the client's bandwidth.
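For context, here is roughly how I understand the client-side measurement (again just an illustrative Python sketch, not the actual player logic; the function names and the fixed bitrate list are my assumptions):

def measured_throughput(response_bytes, first_byte_t, last_byte_t):
    # Throughput as the player sees it: response size over wire time.
    return response_bytes / (last_byte_t - first_byte_t)

def choose_bitrate(throughput_bps, bitrates_bps):
    # Adaptive streaming: pick the highest rendition the measured
    # throughput appears able to sustain, else the lowest available.
    affordable = [b for b in bitrates_bps if b <= throughput_bps]
    return max(affordable) if affordable else min(bitrates_bps)

Since the measurement starts at the first byte of the response, delaying when that first byte is sent should leave the quality decision alone, whereas throttling the transfer rate directly degrades it.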


I would love to hear some thoughts from the experts.  Is this possible to configure, or will it require development?  If development, can anyone give me a shot-in-the-dark guess at the level of effort?  Am I looking in the wrong place / is this best solved some other way?

Thank you for your time and dedication!!
-Russell Allen


