
Re: RFE - HTTP 1.1 RANGES

Linda W wrote:
If I missed this, please let me know, but I was wondering why
HTTP 1.1 isn't on the roadmap?  I don't know all the details,
but compression and RANGES are two features that could speed
up web usage for the average user.

Not sure which roadmap you are looking at. HTTP/1.1 is on the TODO list of Squid-3.
http://wiki.squid-cache.org/RoadMap/Squid3#TODO
http://wiki.squid-cache.org/Features/HTTP11

A lot of the little details have been done quietly, such as fixing up Date: headers, sending the right error status codes, and handling large values or odd syntax in certain headers correctly.

I've started working on some experiments towards Expect-100 support recently, but it's early days on that.


Ranges, it seems to me, could be kept in a linked list of power-of-two-sized chunks corresponding to the content downloaded 'so far'. The stored content might never be completed before being evicted, but at least it would allow the storage and potential reuse of a 'range' when rounded to the nearest 'min-chunk-size' (~4K? 8K? a user config item). Squid could even seek over the blank areas rather than initializing them to some value, to encourage sparse-file usage on file systems that support such files.

Would it be that difficult, at minimum, to view a byte-ranged object as a sequence of 4K 'pseudo files', where Squid starts downloading from the nearest pseudo-file boundary and stores each piece on disk? That way users could benefit from byte ranges on update servers without being forced to download the whole file. Where I **especially** notice this is when trying to 'continue' an aborted download -- that doesn't work through Squid, because the continuation process (e.g. 'wget') uses a byte range to pick up where it left off. Either that, or allow byte-ranged requests to pass through without being cached?
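The chunk-aligned sparse-file idea above can be sketched roughly as follows. This is a hypothetical illustration, not Squid code; the chunk size, file layout, and function names are all my own assumptions:

```python
import os

CHUNK = 4096  # hypothetical 'min-chunk-size'; in Squid this would be a config item


def store_range(path, offset, data):
    """Write one downloaded byte range into the on-disk object file.

    Seeking past unwritten regions, instead of zero-filling them, lets the
    filesystem keep those regions as holes on sparse-file-capable systems.
    Returns the set of chunk indexes now (at least partly) populated.
    """
    mode = "r+b" if os.path.exists(path) else "w+b"
    with open(path, mode) as f:
        f.seek(offset)  # any gap before this point stays a hole
        f.write(data)
    first = offset // CHUNK
    last = (offset + len(data) - 1) // CHUNK
    return set(range(first, last + 1))
```

A later request for a byte range would then be served from disk only if every chunk it touches is in the populated set; anything else falls through to the origin server.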

Nice ideas. AFAIK the range support has always been stuck on the detail of how to store ranges.

Before things like that can be done with the metadata, Squid needs to be made able to store the headers and some metadata separately from the body object. I'm not sure where work towards that is at, other than that it's still blocking good range handling.

A storage engine matching the spec above would be very welcome. It would also need to account for ranges on objects of unknown total length (aborted requests being a special case of that). But that is minor compared to getting the list of ranges indexed in the metadata.
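One way to index the downloaded ranges in the metadata is an ordered list of non-overlapping (start, end) intervals that gets merged as new pieces arrive; an unknown total length simply means the object can never be declared complete. A sketch, with hypothetical names, not the actual Squid metadata format:

```python
def add_range(ranges, start, end):
    """Insert half-open interval [start, end) into a sorted,
    non-overlapping interval list, merging where pieces overlap or touch."""
    merged = []
    for s, e in sorted(ranges + [(start, end)]):
        if merged and s <= merged[-1][1]:  # overlaps or touches the previous piece
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged


def is_complete(ranges, total_length):
    """total_length may be None for aborted / unknown-length objects."""
    if total_length is None:
        return False
    return ranges == [(0, total_length)]
```

Keeping the list merged means a range hit check is a single walk over a short list, and the metadata stays small no matter how the pieces arrived.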



It sounds like some (most?) of the compression support might already
be there?

De-compression support is fully there for 3.0+. But Squid does not yet compress transfer-encoded chunks.

For 3.1+ a third-party module exists for gzip/deflate compression of body content in transit. It can use either eCAP or ICAP to do the compression.
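For illustration only, the two content-codings such a module deals with can be produced with the standard gzip and zlib formats. This Python sketch shows just the codings themselves, not the eCAP/ICAP plumbing:

```python
import gzip
import zlib

body = b"<html>example body for in-transit compression</html>" * 50

# "gzip" content-coding: deflate data in a gzip wrapper
gz = gzip.compress(body)
assert gzip.decompress(gz) == body

# "deflate" content-coding: per the HTTP spec, zlib-wrapped deflate
df = zlib.compress(body)
assert zlib.decompress(df) == body
```

The adaptation module's job is then just to apply one of these codings to the response body and set the matching Content-Encoding header, based on what the client's Accept-Encoding allows.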

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15
