
RE: Chunked and Range header support

Hi Chris,

Thanks for the detailed reply about range headers. It was very informative on how to make Squid strip the Range headers before forwarding the request upstream.

Do you have any idea about chunked transfer encoding? I think the range offset header will be present in each block of data Squid sends to the client, but I am not sure about the Transfer-Encoding header. Will it be set on every chunk of data too?

Thanks.

Regards,
Anita

-----Original Message-----
From: Chris Bennett [mailto:chris@xxxxxxxxxxxxx] 
Sent: 04 July 2013 17:00
To: Anita Sivakumar (WT01 - GMT-Telecom Equipment)
Cc: squid-users@xxxxxxxxxxxxxxx
Subject: Re:  Chunked and Range header support

Hi Anita,

> 2)  Range headers - from my understanding, it looks like they use this for
> video streaming.. it looks like the client can request a part of the object
> body to be sent alone to him. Is it correct? In this case, if multiple
> ranges are requested, is it sent separately or in a consolidated manner?

I'm not sure what your use case is, but I've been playing with youtube
caching lately, and range headers play a part in trying to cache
client requests.

Squid won't cache requests for specific byte ranges. However, Squid can accept Range headers from the client and discard them when forwarding the request upstream.

  range_offset_limit -1 [<optional_acl>]

will do the job.  I use an ACL to apply it only to the specific domains I want this behaviour for.
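For example, a hypothetical squid.conf fragment (the domain list is just an illustration):

```
# Discard client Range headers only for selected video domains,
# so Squid fetches and caches the whole object.
acl video_domains dstdomain .youtube.com .googlevideo.com
range_offset_limit -1 video_domains

# Optional: keep downloading even if the client aborts,
# so the full object still ends up in the cache.
quick_abort_min -1 KB
```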

Squid will serve the client the correct bytes, but because the Range headers are discarded for the upstream retrieval, it will cache the whole object, and subsequent range requests for the cached object will result in hits.

Simultaneous requests for the same object with a Range header will likely each trigger a full retrieval of the whole file, though (I say likely as I haven't tested this, but I suspect that will be the case).

A related behaviour on YouTube specifically (and possibly other sites) is the use of '&range=X-Y' URL parameters instead of Range header requests.  I've noticed this more with web browsers on PCs, whereas I've seen the Range header requests on Apple iOS mobile platforms.

There have been some clever tricks using storeurl_rewrite or StoreID to include the requested byte range in the key of the stored object, so that without range_offset_limit Squid can store one object per unique client range request and serve a hit for subsequent identical requests.  My testing has indicated this is unreliable, as the byte ranges tend to vary with the client's current bitrate, so slightly offset range requests result in a lot of duplication in the cache.
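As a rough illustration of the StoreID idea, here is a minimal helper sketch in Python. The URL parameters (`id`, `range`) and the internal key format are assumptions for illustration, not YouTube's actual scheme:

```python
#!/usr/bin/env python3
"""Minimal StoreID helper sketch: folds the '&range=X-Y' URL
parameter into the cache key, so each unique client byte range
maps to its own stored object."""
import sys
from urllib.parse import urlsplit, parse_qs

def store_id(url):
    """Return a normalised cache key for the URL, or None to leave it alone."""
    qs = parse_qs(urlsplit(url).query)
    vid = qs.get("id", [None])[0]      # assumed video-id parameter
    rng = qs.get("range", [None])[0]   # the '&range=X-Y' URL parameter
    if vid and rng:
        # Hypothetical internal key: one object per (video, range) pair
        return "http://video.squid.internal/%s/%s" % (vid, rng)
    return None

def main():
    # Squid 3.4+ store_id helper protocol: read "URL [extras]" lines on
    # stdin, reply "OK store-id=<key>" or "ERR" per line.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        key = store_id(fields[0])
        sys.stdout.write("OK store-id=%s\n" % key if key else "ERR\n")
        sys.stdout.flush()

# main() would run here when deployed via store_id_program in squid.conf.
```

As the text above notes, because the ranges shift with the client's bitrate, keys built this way can still multiply in the cache.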

You can also utilise an ICAP server to do even funkier things, like moulding range URL parameters into Range header requests, so that the different client behaviours share the same cache objects.

Eliezer Croitoru (on this list) is the expert and has been a huge help in getting my understanding to where it is.

I don't know if this is way too much information for what you are
looking at, but it's fresh in my memory so I thought I'd dump it all
out and you can pick out anything of use :)

Regards,

Chris





