
Re: Range header is a hit ratio killer

Hey Simon,

I do not know of any concrete plans, but a solution will depend on a couple of things that may fit one case and not another.
The assumption that we can fetch any part of the object is the first step of any solution whatsoever.
However, it is not guaranteed that every request will be for a public (cachable) object.

The idea of static chunks has existed for many years, in many applications and in many forms, and the YouTube video player uses a similar idea. Google video clients and servers put the bytes "range" request in the URL rather than in the request header.
Technically it would be possible to implement such an idea, but it has its own cost.
If the file is indeed public (what Squid was designed to cache), then it might not be a big problem.
Depending on the target sites, the solution will be different.
Before deciding on a specific solution, my preferred path is to analyze the requests.

By traffic amplified 500% compared to the client side, do you mean that the incoming traffic from the origin servers is 500% of the output towards the clients?
If so, I think there might be a "smarter" solution than a 206 range offset limit.
The old method of prefetching works pretty well in many cases. From what you describe, it might have better luck than the plain "fetch everything on the wire in real time".
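
A rough sketch of what such a prefetching setup could look like in squid.conf; the ACL name and pattern are placeholders and the quick_abort value is only illustrative, so treat it as a starting point rather than a recommendation:

  # Placeholder ACL: match the objects you actually want prefetched.
  acl big_media url_regex -i \.(mp4|flv|webm)$

  # Ignore the client's Range header for matching requests and ask the
  # origin for the whole object, so later range requests can be HITs.
  range_offset_limit none big_media

  # Keep downloading a cachable object even if the client aborts early,
  # otherwise the prefetched data is thrown away.
  quick_abort_min -1 KB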

I cannot guarantee that prefetching is the right solution for you, but I think a case like this deserves a couple of extra pairs of eyes to work out whether there is a right way to handle the situation.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: eliezer@xxxxxxxxxxxx


-----Original Message-----
From: squid-users [mailto:squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of k simon
Sent: Saturday, August 6, 2016 12:56 PM
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject:  Range header is a hit ratio killer

Hi, list,
   Code 206 responses are the biggest pain for our forward proxy. Squid uses "range_offset_limit" to control how byte-range requests are processed (see the sketch below). When it is set to "none", there are two well-known issues:
1.  It boosts the traffic on the server side; we observed it amplified 500% compared to the client side on our box.
2.  It always fails on a lossy link, and Squid refetches the object again and again.
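
For reference, the setting described above is just the following line in squid.conf; the ACL-scoped variant in the comment is only an illustration of how its scope could be narrowed:

  range_offset_limit none
  # or limited to particular sites via an ACL, e.g.:
  # range_offset_limit none some_sites_acl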
   I've noticed that nginx has officially supported "byte-range caching" since 1.9.8 via the ngx_http_slice_module
(1. http://nginx.org/en/docs/http/ngx_http_slice_module.html?_ga=1.140845234.106894549.1470474534
2. https://www.nginx.com/resources/admin-guide/content-caching/ ).
   The solution is not perfect, but it's really more usable than "range_offset_limit". The trick is that fixed-size slices stand in for the whole file: the proxy rewrites the request's range offset and passes it on to the server. Perhaps forwarding the original range offset and caching part of the object under a range-aware cache key would be an even better idea. Squid would then need to know how to assemble those objects and serve requests that carry a Range header.
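
A minimal sketch of that nginx configuration, adapted from the documentation linked above; the upstream address and the cache zone name are placeholders, and the matching proxy_cache_path definition is assumed to exist elsewhere:

  location / {
      # Split the upstream object into 1 MB slices, each cached separately.
      slice             1m;
      proxy_cache       media_cache;
      # Include the requested slice range in the cache key.
      proxy_cache_key   $uri$is_args$args$slice_range;
      # Ask the upstream only for the slice that covers the client's range.
      proxy_set_header  Range $slice_range;
      # 206 responses must be cachable for slicing to work.
      proxy_cache_valid 200 206 1h;
      proxy_pass        http://origin.example.com;
  }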
   Caching fixed-size objects may also benefit disk I/O. It sounds similar to the big-rock db concept, though I have not had success with rock on either a FreeBSD or an Ubuntu box.
   Does Squid have any plan to support this method, or is there another solution?



Regards
Simon
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users




