Hey Yuri,

The issue is not money alone… To my understanding, Squid is written in C++ and is very complex, so it requires more than basic-level knowledge. However, I can clearly say that it is not a big issue to use the current Squid APIs/interfaces (ICAP/eCAP) to implement a solution that will act like the nginx "module". I do not know how long it would take or how much it would cost, since it requires time… This time is needed for:

- Learning/relearning
- Identifying and predicting the different cases
- Basic testing for the different cases
- Implementing a basic structure
- Testing
- (in a loop and/or a couple of trees…)

From my point of view, compared to "ransom" or any similar idea, anyone who is going to write a piece of software implementing this specific idea should be able to take more than just this onto his shoulders. Just to illustrate: imagine some nice guy pops into the Boeing or Red Hat offices and leaves a DiskOnKey at the front desk with a note, "This flash drive contains an idea that will bring you lots of money" (not saying the current idea itself is bad or wrong…). What would these companies do? Would they put a team of engineers on it in a second? I do believe they are not "hot-headed" enough to act in a second.

I received a link a couple of years ago from Amos for an eCAP module:
https://github.com/creamy/ecap-mongo
It does a couple of very interesting things, but despite the fact that I learned to program in C and C++, I could not understand and/or implement a Store API that could be used by Squid.

However, I did implement this: Windows Updates Caching Stub zone
[ http://www1.ngtech.co.il/wpe/?page_id=301 ]
While implementing that idea, one of the main things I noticed is that trying to "catch" all traffic onto disk is the wrong way to define the goal. Indeed, it can be written to happen "automatically", but I will ask: targeting a specific site is one thing, but trying to catch them all is kind of like tying your feet to a door with a rope and then slamming the door in the other direction.

Thanks,
Eliezer
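To make the "specific targeted site" point concrete, here is a minimal squid.conf sketch of that approach: scope range_offset_limit to one destination with an ACL instead of applying it to all traffic. This is an illustration only; the ACL name and domain list are placeholders, and the optional ACL argument to range_offset_limit is only available in reasonably recent Squid releases.

  # Fetch the whole object when a Range request targets the chosen site,
  # rather than for every Range request Squid sees.
  acl targeted_site dstdomain .windowsupdate.com .update.microsoft.com
  range_offset_limit none targeted_site

  # Keep downloading even if the client aborts early, so the completed
  # object can answer later Range requests as cache HITs.
  quick_abort_min -1 KB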
From: squid-users [mailto:squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of Yuri Voinov wrote:

>> Hi, list,
>>
>> Code 206 is the biggest pain for our forward proxy. Squid uses
>> "range_offset_limit" to process byte-range requests. When it is set to
>> "none", it has 2 well-known issues:
>> 1. It boosts the traffic on the server side; we observed it amplified
>> 500% compared to the client side on our box.
>
> To which the answer currently is to see if enabling collapsed_forwarding
> works okay for your needs.
>
>> 2. It always fails on a lossy link, and Squid refetches it again and
>> again.
>> I've noticed that nginx has officially supported byte-range caching
>> since 1.9.8 via the ngx_http_slice_module.
>> (1.
>> http://nginx.org/en/docs/http/ngx_http_slice_module.html?_ga=1.140845234.106894549.1470474534
>>
>
> So? What relevance do another piece of software's features have to Squid's behaviour?
>
> ... To be fair, the storage code in Squid is a bit hairy in places, so
> paying for it to be done is unlikely to be cheap. But still, waiting
> won't fix the problem. We nearly got there in Squid-2.7, but the
> experiment there could not be completely ported across to Squid-3 and
> had some important problems anyway.
>
>> 2. https://www.nginx.com/resources/admin-guide/content-caching/ ).
>> The solution is not perfect, but it's really more usable than
>> "range_offset_limit". The secret is that it uses fixed-size objects
>> instead of the whole file, and it can alter the request's range offset
>> before passing it to the server;
>
> Ah, that's what range_offset_limit does today. It updates the server request
> to say "deliver all of it" and stores the response in a file the size of
> the whole response the server says will be arriving.
>
> The reason you are seeing that 500% increase in bandwidth is that
> multiple Range requests arrive while the initial part of the first
> response is still arriving back to Squid, so 5 of them get sent through
> to the server. When that first one finishes, its object becomes
> available for use as a HIT and follow-up Range requests get bits of it
> (so you don't see a 600% -> millions of % bandwidth increase).
>
> collapsed_forwarding alters this by letting the first response be used by
> other requests while it is still incomplete. But YMMV regarding the
> savings, and CF affects all traffic, so it may cause behaviours you don't
> want on other types of request. Worth a try though.
>
>> Perhaps forwarding the original range offset and caching a part of
>> the object with a range key is a better idea. Squid should then know how
>> to reassemble those objects and process requests with a Range header.
>> Caching fixed-size objects may also benefit disk I/O.
>> It sounds similar to the big-rock DB concept, though I have not had
>> success with rock on FreeBSD nor on an Ubuntu box.
>> Does Squid have a plan to support this method, or is there another solution?
>>
>
> Squid is software. It doesn't have its own plans (at least I hope not).
>
> I'm not aware of any plans specifically to add Range caching any time
> soon. Ideas for how to do it get thrown around on squid-dev a couple of
> times a year, so lots of ideas, but so far nothing concrete has come out
> of it. Yes, rock and/or memory caches look like the most easily adapted
> cache types for storing partial objects in; someone still has to do the
> actual coding work though.
>
> Amos
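For reference, the slice setup described in the nginx documentation linked above looks roughly like the following sketch (the cache-zone name, cache path and upstream address are placeholders, not a tested configuration). Each 1 MB slice is fetched from the origin with its own Range request and cached under a key that includes $slice_range, which is the behaviour being asked about on the Squid side:

  # nginx.conf fragment, per the ngx_http_slice_module documentation
  proxy_cache_path /var/cache/nginx/slices keys_zone=slices:10m;

  server {
      listen 8080;

      location / {
          slice              1m;                             # fixed slice size
          proxy_cache        slices;
          proxy_cache_key    $uri$is_args$args$slice_range;  # one cache entry per slice
          proxy_set_header   Range $slice_range;             # forward the slice range upstream
          proxy_http_version 1.1;
          proxy_cache_valid  200 206 1h;                     # cache 206 partial responses too
          proxy_pass         http://origin.example.com;
      }
  }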
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users