Re: Re: cache problem


On 12/06/2014 9:21 a.m., babajaga wrote:
>>
>> What is your
>> "maximum_object_size_in_memory"
> 
> The default value: 512 kB
> 
> 
> Increase it above 82 MB. Usually squid 2.7 keeps only in-transit objects
> in memory, then in the memory cache, until they are swapped out to disk
> later. So this might inhibit the swap-out to disk, because the object was
> never cached in memory first.

It should have a disk file opened for it and filled as it arrives.


I am thinking the described behaviour matches what Squid versions older
than 3.5 do if asked to download multiple copies of a file before the
first copy has been cached and made available for transfer...

Each time Squid is required to download a file it begins to do so. If a
second request arrives before the first has finished, it is also a MISS
and a new download is started.

When the first request finishes, its response becomes available for
future requests to HIT on.

When the second (MISS) request finishes downloading, it *replaces* the
first one. This is where things can get a little strange...

 1) if the second response is a full object to store, then it simply
takes over as the source copy for future HITs.

 2) if the second response was a range (not cacheable), then the object
also gets released, because range responses are not cacheable by Squid yet.


#2 may occur on Range request replies, or in download-aborted situations
(incomplete object "range"), which is what is being described by Karl.

It is correct for Squid to drop the cached file, as the original copy is
now known to have been updated/replaced by a copy which it cannot cache.


To solve this, I suggest you try setting:
 range_offset_limit 100 MB
 quick_abort_min -1


Also, "collapsed_forwarding on" may help reduce the duplicate transfers
being made in the first place.
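Pulled together as a squid.conf fragment (the 100 MB value is
illustrative; size it to comfortably exceed your largest cacheable
object, and note collapsed_forwarding exists in squid 2.6/2.7 but only
returned to the Squid-3 series in 3.5):

 # Fetch the whole object even when the client sent a Range request,
 # for objects up to this size, so a complete copy can be cached:
 range_offset_limit 100 MB

 # Never abort the server-side fetch when the client disconnects,
 # so partially delivered objects still complete and get stored:
 quick_abort_min -1

 # Merge concurrent requests for the same URL into one upstream fetch:
 collapsed_forwarding on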

Amos



