
Re: squid not caching big files.

On 16/02/2014 3:23 a.m., Михаил Габеркорн wrote:
> Using FreeBSD 10.0-RELEASE amd64 and squid 3.3.11 via port
> /usr/ports/www/squid33 on ZFS filesystem.
> 
> 
> 1392376616.882 SWAPOUT 00 000002A6 D27E641E51B22F49E25D124338E2D388
> 200 1392390889 1384782870        -1 video/mp4 245329905/245329905 GET
> http://fs171.www.ex.ua/load/e291494a3e02be
> e12326fbf54bee8c6e/82763797/Xonotic%200.7%20-%20Mossepo%20(POV)%20vs%20ZeRoQL%20-%20Silent%20Siege.mp4
> 
> 
> 1392376981.320 RELEASE -1 FFFFFFFF 487311FC22CB4BF63943CD89B8516174
> 200 1392391193 1391569953        -1 video/x-msvideo
> 843638784/843638784 GET http://fs75.www.ex.ua/load/571c72f6f
> b348a1951a6a02d01c9205a/93458980/1x01%2602%20-%20Book%20of%20the%20Sword.avi
> 
> 

These store.log lines only say that the existing cache entry was
removed. They say nothing about *why*. For example, it is likely
because a new version of that object was added to the cache in
another location.
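When sifting through many such entries it can help to split the
whitespace-delimited fields out by name. A minimal sketch in Python,
with the field order inferred from the sample lines above (the field
names are my own labels for illustration, not official ones, and the
short URL is hypothetical):

```python
# Assumed field order, based on the sample store.log lines quoted above.
FIELDS = [
    "timestamp", "action", "dir_no", "file_no", "key",
    "status", "date", "lastmod", "expires",
    "content_type", "size", "method", "url",
]

def parse_store_line(line: str) -> dict:
    """Split one store.log line into a dict of named fields."""
    return dict(zip(FIELDS, line.split()))

entry = parse_store_line(
    "1392376981.320 RELEASE -1 FFFFFFFF 487311FC22CB4BF63943CD89B8516174 "
    "200 1392391193 1391569953 -1 video/x-msvideo "
    "843638784/843638784 GET http://fs75.www.ex.ua/example.avi"
)
print(entry["action"], entry["size"])
```

Filtering on `entry["action"]` (SWAPOUT vs RELEASE) then makes it easy
to pair up the store and removal events for a given URL.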

Do you have the access.log lines for these same URLs?


> thanks for comments on the options refresh and cache_dir.
> 
> but as I understand it, that cannot be the reason items are not stored in the cache?
> 

They could be.

* If the refresh_pattern parameters cause the storage timespan to be a
negative value, the objects are marked as already expired and could be
erased at any time.


* If the cache_dir size 32-bit wraps, making Squid believe the cache's
total size is ~200MB, Squid will limit objects stored there to that value.

* If the cache fills up, either to its 2^27 object limit or its disk
capacity, then no new objects can be stored there without erasing some
existing object(s).

* These last two could act together at some non-200MB size to cause
erasures.
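To see how such a wrap could land near ~200MB: if the total size were
tracked in KB in an unsigned 32-bit field (an assumption here purely
for illustration), values would wrap modulo 2^32 KB, i.e. 4 TiB:

```python
# Illustration only: assumes the size is held in KB in an unsigned
# 32-bit field, so it wraps modulo 2**32 KB (= 4 TiB).

def wrapped_kb(configured_kb: int) -> int:
    """Value an unsigned 32-bit KB counter would actually hold."""
    return configured_kb % 2**32

four_tib_kb = 2**32                      # 4 TiB expressed in KB
configured = four_tib_kb + 200 * 1024    # 4 TiB + 200 MB, in KB
print(wrapped_kb(configured))            # 204800 KB, i.e. ~200 MB
```

So a cache_dir sized a little over the wrap point would behave as if
it were only a couple of hundred MB, matching the symptom above.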

Amos




