On 10/03/2012 3:27 a.m., David Touzeau wrote:
Thanks Amos
To answer:
I have plenty of space left on the disk:
/dev/sda1 452G 202G 228G 48% /
"This is not about total free space on disk. It is about free space in
the small part of disk you have configured Squid to use as cache. "
Sure, but /var/cache is stored on the / partition.
There is only one partition, allowing use of 400G, so there is a
minimum of 200GB free.
What do you recommend? Increasing all the caches in the configuration file?
I was just pointing out that you have configured Squid to use no more
than 17GB of *cache*. So the disk having 200GB free is not relevant to
how full the cache is.
To make an analogy: using a 56Kbps modem on a broadband-enabled phone
line is not going to get you Mbps speeds. You get at most the maximum
of what the modem is capable of.
"From the above it appears that each worker has roughly 4GB, and they
all share a 1 GB store. ~5GB for each, with a total of *only* 15.6 GB
of disk space permitted to be used. Yet your disk listing earlier said
around 200 GB was used.
This looks a lot like one of the side effects of disk corruption fixed
in 3.2.0.15. Did you have the bug 3441 fixes in your previous Squid? "
I did not know about the fix you mention; the latest version was
squid-3.2.0.15-20120302-r11519.
Do you recommend cleaning and rebuilding the caches?
Um. I must be confused about what you were saying then.
Were you actually running squid-3.2.0.15 (or one of the dated
bundles) previously when this "too big" cache was created?
*If* you were running anything older than 3.2.0.15 or 3.1.19 release
bundles, then you need to erase the swap.state files from each cache_dir
(or the whole thing) while Squid is shut down. Then start Squid. You will
get one 'DIRTY' rebuild, possibly with erasures, then things should work
again fine.
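The erase procedure described above can be sketched in shell. The paths are taken from the squid.conf quoted later in this thread; verify them against your own cache_dir lines before running anything.

```shell
# Sketch of the swap.state erase procedure described above.
# CACHE_DIRS lists the cache_dir paths from the squid.conf quoted in
# this thread; adjust them to match your own configuration.
# IMPORTANT: Squid must be fully stopped (e.g. `squid -k shutdown`, then
# wait for all kid processes to exit) before removing the index files.
CACHE_DIRS="/var/cache/squid /var/cache/squid2-1 /var/cache/squid2-2 \
/var/cache/squid2-3 /var/cache/squid2-4"
for d in $CACHE_DIRS; do
    # swap.state is the on-disk cache index; swap.state.new is a
    # partially written replacement index, if one exists.
    rm -f "$d/swap.state" "$d/swap.state.new"
done
# On the next start, Squid will log one 'DIRTY' rebuild per cache_dir.
```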
If you were running 3.2.0.15 while that cache was created, something
else must be going on. I'm not sure what though.
Amos
On 09/03/2012 13:35, Amos Jeffries wrote:
On 10/03/2012 12:28 a.m., David Touzeau wrote:
Dear
I have upgraded my Squid 3.2.0.15 to Squid 3.2.0.16.
My server has a load between 4 and 10.
This load will be a side effect of the erasures underway. See below...
ps aux
squid 10495 69.2 0.8 878308 34624 ? Dl 14:41 21:26
(squid-3) -sYC -f /etc/squid3/squid.conf
squid 10496 19.5 0.8 877012 36300 ? Sl 14:41 6:02
(squid-1) -sYC -f /etc/squid3/squid.conf
root 16870 0.0 0.0 849032 2296 ? Ss 13:38 0:00
/usr/sbin/squid -sYC -f /etc/squid3/squid.conf
squid 16872 0.0 0.3 853476 12624 ? S 13:38 0:03
(squid-coord-5) -sYC -f /etc/squid3/squid.conf
squid 17707 22.4 0.9 879772 39120 ? Sl 14:43 6:21
(squid-2) -sYC -f /etc/squid3/squid.conf
squid 26988 0.3 0.7 864476 30760 ? S 15:10 0:00
(squid-4) -sYC -f /etc/squid3/squid.conf
In cache.log there are many events:
2012/03/09 14:38:15 kid3| WARNING: Disk space over limit:
220212628.00 KB > 5120000 KB
2012/03/09 14:38:26 kid3| WARNING: Disk space over limit:
222510356.00 KB > 5120000 KB
2012/03/09 14:38:37 kid3| WARNING: Disk space over limit:
225427248.00 KB > 5120000 KB
What does it mean?
It means you have a cache_dir configured for 5000 MB of space.
Something has made Squid worker #3 identify that it has over 210 GB
of data on disk.
Resulting in urgent purging of files to make room for new traffic. You
can see that in store.log below.
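As a sanity check of those numbers, here is a small sketch using only the values quoted from the log above and the squid.conf later in this thread:

```python
# Sanity check of the figures in the warning above (all values quoted
# from the cache.log lines and the squid.conf in this thread).
cache_dir_mb = 5000                # 4000 MB worker dir + 1000 MB shared dir
limit_kb = cache_dir_mb * 1024     # Squid prints the limit in KB

reported_kb = 220_212_628          # usage reported by kid3 in the log

print(limit_kb)                    # 5120000, matching the warning
print(reported_kb / 1024 / 1024)   # roughly 210 GB indexed on disk
```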
I have plenty of space left on the disk:
/dev/sda1 452G 202G 228G 48% /
This is not about total free space on disk. It is about free space in
the small part of disk you have configured Squid to use as cache.
The store.log has grown to more than 10 GB with these events:
1331289752.882 RELEASE -1 FFFFFFFF
AEB65290D03E08DD782A337A15C479A4 ? ? ? ?
?/? ?/? ? ?
1331289752.882 RELEASE -1 FFFFFFFF
535E450BD0DE1D710ADC738CE0E08FF1 ? ? ? ?
?/? ?/? ? ?
<snip>
Here is my caches configuration file:
#--------- Multiple cpus --
workers 4
if ${process_number} = 1
cache_dir aufs /var/cache/squid2-1 4000 128 512
endif
if ${process_number} = 2
cache_dir aufs /var/cache/squid2-2 4000 128 512
endif
if ${process_number} = 3
cache_dir aufs /var/cache/squid2-3 4000 128 512
endif
if ${process_number} = 4
cache_dir aufs /var/cache/squid2-4 4000 128 512
endif
#------------------
cache_dir aufs /var/cache/squid 1000 16 256
# --------- OTHER CACHES
From the above it appears that each worker has roughly 4GB, and they
all share a 1 GB store. ~5GB for each, with a total of *only* 15.6 GB
of disk space permitted to be used. Yet your disk listing earlier
said around 200 GB was used.
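A back-of-envelope check of those totals, using the MB figures from the cache_dir lines quoted above:

```python
# Back-of-envelope totals from the cache_dir lines above (sizes in MB,
# as given in the quoted squid.conf).
worker_dirs = [4000] * 4            # one 4000 MB aufs dir per worker
shared_dir = 1000                   # the shared /var/cache/squid dir

per_worker_mb = worker_dirs[0] + shared_dir   # space visible to one worker
worker_total_gib = sum(worker_dirs) / 1024    # the worker dirs combined

print(per_worker_mb)       # 5000 MB, i.e. the ~5GB per worker quoted above
print(worker_total_gib)    # 15.625, i.e. the "15.6 GB" figure quoted above
```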
This looks a lot like one of the side effects of disk corruption
fixed in 3.2.0.15. Did you have the bug 3441 fixes in your previous
Squid?
Amos