On 19/11/10 00:47, Michał Prokopiuk wrote:
On Thursday, 18.11.2010 at 11:18, Amos Jeffries wrote:
Okay. The obvious config reasons for slowness are gone. So the next
thing is to find out exactly what Squid is doing.
* you can check with strace to see what's taking up the CPU.
Not possible; when I run strace the load grows very fast and I can't do anything.
Pity. That can pinpoint straight to the CPU hog function most times.
Has the box started using swap memory space now? That can drastically
slow down Squid's index lookups or its buffering of relayed information.
No, the server has about 2 GB of free RAM (cached).
Is RAID still running trying to duplicate the random cache IO actions?
No, I removed the RAID and created two cache_dirs:
cache_dir aufs /var/spool/squid-cache/cache 10000 60 100
cache_dir aufs /var/spool/squid-cache/sda6/cache 10000 60 100
Have you noticed whether the problem appears and lasts for around 2 hours,
then goes away? That would indicate your DNS TTL overrides forcing Squid
to try the wrong IPs for some site. CDN-hosted sites (e.g. Akamai and
YouTube) can change their IPs with no notice when there are routing
problems to work around.
The problem appears after about 30 min to 1 h. I have a local dnscache.
I have one new piece of information: when I decreased maximum_object_size from 100 MB to 20 MB, Squid worked for two days without any problem, and then again started using a lot of CPU. With a 100 MB maximum_object_size, Squid works stably only for the first few hours.
Aha! A clue!
So, does anything show up if you grep your access.log for requests bigger
than 20 MB?
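A minimal sketch of that log search, assuming Squid's default native access.log format, where field 5 is the reply size in bytes. The sample log lines and the /tmp path are made up for illustration; point awk at your real access.log instead.

```shell
# Write two fabricated access.log lines in Squid's native format
# (field 5 is the reply size in bytes).
cat > /tmp/access.log.sample <<'EOF'
1290100000.123    150 192.168.0.5 TCP_MISS/200 1048576 GET http://example.com/small.bin - DIRECT/198.51.100.7 application/octet-stream
1290100050.456  90000 192.168.0.5 TCP_MISS/200 52428800 GET http://example.com/big.iso - DIRECT/198.51.100.7 application/octet-stream
EOF

# Print only the requests whose reply size exceeds 20 MB.
awk '$5 > 20*1024*1024' /tmp/access.log.sample
```

Only the big.iso line (50 MB) should be printed; the 1 MB small.bin line is filtered out.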
Can you increase it in steps and see if there is a fixed limit that triggers it?
"debug_options ALL,0 20,2" may give some indication of what's going on
with object storage.
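The two suggestions above could be combined in squid.conf roughly like this; the 40 MB step is an illustrative value, not taken from the thread:

```
# Raise the limit in steps (e.g. 20 MB -> 40 MB -> 60 MB ...) and note
# at which value the CPU problem returns:
maximum_object_size 40 MB

# Keep normal logging, but log section 20 (Storage Manager) at level 2
# in cache.log:
debug_options ALL,0 20,2
```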
Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.9
Beta testers wanted for 3.2.0.3