Hi Eliezer, thanks for your reply.
As you suggested, I removed all cache_dirs to verify whether the rest was
stable/fast, and I raised cache_mem to 10 GB. I didn't disable the access
logs because we really need them.
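In squid.conf terms, the test setup is essentially this minimal sketch
(everything else was left as it was):

    # memory-only test: every cache_dir line removed
    cache_mem 10 GB
    # access_log kept enabled, since we still need it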
And it is super fast; I can hardly even notice it, even though it's using
only ONE core (it isn't running as SMP):
%Cpu0  :  0,7 us,  1,0 sy,  0,0 ni, 98,3 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
%Cpu1  :  8,8 us,  5,6 sy,  0,0 ni, 76,1 id,  0,0 wa,  0,0 hi,  9,5 si,  0,0 st
%Cpu2  :  8,7 us,  4,0 sy,  0,0 ni, 83,3 id,  0,0 wa,  0,0 hi,  4,0 si,  0,0 st
%Cpu3  :  5,4 us,  3,4 sy,  0,0 ni, 86,2 id,  0,0 wa,  0,0 hi,  5,0 si,  0,0 st
%Cpu4  :  7,8 us,  5,1 sy,  0,0 ni, 73,5 id,  6,8 wa,  0,0 hi,  6,8 si,  0,0 st
%Cpu5  :  1,0 us,  1,0 sy,  0,0 ni, 98,0 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11604 proxy 20 0 11,6g 11g 5232 S 48,4 72,2 72:31.24 squid
Start Time: Wed, 24 Feb 2016 15:38:59 GMT
Current Time: Wed, 24 Feb 2016 19:18:30 GMT
Connection information for squid:
Number of clients accessing cache: 1433
Number of HTTP requests received: 2532800
Average HTTP requests per minute since start: 11538.5
Select loop called: 68763019 times, 0.192 ms avg
Storage Mem size: 9874500 KB
Storage Mem capacity: 94.2% used, 5.8% free
I don't think I had a bottleneck in the I/O itself; maybe the
hashing/searching of the cache indexes was too much for a single thread?
Best Regards,
--
Heiler Bemerguy - (91) 98151-4894
Technical Advisor - CINBESA (91) 3184-1751
On 23/02/2016 18:36, Eliezer Croitoru wrote:
Hey,
Some of the emails probably went off-list for some reason, so I'm
responding here.
Since you are having issues with the current setup, with the proxy
hitting 100% CPU and your clients/users probably suffering as a result, I
would suggest trying another approach to get a couple of things clearly
into our/your sight.
As a starting point, stop using the disk cache and make sure that plain
CPU+RAM handles the traffic properly. Only after you have confirmed that
the proxy handles the load correctly should you look at what can be done
with any form of cache_dir.
Since you have plenty of RAM and cores, see whether you still have any
CPU issues *without* any cache_dir.
If you still do, I would suggest doing two things simultaneously (a
sketch of both follows below):
- disable the access logs
- raise the number of workers from the default of 1
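For illustration, a squid.conf fragment for that test could look roughly
like this (the worker count here is only an assumed example):

    # temporary test settings (illustrative values)
    access_log none      # stop writing access logs during the test
    workers 4            # more than the default single worker; tune to the spare cores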
If the access logs are disabled and the worker count has been bumped up
to the machine's maximum and you still see the problem, you probably have
the wrong machine for the task.
If, with the access logs disabled and the worker count higher than 1 but
below the machine's maximum, the CPU percentages look reasonably
balanced, you still have a chance to match the hardware to the task.
The next step would then be to re-enable the access logs and see how the
machine holds up with only that change.
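For example, turning logging back on through Squid's logging daemon
helper might look like the fragment below (the path and log format are
assumptions):

    # re-enable access logging for the follow-up test
    access_log daemon:/var/log/squid/access.log squid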
The above method is the basic way to make sure you are on the right
track.
If you need more advice, just reply to this email.
All The Bests,
Eliezer
On 23/02/2016 21:11, Heiler Bemerguy wrote:
Thanks Alex.
We have a simple cache_dir config like this, with no "workers" defined:
cache_dir rock /cache2 80000 min-size=0 max-size=32767
cache_dir aufs /cache 320000 96 256 min-size=32768
And we are suffering from 100% CPU use by a single squid thread. We
have lots of RAM, cores and disk space... but also too many users:
Number of clients accessing cache: 1634
Number of HTTP requests received: 3276691
Average HTTP requests per minute since start: 12807.1
Select loop called: 60353401 times, 22.017 ms avg
Would getting rid of this big aufs store and spreading the cache across
many rock stores improve things here? I've already shrunk the ACLs and
patterns/regexes, etc.
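For illustration, one hypothetical way to test that idea, given that rock
cache_dirs can be shared by SMP workers while aufs dirs cannot (the
paths, sizes and worker count below are assumptions, not our current
config):

    # hypothetical split: several shared rock stores instead of one big aufs dir
    workers 3
    cache_dir rock /cache2 80000 min-size=0 max-size=32767
    cache_dir rock /cache3 80000 min-size=0 max-size=32767
    cache_dir rock /cache4 80000 min-size=0 max-size=32767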
Best Regards,
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users