Re: Squid3: 100 % CPU load during object caching


Can you share the relevant squid.conf settings? Just so I can reproduce it.

I have a dedicated testing server here on which I can test the issue, with an
8 GB archive (for example an ISO) that can be cached on AUFS/UFS and large-rock cache types.

I am pretty sure that the maximum cache object size is one thing to change; what else?
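For what it's worth, a minimal squid.conf sketch for letting multi-GB objects into the cache might look like the following. The directives are standard Squid 3.x ones, but the sizes and path are my assumptions for a test box, not settings taken from this thread:

```
# Raise the per-object cache size limit (the default is only a few MB)
maximum_object_size 10 GB

# Example AUFS cache_dir: path, size in MB, L1/L2 subdirectory counts
cache_dir aufs /var/spool/squid 20000 16 256

# Keep huge objects out of the in-memory cache
maximum_object_size_in_memory 512 KB
```

Note that in some Squid versions maximum_object_size only applies to cache_dir lines that come after it, so the ordering above is deliberate.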

From what I understand, it should not behave differently for a 2 GB cached archive than for an 8 GB one.
I have a local copy of the CentOS 7 ISO, which should be a test-worthy object.
Anything more you can add to the test subject?

Eliezer

On 22/07/2015 16:24, Jens Offenbach wrote:
I checked the bug you mentioned and I think I am confronted with the same
issue. I was able to build and test Squid 3.5.6 on Ubuntu 14.04.2 x86_64 and
observed the same behavior. I tested an 8 GB archive file and got 100 %
CPU usage and a download rate of nearly 500 KB/sec while the object was being cached.
I attached strace to the running process, but killed it after 30 minutes.
The whole transfer takes hours, although we have 1 Gbit Ethernet:

Process 4091 attached
Process 4091 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.83    2.622879           1   1823951           write
 12.29    0.408748           2    228029         2 read
  6.18    0.205663           0    912431         1 epoll_wait
  2.58    0.085921           0    456020           epoll_ctl
  0.09    0.002919           0      6168           brk
  0.02    0.000623           2       356           openat
  0.01    0.000286           0       712           getdents
  0.00    0.000071           1        91           getrusage
  0.00    0.000038           0       362           close
  0.00    0.000003           2         2           sendto
  0.00    0.000001           0         3         1 recvfrom
  0.00    0.000000           0         2           open
  0.00    0.000000           0         3           stat
  0.00    0.000000           0         1         1 rt_sigreturn
  0.00    0.000000           0         1           kill
  0.00    0.000000           0         4           fcntl
  0.00    0.000000           0         2         2 unlink
  0.00    0.000000           0         1           getppid
------ ----------- ----------- --------- --------- ----------------
100.00    3.327152               3428139         7 total
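A rough sanity check on those numbers (my arithmetic, not part of the strace output): if something close to the full 8 GB body passed through those ~1.8 million write() calls, the average write would be under 5 KB, which points at small-buffer data shuffling rather than raw disk or network bandwidth as the bottleneck:

```python
# Back-of-envelope: average bytes per write() call, assuming roughly the
# whole 8 GiB object went through the 1,823,951 writes counted above.
# (strace was detached after 30 minutes, so treat this as an upper bound
# on the true average, not a measurement.)
object_bytes = 8 * 1024**3   # 8 GiB archive, approximate
write_calls = 1_823_951      # from the strace -c summary

avg_write = object_bytes / write_calls
print(f"average write size: {avg_write:.0f} bytes (~{avg_write / 1024:.1f} KiB)")
```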

Can I do anything that helps to get rid of this problem?


Sent: Tuesday, 21 July 2015, 17:37
From: "Amos Jeffries" <squid3@xxxxxxxxxxxxx>
To: "Jens Offenbach" <wolle5050@xxxxxx>, "squid-users@xxxxxxxxxxxxxxxxxxxxx"
<squid-users@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Aw: Re: Squid3: 100 % CPU load during object caching
On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
  > Thank you very much for your detailed explanations. We want to use Squid in
  > order to accelerate our automated software setup processes via Puppet. Actually
  > Squid will host only a very small number of large objects (10-20). Its purpose
  > is not to cache web traffic or small objects.

Ah, Squid does not "host", it caches. The difference may seem trivial at
first glance but it is the critical factor between whether a proxy or a
local web server is the best tool for the job.

  From my own experiences with Puppet, yes, Squid is the right tool. But
only because the Puppet server was using relatively slow Python code to
generate objects and not doing server-side caching on its own. If that
situation has changed in recent years then Squid's usefulness will also
have changed.


  > The hit-ratio for all the hosted
  > objects will be very high, because most of our VMs require the same software
stack.
  > I will update my config according to your comments! Thanks a lot!
  > But actually I still have no idea why the download rates are so unsatisfying.
  > We are still in the test phase. We have only one client that requests a large
  > object from Squid, and the transfer rates are lower than 1 MB/sec during cache
  > build-up without any form of concurrency. Have you got an idea what could be the
  > source of the problem here? Why does the Squid process cause 100 % CPU usage?

I did not see any config causing the known 100% CPU bugs to be
encountered in your case (eg. HTTPS going through delay pools guarantees
100% CPU). Which leads me to think it's probably related to memory
shuffling. (<http://bugs.squid-cache.org/show_bug.cgi?id=3189> appears
to be the same and still unidentified.)

As for speed, if the CPU is maxed out by one particular action Squid
won't have time for much other work. So things go slow.

On the other hand Squid is also optimized for relatively high traffic
usage. For very small client counts (such as under 10) it is effectively
running in idle mode 99% of the time. The I/O event loop starts pausing
for 10ms blocks waiting to see if some more useful amount of work can be
done at the end of the wait. That can lead to apparent network slowdown
as TCP gets up to 10ms delay per packet. But that should not be visible
in CPU numbers.
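To put that 10 ms pause in perspective, here is a back-of-envelope calculation of mine (not a measurement, and per the caveat above not necessarily what is happening here, since the CPU is pegged): if every ~1500-byte packet had to sit out a full 10 ms loop pause, throughput would cap out well under 1 MB/sec, in the same range Jens reported:

```python
# Worst case: each packet waits out one full 10 ms I/O loop pause.
mtu_bytes = 1500      # typical Ethernet MTU (assumption)
pause_s = 0.010       # the 10 ms event-loop pause described above

throughput_bps = mtu_bytes / pause_s        # bytes per second
print(f"{throughput_bps / 1024:.0f} KiB/s") # roughly 146 KiB/s
```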


That said, 1 client can still max out Squid's CPU and/or NIC throughput
capacity on a single request if it's pushing/pulling packets fast enough.


If you can attach the strace tool to Squid when it's consuming the CPU,
there might be some better hints about where to look.


Cheers
Amos



_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users






