@Marcus: I am not sure what exactly causes the problems, but could you please run a test with these two different settings:

cache_mem 4 GB
maximum_object_size_in_memory 1 GB

I think you will observe the behavior that I was confronted with. The bad download rates of 500 KB/sec are gone when I use the following settings:

cache_mem 256 MB
maximum_object_size_in_memory 16 MB

I think Amos has an idea what the source of the problem might be:
http://lists.squid-cache.org/pipermail/squid-users/2015-July/004728.html

Regards,
Jens

Sent: Friday, 24 July 2015 at 14:33
From: "Marcus Kool" <marcus.kool@xxxxxxxxxxxxxxx>
To: "Jens Offenbach" <wolle5050@xxxxxx>, squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Squid3: 100 % CPU load during object caching

On 07/24/2015 03:25 AM, Jens Offenbach wrote:
> I have made a quick test of Squid 3.3.8 on Ubuntu 15.04 and I get the same problem: 100 % CPU usage, 500 KB/sec download rate.
>
> Sent: Friday, 24 July 2015 at 07:54
> From: "Jens Offenbach" <wolle5050@xxxxxx>
> To: "Marcus Kool" <marcus.kool@xxxxxxxxxxxxxxx>, "Eliezer Croitoru" <eliezer@xxxxxxxxxxxx>, "Amos Jeffries" <squid3@xxxxxxxxxxxxx>, squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re: Squid3: 100 % CPU load during object caching
>
> It is not easy for me, but I have tested Squid 3.3.8 from the Ubuntu packaging on a "real" physical infrastructure. I get the same results on the physical machine (1x Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz, 32 GB RAM, 1 TB disk) where Squid is running: 100 % CPU usage, 500 KB/sec download rate. All machines are idle and we have 1 GBit ethernet.
>
> The strace log from the physical test scenario can be found here, but I think it does not differ from the "virtual" test scenario:
> http://wikisend.com/download/293856/squid.strace2
>
> @Marcus:
> Have you verified that the file does not fit into memory and gets cached on disk? On which OS is Squid running? What are your build options of Squid (squid -v)? Is it possible that the issue is not present in 3.4.12? Do we have a regression?

I screwed up earlier since maximum_object_size was too low for the test with a 1 GB file, so I did a new test. The system has 64 GB memory, and the entire file is certainly in the file system cache. The disk system is HW RAID-1 with 1 GB cache. The OS is Linux 3.10, CentOS 7 with the latest patches.

New test:
Test system: 1 CPU with 4 cores/8 threads @ 3.7 GHz, 64 GB memory, AUFS, 1 Gbit pipe, 500 Mbit guaranteed.

With Squid 3.4.12:
The 1st download starts at 90 MB/sec and halfway drops to 30 MB/sec. My guess is that the file system cache got stressed and slowed things down.
The 2nd (cached) download runs at 190 MB/sec sustained with 120% CPU time.

With Squid 3.5.6:
The 1st download runs at 90 MB/sec sustained with 80% CPU time.
The 2nd (cached) download runs at 190 MB/sec sustained with 120% CPU time.

As a comparison, I did "dd if=test of=test2 bs=4k", which uses 100% CPU time and has a throughput of 1200 MB/sec. With bs=16k the throughput is 1300 MB/sec and with bs=64k the throughput is 1400 MB/sec.
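For reference, the throughput comparison above comes down to roughly the following (a sketch only; "test" is presumably the 1 GB test file sitting in the file system cache, and only the block size varies):

  dd if=test of=test2 bs=4k    # ~1200 MB/sec, 100% CPU on this system
  dd if=test of=test2 bs=16k   # ~1300 MB/sec
  dd if=test of=test2 bs=64k   # ~1400 MB/sec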
Relevant parameters:

read_ahead_gap 64 KB
cache_mem 256 MB
maximum_object_size_in_memory 8 MB
maximum_object_size 8000 MB
cache_dir aufs /local/squid34/cache 10000 32 256
cache_swap_low 92
cache_swap_high 93
# also ICAP daemon and URL rewriter configured
debug_options ALL,1 93,3 61,9

configure options: '--prefix=/local/squid35' '--disable-ipv6' '--enable-fd-config' '--with-maxfd=3200' '--enable-async-io=64' '--enable-storeio=aufs' '--with-pthreads' '--enable-removal-policies=lru' '--disable-auto-locale' '--enable-default-err-language=English' '--enable-err-languages=Dutch English Portuguese' '--with-openssl' '--enable-ssl' '--enable-ssl-crtd' '--enable-cachemgr-hostname=localhost' '--enable-cache-digests' '--enable-follow-x-forwarded-for' '--enable-xmalloc-statistics' '--disable-hostname-checks' '--enable-epoll' '--enable-icap-client' '--enable-useragent-log' '--enable-referer-log' '--enable-stacktraces' '--enable-underscores' '--disable-icmp' '--mandir=/usr/local/share' 'CC=gcc' 'CFLAGS=-g -O2 -Wall -march=native' 'CXXFLAGS=-g -O2 -Wall -march=native' --enable-ltdl-convenience

As you can see, the cache_mem is small. If Amos finds it useful, I can do another test with a larger cache_mem.

Jens, since all your tests have a drop to 500 KB/sec, I think the cause is somewhere in the configuration (Squid and/or OS).

Marcus

> @Amos, Eliezer
> Is someone able to reproduce the disk caching effect?
>
> Regards,
> Jens
>
> Sent: Thursday, 23 July 2015 at 20:08
> From: "Marcus Kool" <marcus.kool@xxxxxxxxxxxxxxx>
> To: "Jens Offenbach" <wolle5050@xxxxxx>, "Amos Jeffries" <squid3@xxxxxxxxxxxxx>, "Eliezer Croitoru" <eliezer@xxxxxxxxxxxx>, squid-users@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re: Aw: Re: Squid3: 100 % CPU load during object caching
>
> The strace output shows this loop:
>
> Squid reads 16K-1 bytes from FD 13 (webserver)
> Squid writes 4 times 4K to FD 17 (/var/cache/squid3/00/00/00000000)
> Squid writes 4 times 4K to FD 12 (browser)
>
> But this loop does not explain the 100% CPU usage...
>
> Does Squid do a buffer reshuffle when it reads 16K-1 and writes 16K?
>
> I did the download test with Squid 3.4.12 AUFS on an idle system with a 500 Mbit connection and 1 CPU with 4 cores @ 3.7 GHz.
> The first download used 35% of 1 CPU core with a steady download speed of 62 MB/sec.
> The second (cached) download used 50% of 1 CPU core with a steady download speed of 87 MB/sec.
> I never looked at Squid CPU usage before and do not know what is reasonable, but it feels high.
>
> With respect to the 100% CPU issue of Jens, one factor is that Squid runs in a virtual machine.
> Squid in a virtual machine cannot be compared with a wget test since Squid allocates a lot of memory that the host must manage.
> This is a possible explanation for the fact that you see the performance going down and up.
> Can you do the same test on the host (i.e. not inside a VM)?
>
> Marcus
>
> On 07/23/2015 10:39 AM, Jens Offenbach wrote:
>> I have attached strace to Squid and waited until the download rate had decreased to 500 KB/sec.
>> I used "cache_dir aufs /var/cache/squid3 88894 16 256 max-size=10737418240".
>> Here is the download link:
>> http://w1.wikisend.com/node-fs/download/6a004a416f65b4cdf7f8eff4ff961199/squid.strace
>> I hope it can help you.
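For anyone who wants to capture a similar trace, attaching strace to the running Squid process would look roughly like this (a sketch only; the process name and output path are assumptions, not the exact command that was used):

  strace -f -tt -o /tmp/squid.strace -p $(pgrep -o squid)
  # on Ubuntu the process may be named squid3
  # start the slow download, wait until the rate drops, then stop strace with Ctrl-C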
>> Sent: Thursday, 23 July 2015 at 13:29
>> From: "Marcus Kool" <marcus.kool@xxxxxxxxxxxxxxx>
>> To: "Jens Offenbach" <wolle5050@xxxxxx>, "Eliezer Croitoru" <eliezer@xxxxxxxxxxxx>, "Amos Jeffries" <squid3@xxxxxxxxxxxxx>, squid-users@xxxxxxxxxxxxxxxxxxxxx
>> Subject: Re: Squid3: 100 % CPU load during object caching
>>
>> I am not sure if it is relevant, maybe it is:
>>
>> I am developing an ICAP daemon, and after the ICAP server sends a "100 continue"
>> Squid sends the object to the ICAP server in small chunks of varying sizes:
>> 4095, 5813, 1448, 4344, 1448, 1448, 2896, etc.
>> Note that the interval between receiving the chunks is 1/1000th of a second.
>> It seems that Squid forwards the object to the ICAP server every time it receives
>> one or a few TCP packets.
>>
>> I have a suspicion that in the scenario of 100% CPU, large numbers of write() calls and low throughput a similar thing is happening:
>> Squid physically stores a small part of the object many times, i.e. every time one or a few TCP packets arrive.
>>
>> Amos, is there a debug setting that can confirm/reject this suspicion?
>>
>> Marcus
>>
>> On 07/23/2015 04:25 AM, Jens Offenbach wrote:
>>> A test with ROCK "cache_dir rock /var/cache/squid3 51200" gives very confusing results.
>>>
>>> I cleared the cache:
>>> rm -rf /var/cache/squid3/*
>>> squid -z
>>> squid
>>> http_proxy=http://139.2.57.120:3128/ wget http://test-server/freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0.tar
>>>
>>> The download starts at 10 MB/sec and stays constant for 1 minute, then it drops gradually to 1 MB/sec and stays there for some time. After 5 minutes the download rate returns to 10 MB/sec very quickly and drops again step-by-step to 1 MB/sec. After 5-6 minutes the download rate rises again to 10 MB/sec and drops again gradually to 1 MB/sec.
>>>
>>> During caching, we have 100 % CPU usage and disk I/O that corresponds with the download rate.
>>>
>>> For further investigation I give you my build properties:
>>> squid -v
>>> Squid Cache: Version 3.5.6
>>> Service Name: squid
>>> configure options: '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/include' '--mandir=/share/man' '--infodir=/share/info' '--sysconfdir=/etc' '--localstatedir=/var'
>>> '--libexecdir=/lib/squid3' '--srcdir=.'
>>> '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
>>> '--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests'
>>> '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' '--enable-auth-digest=file,LDAP'
>>> '--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake,smb_lm' '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
>>> '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation' '--with-swapdir=/var/cache/squid3' '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid'
>>> '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector
>>> --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE
>>> -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security'
>>>
>>> Sent: Wednesday, 22 July 2015 at 21:47
>>> From: "Eliezer Croitoru" <eliezer@xxxxxxxxxxxx>
>>> To: squid-users@xxxxxxxxxxxxxxxxxxxxx
>>> Subject: Re: Squid3: 100 % CPU load during object caching
>>>
>>> On 22/07/2015 21:59, Eliezer Croitoru wrote:
>>>> Hey Jens,
>>>>
>>>> I have tested the issue with LARGE ROCK and not AUFS or UFS.
>>>> Using squid or not, my connection to the server is about 2.5 MBps (20 Mbps).
>>>> Squid is sitting on an Intel Atom with an SSD drive, and on a HIT case the
>>>> download speed is more than doubled to 4.5 MBps (36 Mbps).
>>>> I have not tried it with AUFS yet.
>>>
>>> And I must admit that AUFS beats the rock cache with speed.
>>> I have tried rock with a basic "cache_dir rock /var/spool/squid 8000" vs
>>> "cache_dir aufs /var/spool/squid 8000 16 256", and the aufs cache HIT
>>> is more than 3 times the speed rock gave with default settings.
>>>
>>> So about 15 MBps, which is 120 Mbps.
>>> I do not seem to experience what Jens does, but the 100% CPU might be because
>>> of spinning disk hangs while reading the file from disk.
>>>
>>> Amos, I remember that there were some suggestions on how to tune large rock.
>>> Any hints?
>>> I can test it and make it a suggestion for big files.
>>>
>>> Eliezer

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
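On the large-rock tuning question above: a sketch of the kind of cache_dir variants that could be compared for big files (the slot-size and max-size values are illustrative assumptions, not suggestions made in this thread):

  # baseline rock setup from the comparison above
  cache_dir rock /var/spool/squid 8000
  # possible variant to test: larger slots and an explicit object size cap (1 GB here)
  cache_dir rock /var/spool/squid 8000 slot-size=32768 max-size=1073741824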