Re: Long running squid proxy slows way down

Amos Jeffries wrote:
Seann Clark wrote:
All,

I am looking for ideas on ways to avoid this, as the tuning guides I have found lead me all over the place. What I am seeing is that, over time, the cache goes from being lightning fast, to being just okay, to taking 1-3 minutes before a page even starts to load, and I know this is tunable on my side. A restart of squid usually fixes it, and everything is happy again for a variable length of time. I have a tiny user base (on average 2 people), since this is a home system.




What I have:

Squid Cache: Version 2.6.STABLE22

2.7 is 5-10% faster than 2.6.
This is a lazy install, I forgot to mention: a YUM install via Fedora 9. If this is one of the things that remain an issue, I may spin my own build with the suggestions here.

configure options: '--build=i386-redhat-linux-gnu' '--host=i386-redhat-linux-gnu' '--target=i386-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--localstatedir=/var' '--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru' '--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl' '--with-openssl=/usr/kerberos' '--enable-delay-pools' '--enable-linux-netfilter' '--with-pthreads' '--enable-ntlm-auth-helpers=SMB,fakeauth' '--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge'

'--enable-useragent-log' '--enable-referer-log'

Disable all these special logs if they are not being actively used...
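For example (assuming the logs really are unused), with this packaged build the corresponding squid.conf directives can simply be left unset or commented out; a self-compiled build could also drop '--enable-useragent-log' and '--enable-referer-log' from the configure options entirely:

# log paths below are shown only for illustration; leave the directives unset to disable
# useragent_log /var/log/squid/useragent.log
# referer_log /var/log/squid/referer.log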

'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost' '--enable-underscores' '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL' '--enable-cache-digests' '--enable-ident-lookups' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--with-large-files' '--enable-follow-x-forwarded-for' '--enable-wccpv2' '--with-maxfd=16384' '--enable-arp-acl' 'build_alias=i386-redhat-linux-gnu' 'host_alias=i386-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu' 'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables' 'LDFLAGS=-pie'



Hardware:
2x 2.0 Ghz Xeon
2.0 GB RAM
3ware SATA RAID, Raid 5 across 4 discs.
Fedora 9, ext3 filesystem

There are people here who disagree, but IMO unless you are running high-end hardware RAID, kill it. Squid data is not that critical. It is better to use one cache_dir per physical disc, regardless of the disk size.
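As a minimal sketch, keeping the current diskd settings (the /cache1 and /cache2 mount points are hypothetical, one per physical disc; Squid spreads objects across all cache_dir lines):

# one cache_dir per physical disc, mount points are examples only
cache_dir diskd /cache1 40960 16 256 Q1=72 Q2=64
cache_dir diskd /cache2 40960 16 256 Q1=72 Q2=64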

For speed tuning it's worth getting some software that measures I/O wait, to see how much is happening and which app is doing it.
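For instance (iostat and vmstat come from the sysstat/procps packages; iotop is a separate install):

iostat -x 5     # per-device utilisation and average wait times
vmstat 5        # the "wa" column is CPU time stuck in I/O wait
iotop           # per-process I/O, to see which app is generating it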

I didn't mention this, but this server is also home to a firewall and an IDS subsystem; the RAID is set up to protect some data kept on it and prevent data loss. If need be I can cram another large dedicated disk into the server, since I do have room.

config items:

ipcache_size 4096

fqdncache_size is paired with this; you might need to raise it as well.
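Something like keeping the two in step, e.g.:

ipcache_size 4096
# 4096 here is only a guess, chosen to match ipcache_size
fqdncache_size 4096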

ipcache_low 90
# ipcache_high 95
ipcache_high 95
cache_mem 1024 MB
# cache_swap_low 90
cache_swap_low 90
# cache_swap_high 95
cache_swap_high 95

For a cache >1 GB the 5% difference between high/low can mean long periods spent garbage-collecting the disk storage. This is a major drag. You can shrink the gap if you want less disk delay there.
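For example, narrowing the gap from 5% to 2% (the exact values here are only illustrative):

cache_swap_low 93
cache_swap_high 95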

cache_dir diskd /var/spool/squid 40960 93 256 Q1=72 Q2=64

AUFS is around 10x faster than diskd on Linux. Give it a try.
I will see how that works out on my system.
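For reference, the switch would look roughly like this, carrying over the size and directory counts from the existing line (the Q1/Q2 options are diskd-specific and are dropped); this build already has aufs compiled in via '--enable-storeio=aufs,coss,diskd,null,ufs':

cache_dir aufs /var/spool/squid 40960 93 256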

memory_pools_limit 150 MB
store_avg_object_size 70 KB
store_objects_per_bucket 60
digest_swapout_chunk_size 202907 bytes
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
request_body_max_size 7 MB
memory_replacement_policy heap LFUDA

I also have a redirector in place, squidGuard, set to use 15 child processes to try to speed that section up a little more, with some degree of success.

Check the stats for load on each of those children. If you are getting _any_ (>0) load on the last one, increase the number.
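The cache manager should expose those per-child counters; with the bundled squidclient tool something along these lines (assuming the default port 3128) will print the redirector helper statistics:

squidclient -p 3128 mgr:redirector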


Any suggestions would be appreciated.

It may not be possible with squidGuard, but use helper concurrency wherever you are able to. It's several orders of magnitude lighter on resources and faster.
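In 2.6 that would mean something along these lines (paths are examples only; note the helper itself must speak the concurrency protocol, which is the squidGuard caveat above):

# example paths; the redirector must support concurrent requests for this to work
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 15
url_rewrite_concurrency 10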


Additionally, check the network pipe capacity. If it's full you might need to use 2 NICs to separate inbound/outbound traffic.

A single tuned instance of Squid has been known to push the limits of a 50 Mbps external link. On collapsed-forwarding cache hits it can even push past 100 Mbps.
I have a single 1 Gbps link on this server inbound from the clients, then a roughly 10 Mbps pipe out to the cable provider (the card is gigabit, the modem is 100 Mbps, the pipe is around 10 Mbps). I can team the inside NIC since it is a four-port card, though.

Amos
