Asking for advice on obtaining better throughput.

Hi all,

I am using one box (4 GB RAM, modern multicore CPU) to run a single instance of Squid 3.1.9 as a proxy-only (non-caching) server, serving about 2500 clients.

CPU is never over 30%, and vmstat output does not show any swapping.

1) The configuration of the instance is very simple indeed:

# ----------------------------------
# Recommended minimum configuration:
# Recommended access permissions, acls, ... etc...
........
........
#
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

cache_dir aufs /san/cache 241904 64 512   <- not used
cache deny all

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

# More rules
cache_mem 512 MB
# ----------------------------------

2) TCP is also tuned for performance (timeout optimization, etc.).
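
For reference, this is roughly the kind of sysctl tuning I mean (the sysctl names are the standard Linux ones, but the values here are purely illustrative, not necessarily what is set on the box):

# /etc/sysctl.conf excerpt (illustrative values only)
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for outgoing connections
net.ipv4.tcp_fin_timeout = 30               # close FIN_WAIT2 sockets sooner
net.ipv4.tcp_max_syn_backlog = 4096         # larger SYN backlog under load
net.core.somaxconn = 1024                   # larger accept() queue
net.core.netdev_max_backlog = 4096          # larger NIC input queue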

3) At a high-load moment I invoke the following command:

squidclient -p 8080 mgr:60min

which shows:

sample_start_time = 1294827593.936461 (Wed, 12 Jan 2011 10:19:53 GMT)
sample_end_time = 1294831195.510801 (Wed, 12 Jan 2011 11:19:55 GMT)
client_http.requests = 176.550847/sec
client_http.hits = 0.000000/sec
client_http.errors = 0.337075/sec
client_http.kbytes_in = 401.282290/sec
client_http.kbytes_out = 4950.528662/sec
client_http.all_median_svc_time = 0.121063 seconds
client_http.miss_median_svc_time = 0.121063 seconds
client_http.nm_median_svc_time = 0.000000 seconds
client_http.nh_median_svc_time = 0.000000 seconds
client_http.hit_median_svc_time = 0.000000 seconds
server.all.requests = 175.215042/sec
server.all.errors = 0.000000/sec
server.all.kbytes_in = 4939.614269/sec
server.all.kbytes_out = 226.921319/sec
server.http.requests = 167.784681/sec
server.http.errors = 0.000000/sec
server.http.kbytes_in = 4441.452123/sec
server.http.kbytes_out = 167.159676/sec
server.ftp.requests = 0.033319/sec
server.ftp.errors = 0.000000/sec
server.ftp.kbytes_in = 24.191365/sec
server.ftp.kbytes_out = 0.001944/sec
server.other.requests = 7.397043/sec
server.other.errors = 0.000000/sec
server.other.kbytes_in = 473.970780/sec
server.other.kbytes_out = 59.759977/sec
icp.pkts_sent = 0.000000/sec
icp.pkts_recv = 0.000000/sec
icp.queries_sent = 0.000000/sec
icp.replies_sent = 0.000000/sec
icp.queries_recv = 0.000000/sec
icp.replies_recv = 0.000000/sec
icp.replies_queued = 0.000000/sec
icp.query_timeouts = 0.000000/sec
icp.kbytes_sent = 0.000000/sec
icp.kbytes_recv = 0.000000/sec
icp.q_kbytes_sent = 0.000000/sec
icp.r_kbytes_sent = 0.000000/sec
icp.q_kbytes_recv = 0.000000/sec
icp.r_kbytes_recv = 0.000000/sec
icp.query_median_svc_time = 0.000000 seconds
icp.reply_median_svc_time = 0.000000 seconds
dns.median_svc_time = 0.030792 seconds
unlink.requests = 0.000000/sec
page_faults = 0.000278/sec
select_loops = 7354.368257/sec
select_fds = 6158.774443/sec
average_select_fd_period = 0.000000/fd
median_select_fds = 0.000000
swap.outs = 0.000000/sec
swap.ins = 0.000000/sec
swap.files_cleaned = 0.000000/sec
aborted_requests = 3.468483/sec
syscalls.disk.opens = 0.187418/sec
syscalls.disk.closes = 0.185474/sec
syscalls.disk.reads = 0.000000/sec
syscalls.disk.writes = 0.101622/sec
syscalls.disk.seeks = 0.000000/sec
syscalls.disk.unlinks = 0.101622/sec
syscalls.sock.accepts = 73.385407/sec
syscalls.sock.sockets = 63.842636/sec
syscalls.sock.connects = 63.166821/sec
syscalls.sock.binds = 63.842636/sec
syscalls.sock.closes = 102.078415/sec
syscalls.sock.reads = 3467.104333/sec
syscalls.sock.writes = 2584.626089/sec
syscalls.sock.recvfroms = 20.185062/sec
syscalls.sock.sendtos = 12.178285/sec
cpu_time = 709.070205 seconds
wall_time = 3601.574340 seconds
cpu_usage = 19.687785%

So, if I am reading the output correctly, the system is serving about 175 requests per second (60-minute average).
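
As a quick sanity check (my own arithmetic with bc, using the counters above), the byte rates translate to roughly 40 Mbit/s of payload in each direction through the proxy:

echo "4950.528662 * 8 / 1000" | bc -l   # client_http.kbytes_out -> ~39.6 Mbit/s towards the clients
echo "4939.614269 * 8 / 1000" | bc -l   # server.all.kbytes_in   -> ~39.5 Mbit/s from the origin servers

which is roughly consistent with the iptraf figures further below.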

4) Looking at the actual connections, I found the following (extracted from netstat output):
      8 CLOSE_WAIT
      3 CLOSING
   4216 ESTABLISHED
    100 FIN_WAIT1
     24 FIN_WAIT2
     45 LAST_ACK
      5 LISTEN
     43 SYN_RECV
     46 SYN_SENT
   2273 TIME_WAIT

There are many connections, but not so many that the system cannot handle them, I guess.
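
(For reference, the state summary above was produced with something like the following one-liner; I am quoting the exact invocation from memory.)

netstat -ant | awk '{print $6}' | sort | uniq -c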

However, network throughput is only about 50 Mbps in each direction. Here is the iptraf output (general statistics for my 1 Gbps eth0 interface):

Total rates:    92146.8 kbits/sec and 13393.6 packets/sec
Incoming rates: 43702.1 kbits/sec and 7480 packets/sec
Outgoing rates: 48464 kbits/sec and 5913.6 packets/sec
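
So, by my arithmetic, the interface itself is far from saturated; the total rate is under 10% of the nominal 1 Gbit/s:

echo "92146.8 / 1000000 * 100" | bc -l   # total kbit/s as a percentage of 1 Gbit/s -> ~9.2%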

The users, however, do not get a really good browsing experience when we reach these values (at high-load moments), and I am not able to discover where the bottleneck is (Squid itself, the OS, the network, drivers, parameter tuning).

Any idea on how to get better throughput with this equipment? Any advice about subsystem configuration, or the general configuration of the Squid software?

Thank you in advance for your help, and congratulations on your great work.

--
Víctor J. Hernández




